The software verification strategy is the basis for all activities in the software unit verification process and is therefore also the basis for an assessment. The software verification strategy is required by Base Practice 1: Develop Software Unit Verification Strategy including Regression Strategy.
[This is Part 2 of a three-part series of articles. You can check out Part 1 and Part 3 here.]
For an assessor, a unit verification strategy must include at least the following 10 aspects:
1. Definition of all units. The definition can be generic or specific. Make sure that units are uniquely identifiable. In the simplest case, there is a list of functions or files that are classified as units.
- You should be able to answer the following question: how do you ensure that all units are included in the list of functions? This can be done, for example, by periodically checking the list or through automated updating of the list.
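One possible shape for such an automated update is a small script that scans the code base and diffs the result against the documented unit list. The sketch below is illustrative only: the "one .c file = one unit" convention and the directory layout are assumptions of this sketch, not requirements of Automotive SPICE.

```python
from pathlib import Path


def collect_units(src_root: str) -> list[str]:
    """Collect all .c files under src_root as candidate units.

    The convention 'one .c file = one unit' is an illustrative
    assumption; adapt the glob pattern to your own unit definition.
    """
    return sorted(p.relative_to(src_root).as_posix()
                  for p in Path(src_root).rglob("*.c"))


def find_undocumented_units(src_root: str, documented: set[str]) -> list[str]:
    """Return units present in the code base but missing from the
    documented unit list -- candidates for the next periodic review."""
    return [u for u in collect_units(src_root) if u not in documented]
```

Run periodically or in CI, a check like this flags newly added source files that have not yet been classified as units.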
2. Definition of how specific requirements related to verification and testing are covered. This means functional, non-functional and process requirements.
- You should have an overview of all requirements that exist for the entire project. Supplement this with the information that has an impact on unit verification. These are generally requirements from Automotive SPICE, ISO 26262 or other safety standards, cross-cutting requirements specifications, laws, stakeholders, MISRA, etc. It can be helpful to explicitly include individual requirements in the verification strategy and briefly document your solution for implementing them.
3. Definition of methods for the development of test cases and test data derived from the detailed design and non-functional requirements.
- The strategy should explain which methods you use for this, e.g. forming equivalence classes for all interfaces, positive & negative tests, etc.
- If you have generic unit definitions, you will probably use generic definitions here as well. If you have constraints or variants, for example QM and functional safety units, the expectation is that you can also show an overview per variant, e.g. QM versus functional safety units. This expectation applies analogously to all other variants. A generic unit definition can thus increase the test effort.
- To deal with this aspect, we recommend a prior analysis of all requirements and a derivation of the most suitable methods based on this analysis.
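As a concrete illustration of equivalence classes and boundary values, consider a hypothetical unit that clamps a sensor reading to a valid range. The function name, the range and the test data are invented for this sketch, not taken from any standard:

```python
def clamp_temperature(raw: int) -> int:
    """Hypothetical unit under test: clamp a raw sensor reading to the
    valid temperature range [-40, 125] degrees C (assumed spec)."""
    return max(-40, min(125, raw))


# Test data derived from equivalence classes and boundary values:
# invalid-low | valid | invalid-high, plus both range boundaries.
TEST_CASES = [
    (-100, -40),   # equivalence class: below range (negative test)
    (20, 20),      # equivalence class: within range (positive test)
    (200, 125),    # equivalence class: above range (negative test)
    (-40, -40),    # lower boundary value
    (125, 125),    # upper boundary value
]


def run_unit_tests() -> bool:
    """Execute all derived test cases against the unit."""
    return all(clamp_temperature(raw) == expected
               for raw, expected in TEST_CASES)
```

Documenting the strategy then means stating that test data for each interface is derived this way, rather than listing individual test cases.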
4. Definition of the methods and tools for static verification and reviews.
5. Definition of each test environment and of each test methodology used.
- Off-the-shelf tools implement methodologies. Refer to existing tool vendor documentation to save time.
- Use tools that cover as many methods and technologies as possible. This saves project costs for training and licenses: with a few widely usable tools, employees can be re-prioritized more quickly and familiarization with new tooling becomes unnecessary.
- Use established methods, such as equivalence class or limit tests for test data collection.
- Use tools that relieve you of the maximum amount of work for recurring activities, e.g. by automatically generating reports and traceability.
- Automate as much as possible.
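As an example of the kind of recurring activity worth automating, a traceability report can be generated from existing test-case-to-requirement links. The function below is a minimal sketch; the identifier formats are invented:

```python
def traceability_matrix(test_to_reqs: dict[str, set[str]]) -> dict[str, list[str]]:
    """Invert test-case -> requirement links into a requirement -> test-cases
    view, the core of an automatically generated traceability report."""
    matrix: dict[str, list[str]] = {}
    for test, reqs in test_to_reqs.items():
        for req in reqs:
            matrix.setdefault(req, []).append(test)
    # Sort for stable, diff-friendly report output.
    return {req: sorted(tests) for req, tests in sorted(matrix.items())}
```

Generated on every build, such a report also makes requirements without any linked test case immediately visible.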
6. Definition of the test coverage depending on the project and release phase.
- Nobody expects you to reach 100% coverage on day 1. Use the duration of the project and show achievable ramp-up curves.
- Derive what you need for this in terms of personnel or other resources.
- Review your strategy and adjust it if there are deviations. Make changes according to the process (SUP.10 Change Request Management).
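Such a build-up over the project duration can be captured as a simple coverage plan. The milestone names and target values below are purely illustrative assumptions, not normative figures:

```python
# Illustrative ramp-up plan: structural coverage targets per release phase.
# Milestone names and percentages are assumptions, not normative values.
COVERAGE_PLAN = [
    ("A-sample", 0.40),
    ("B-sample", 0.70),
    ("C-sample", 0.90),
    ("SOP",      1.00),
]


def required_coverage(milestone: str) -> float:
    """Look up the coverage target agreed for a given release phase."""
    for name, target in COVERAGE_PLAN:
        if name == milestone:
            return target
    raise KeyError(milestone)
```

A table like this in the strategy document, together with the resource derivation, is usually enough to show the assessor a planned build-up.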
7. Definition of the test start conditions and test end criteria for dynamic unit tests.
- Which conditions lead to the start of which activities.
- Are there dependent sequences?
- When do activities terminate, and when do they restart? How do you determine this?
- When do you stop testing? It is best to use not temporal but technical or measurable criteria (e.g. coverage metrics, whether all requirements are tested). Argue why these metrics are sufficient.
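Measurable end criteria of this kind can be checked mechanically. The following sketch assumes the metrics are already available as ratios; the metric names are examples, not prescribed ones:

```python
def exit_criteria_met(metrics: dict[str, float],
                      thresholds: dict[str, float]) -> tuple[bool, list[str]]:
    """Evaluate measurable test-end criteria.

    Keys such as 'statement_coverage' or 'requirements_tested' are
    illustrative; use whatever criteria your strategy defines.
    Returns (all criteria met, list of unmet criteria).
    """
    unmet = [name for name, target in thresholds.items()
             if metrics.get(name, 0.0) < target]
    return (not unmet, unmet)
```

Wired into the build, such a check turns the documented end criteria into an automatic release gate.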
8. Documentation of sufficient test coverage of each test level, if the test levels are combined.
- If you combine test levels, you must justify how you determine the level of coverage. Coverage can mean code coverage, interface coverage, and requirements coverage. A coherent rationale would be, for example, that you move test content to higher levels because you can assign test cases and requirements more meaningfully at this level.
- You often get coverage targets from standards and other guidelines. ISO 26262 sets targets for code coverage of safety-related code portions. ISO 26262 implicitly requires high coverage with the following note: “No target value or a low target value for structural coverage without justification is considered insufficient.”
- In general, it is best to substantiate all coverage target values below 100%. This can most easily be done using release schedules and predetermined prioritizations of requirements or features.
- Pro-tip: Reference or link relevant requirements from the source to the appropriate section in the software unit verification strategy.
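When test levels are combined, requirements coverage can be computed across levels: a requirement counts as covered if at least one level verifies it. A minimal sketch (level and requirement names are invented):

```python
def combined_requirements_coverage(per_level: dict[str, set[str]],
                                   all_reqs: set[str]) -> float:
    """Requirements coverage across combined test levels: a requirement
    counts as covered if at least one level verifies it."""
    covered = set().union(*per_level.values()) & all_reqs
    return len(covered) / len(all_reqs)
```

The same union-based argument is what the written rationale in the strategy needs to make explicit: which level contributes which share of the coverage.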
9. Procedure for dealing with failed test cases, failed static checks, and check results.
- This procedure should relate to the Problem Resolution Management strategy (ASPICE SUP.9) and be consistent with it.
- You should describe who is informed, as well as how and when to do what.
- You should also describe what information/data you will share in the process.
10. Definition for performing regression testing.
- Regression testing refers to the re-execution of static and dynamic tests after changes have been made to a unit. The goal is to determine whether unchanged portions of a unit continue to work.
- In automated testing, a regression test is done at the push of a button.
- In Continuous Integration / Continuous Testing environments, it is sufficient to indicate that regression testing is ensured by “nightly builds” or other automatisms.
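The comparison at the heart of a regression run can be sketched as follows: given the verdicts of a baseline run and a current run, report every test case that used to pass and no longer does. The verdict strings and test-case names are assumptions of this sketch:

```python
def find_regressions(baseline: dict[str, str],
                     current: dict[str, str]) -> list[str]:
    """Test cases that passed before a change but no longer pass --
    evidence that unchanged behavior of the unit has broken."""
    return sorted(t for t, verdict in baseline.items()
                  if verdict == "PASS" and current.get(t) != "PASS")
```

A nightly build that runs the full suite and evaluates exactly this difference is already a defensible regression testing setup in an assessment.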
Notes on the assessment.
If you do not cover all 10 aspects mentioned above in a Software Unit Verification Strategy, you must expect not to receive the rating “Fully” for BP1 “Develop Software Unit Verification Strategy including Regression Strategy”. Not fulfilling points 2 to 4 will result in BP1 being rated Partly or worse.
Implicitly, the assessor also expects all personnel involved in the process to have knowledge of the contents of the Software Unit Verification Strategy. If there is no evidence of this, e.g. in the form of emails, logs or similar, a tester may be called into the assessment and their knowledge determined in an interview.
In Automotive SPICE, the higher-level Work Product Verification Strategy (WP ID 19-10) is characterized in more detail. It requires scheduling of activities, handling of risks and constraints, degree of independence in verification and other aspects for a verification strategy.
Don’t miss the deep dive into developing criteria for unit verification: Follow Us on LinkedIn.