
Define your verification criteria correctly to pass your next Automotive SPICE Software Unit Verification assessment.

Part 3: Automotive SPICE Process Software Unit Verification

How do you define the criteria for verification in Base Practice 2? With the strategic guidelines defined in Base Practice 1, you are ready to proceed to the next step. This Base Practice applies to both static and dynamic tests. The expected result is specific test cases for the units and the definition of static checks at unit level. In this article we cover Base Practices 2 to 7.

[This is Part 3 of a three-part series. You can find Part 1 and Part 2 here.] 

Base Practice 2: Develop Criteria for Unit Verification 

The ASPICE process expects that criteria are defined to ensure that the unit does what is described in both the software detailed design and the non-functional requirements.  

All work products are expected to be produced as described in the Software Unit Verification Strategy.

For the static tests, for example, the following criteria shall be defined (a code sketch follows the list):

  • Type of static measurements (e.g., measurement of cyclomatic complexity) and evaluation criteria for success (e.g., measured cyclomatic complexity is less than 50) 
  • Compliance with coding standards (e.g., MISRA) 
  • Compliance with design patterns agreed in the project 
  • Non-functional, technical criteria, such as resource consumption (RAM/ROM) 
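
As a minimal sketch of what such criteria could look like in machine-checkable form, with generic defaults plus category-specific overrides (all names and thresholds below are illustrative, not values prescribed by Automotive SPICE):

```python
# Illustrative unit verification criteria: generic defaults for all units
# plus overrides for a category of units. All names and thresholds are
# examples, not values prescribed by Automotive SPICE.
GENERIC_CRITERIA = {
    "max_cyclomatic_complexity": 50,    # static measurement threshold
    "coding_standard": "MISRA C:2012",  # expected compliance
    "max_stack_bytes": 512,             # non-functional resource limit
}

CATEGORY_OVERRIDES = {
    # Safety-relevant units get a stricter complexity limit.
    "safety": {"max_cyclomatic_complexity": 20},
}

def criteria_for(category: str) -> dict:
    """Merge the generic criteria with the overrides of a unit category."""
    return {**GENERIC_CRITERIA, **CATEGORY_OVERRIDES.get(category, {})}

print(criteria_for("safety")["max_cyclomatic_complexity"])  # -> 20
```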

You can set unit verification criteria generically for all units, or specifically for categories of units or individual units. In order not to let the effort get out of hand, it is recommended to be conservative with general definitions. 

Pro-Tip: Coverage goals (e.g. code coverage) are not usually suitable as unit verification criteria. They are best used as end-of-test criteria and thus determine when a test can be considered done.  
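
To sketch the distinction (all numbers invented): a coverage goal decides when testing is *done*, while the verification criteria decide whether the unit *passed*.

```python
# Sketch: end-of-test criterion (coverage) vs. verification criterion
# (pass/fail). All numbers are invented.
statement_coverage = 0.93  # measured by the test environment
tests_failed = 0           # outcome of the executed test cases

testing_done = statement_coverage >= 0.90  # end-of-test criterion: are we done?
unit_passed = tests_failed == 0            # verification criterion: did it pass?

print(f"testing done: {testing_done}, unit passed: {unit_passed}")
```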

For each test specification, Base Practice 6 “Ensure Consistency” requires a content check between the test specification and the software detailed design. In most cases, this is done through quality assurance measures such as a review. The aim of this check is to prove that the test case correctly tests the content of the linked requirements. It is explicitly expected that each review is documented.

The BP2 assessment may be downrated if missing or insufficient non-functional requirements (SWE.1) or missing or insufficient software detailed design (SWE.3) are identified during the assessment.  

In other words, if the preceding processes are not complete, this Base Practice will not get a good rating either.

Base Practice 3: Perform Static Verification of Software Units 

Using the criteria defined in Base Practice 2, static verification of the software units should be performed in Base Practice 3.

The execution can take place using 

  • automatic static code analysis tools 
  • code reviews (e.g., with checks for compliance with coding standards and guidelines, or for the correct use of design patterns) 

The success criteria should be derived from the criteria defined in BP2. They specify whether a check passes or fails. The basis can be coverage criteria, or compliance with maximum values (e.g., a cyclomatic complexity of at most Y) or minimum values (e.g., at least X lines of comments per lines of code).
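
A minimal sketch of such a pass/fail evaluation (the metric names and limits are assumptions for illustration):

```python
# Sketch: evaluate measured static metrics against BP2 criteria.
# Metric names and limits are assumptions for illustration.
def evaluate_static(measured: dict, criteria: dict) -> dict:
    """Return a pass/fail verdict per metric."""
    verdicts = {}
    for metric, value in measured.items():
        max_limit = criteria.get(f"max_{metric}")
        min_limit = criteria.get(f"min_{metric}")
        if max_limit is not None:
            verdicts[metric] = value <= max_limit  # must not exceed maximum
        elif min_limit is not None:
            verdicts[metric] = value >= min_limit  # must reach minimum
    return verdicts

measured = {"cyclomatic_complexity": 12, "comment_ratio": 0.25}
criteria = {"max_cyclomatic_complexity": 50, "min_comment_ratio": 0.2}
print(evaluate_static(measured, criteria))
# -> {'cyclomatic_complexity': True, 'comment_ratio': True}
```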

Base Practice 4: Test Software Units 

Using the test specifications created in Base Practice 2, software unit tests are to be performed in Base Practice 4. It is expected that the tests will be performed as described in the software unit verification strategy.  
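
As a minimal illustration of such a unit test (the unit `saturate` and its value range are invented; in a real project the test cases would come from the test specifications of BP2):

```python
import unittest

def saturate(value: int, lo: int = 0, hi: int = 100) -> int:
    """Unit under test (invented example): clamp value to [lo, hi]."""
    return max(lo, min(hi, value))

class TestSaturate(unittest.TestCase):
    # Each test case would trace to the detailed-design requirement it verifies.
    def test_value_within_range_is_unchanged(self):
        self.assertEqual(saturate(42), 42)

    def test_value_above_upper_limit_is_clamped(self):
        self.assertEqual(saturate(250), 100)

    def test_value_below_lower_limit_is_clamped(self):
        self.assertEqual(saturate(-5), 0)

if __name__ == "__main__":
    unittest.main()  # the runner records a pass/fail result per test case
```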

For Base Practice 3 and Base Practice 4 it is explicitly expected that all tests including results are recorded and documented. In case of anomalies and findings, it is expected that these are documented, evaluated and reported.  

In addition, it is expected that all data are summarized in a meaningful way. Software unit verification generally produces a large amount of test data. The verification results of both manual and automated execution should be prepared at multiple levels of detail. One solution is a meaningful summary, e.g., an aggregation of all test results in the form of a pie chart.
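
A minimal sketch of such an aggregation (the raw results are invented; a charting library could turn the counts into the pie chart mentioned above):

```python
from collections import Counter

# Invented raw results; in practice these come from the test framework's logs.
results = ["passed", "passed", "failed", "passed", "skipped", "passed"]

summary = Counter(results)  # aggregate verdicts across all test cases
total = sum(summary.values())
for verdict, count in summary.items():
    print(f"{verdict}: {count} ({100 * count / total:.0f}%)")
# The counts in `summary` could be fed into a charting library for the pie chart.
```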

Notes on the assessment for Base Practice 3 and Base Practice 4 

Deviations in the execution of verification tests compared to the software unit verification strategy (BP1) lead to a downrating of BP3 or BP4.

For BP3 and BP4, a lack of meaningful summaries leads to a downrating. If a test is only rated as passed/failed without additional information about the test, an assessor will not rate the affected Base Practice better than “Partly”. For automated software unit tests, the stimulation and the calculated results of the unit presented in the report can be considered sufficient additional information.

An assessor will want to see an example for the assessment of BP3 and BP4, respectively. Specifically, they will want to use this to verify that a finding is handled consistently with the Software Unit Verification Strategy (see ASPICE article Part 1, item 9) and with SUP.9 Problem Resolution Management.  

Base Practice 5: Establish Bidirectional Traceability 

Bidirectional traceability is required in several places in Automotive SPICE. How you implement it is up to you. In this case, you are expected to link the requirements from the detailed design with the results of test cases and static checks, and the test cases in turn with the requirements from the detailed design.

In the simplest case, this can be done in tabular form (columns = test cases; rows = requirements). This implementation, however, is very maintenance-intensive and error-prone.
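
As a minimal illustration of what such bidirectional links and a simple completeness check could look like in code (all requirement and test-case IDs are invented):

```python
# Sketch: forward links (requirement -> test cases), the derived backward
# view, and a check for untraced requirements. All IDs are invented.
req_to_tests = {
    "DD-001": ["TC-01", "TC-02"],
    "DD-002": ["TC-03"],
    "DD-003": [],  # requirement without a test case -> finding
}

test_to_reqs: dict[str, list[str]] = {}
for req, tests in req_to_tests.items():
    for test in tests:
        test_to_reqs.setdefault(test, []).append(req)

untraced = [req for req, tests in req_to_tests.items() if not tests]
print("Requirements without test cases:", untraced)    # ['DD-003']
print("TC-01 traces back to:", test_to_reqs["TC-01"])  # ['DD-001']
```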

Pro-Tip: Use tools such as TPT, in which links can be created with little effort and, at best, a traceability report is generated automatically. You can use this traceability report as an overview for consistency reviews (SWE.4 BP6). In case of change requests, you can also analyze dependencies on test cases faster.

The assessor explicitly expects you to link test cases and requirements bidirectionally (BP5).  

Base Practice 7: Summarize and Communicate Results 

All unit verification results should be summarized and communicated to the relevant parties. It is explicitly expected that there is evidence that the results have been reported. All types of communication media, such as letters, e-mails, videos, forum posts, etc., are accepted as evidence (as long as they are documented and thus traceable).

If SWE.4 BP3 and/or BP4 are rated “None” or “Partly”, a downrating of BP7 by the assessor must also be expected.

Identifying the relevant parties and their information needs is required by BP7 of the ACQ.13 Project Requirements process.

The ACQ.13 Project Requirements process is not reviewed as part of an Automotive SPICE Assessment. It is, however, good practice that a project should not ignore processes just because they are not assessed.  

Summary 

Automotive SPICE demands many activities and outcomes for quality assurance. Many of the required results should also be checked in a verifiable way.  

Knowing and applying these assessment rules increases the likelihood of reaching a good assessment. Usually, a project reaches level 1 after 2 years and level 2 after another 2 years.  

Experience shows that success is achieved most quickly when the team is willing to learn and works continuously to meet the requirements. 

Don’t miss our future deep dives into software testing topics: Follow us on LinkedIn.
