Preparing for Automotive SPICE Assessment: A Guide to Software Unit Verification

Robert Fey

Nov 15, 2022 / 10 min read

Did your last Automotive SPICE Assessment fail and you don't know why? Or is your first assessment coming up?

This article series covers preparing for an assessment of the Automotive SPICE process Software Unit Verification (SWE.4). We look at the process, the expected deliverables, and the assessor's point of view, always keeping one question in mind: what do you have to do to get through an assessment successfully?

Spice Up Your Tests

An Automotive SPICE Assessment is successful if all participants from the project:

  • have a good knowledge of the Automotive SPICE process maturity model, 
  • can correctly answer questions from the assessor and  
  • can explain how their activities relate to the reference process.

For every process, Automotive SPICE version 3.1 requires two basic types of deliverables:

  • work products  
  • process outcomes

The concrete outcomes should be known to all participants for a successful assessment, because an assessor will pay particular attention to them when assessing the process.

Good to know: The "Software Unit Verification (SWE.4)" process is often equated only with the dynamic testing of software units. Although this is an essential component, much more is expected here.

General Goal of an Assessment

An Automotive SPICE Assessment is intended to determine the maturity level of an organization. The maturity level is seen as an indicator of high quality. The assessment itself is performed for each process using generic base practice descriptions derived from a reference process model. For a rating of Level 1, "performed process", the required achievements must be fulfilled at least "largely", i.e. to more than 50%.

Pro Tip: The Software Unit Verification process has 7 base practices (see the overview below). You should consider all base practices; do not ignore any of them. The qualitative requirements for a Level 1 assessment are very high, and the results of the base practices have cross-dependencies to upstream and downstream processes. Poor performance here can result in downgrades in other areas.

[Figure: Brief Overview of the 7 Base Practices]

How to Define a Software Unit Verification Strategy?

The software unit verification strategy is the basis for all activities in the Software Unit Verification process and therefore also the basis of an assessment. It is required by Base Practice 1: Develop Software Unit Verification Strategy Including Regression Strategy.

For an assessor, a unit verification strategy must include at least the following 10 aspects:

1. Definition of all units. The definition can be generic or specific. Make sure that units are uniquely identifiable. In the simplest case, there is a list of functions or files that are classified as units.  

  • You should be able to answer the following question: how do you ensure that all units are included in the list of functions? This can be done, for example, by periodically checking the list or by regenerating it automatically, as sketched below.
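
A minimal sketch of such automated regeneration, assuming a C code base where every .c file under src/ counts as one unit; the paths, the file convention, and the one-unit-per-file rule are illustrative assumptions, not a prescribed approach:

```python
# Sketch: regenerate the unit list from the source tree so it cannot go stale.
# The src/ layout and the one-unit-per-.c-file convention are assumptions.
from pathlib import Path

def collect_units(src_root: str = "src") -> list[str]:
    """Treat every .c file under src/ as one software unit."""
    return sorted(str(p) for p in Path(src_root).rglob("*.c"))

def write_unit_list(units: list[str], out_file: str = "unit_list.txt") -> None:
    """Persist the unit list as the project's single source of truth."""
    Path(out_file).write_text("\n".join(units) + "\n")

if __name__ == "__main__":
    units = collect_units()
    write_unit_list(units)
    print(f"{len(units)} units recorded in unit_list.txt")
```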

2. Definition of how specific requirements related to verification and testing are covered. This means functional, non-functional and process requirements.  

  • You should have an overview of all requirements that exist for the entire project. Supplement this with the information that has an impact on unit verification. These are generally also requirements from Automotive SPICE, ISO 26262 or other safety standards, cross-cutting requirement specifications, laws, stakeholders, MISRA, etc. It can be helpful to explicitly include individual requirements in the verification strategy and briefly document your solution for implementing them.

3. Definition of methods for the development of test cases and test data derived from the detailed design and non-functional requirements. 

  • The strategy should explain which methods you use for this, e.g. forming equivalence classes for all interfaces, positive and negative tests, etc. (see the example below).
  • If you have generic unit definitions, you will probably use generic definitions for this as well. If you have constraints/variants, for example for QM and functional safety units, the expectation is that you can also show a corresponding overview of QM and functional safety units. This expectation applies analogously to all other variants. A generic unit definition can thus increase the test effort.
  • To deal with this aspect, we recommend a prior analysis of all requirements and a derivation of the most suitable methods based on this analysis. 
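
To illustrate equivalence classes and boundary values as test case derivation methods, here is a minimal pytest sketch; the unit under test (a hypothetical saturate function) and its limits are invented for the example:

```python
# Sketch: deriving unit test cases from equivalence classes and boundaries.
# The function under test and its range are hypothetical.
import pytest

def saturate(value: int, lo: int = 0, hi: int = 100) -> int:
    """Unit under test: clamp value into [lo, hi]."""
    return max(lo, min(hi, value))

# One representative per equivalence class plus the boundary values.
@pytest.mark.parametrize("value, expected", [
    (-50, 0),    # class: below range (negative test)
    (-1, 0),     # boundary: just below lo
    (0, 0),      # boundary: lo
    (50, 50),    # class: within range (positive test)
    (100, 100),  # boundary: hi
    (101, 100),  # boundary: just above hi
    (150, 100),  # class: above range (negative test)
])
def test_saturate_equivalence_classes(value, expected):
    assert saturate(value) == expected
```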

4. Definition of the methods and tools for static verification and reviews.

5. Definition of each test environment and of each test methodology used.

  • Off-the-shelf tools implement methodologies. Refer to existing tool vendor documentation to save time.  
  • Use tools that cover as many methods and technologies as possible; this saves project costs for training and licenses. With a few widely usable tools, employees can be re-prioritized more quickly, and familiarization with new tooling is no longer necessary.
  • Use established methods, such as equivalence class partitioning or boundary value tests, for deriving test data.
  • Use tools that relieve you of the maximum amount of work for recurring activities, e.g. by automatically generating reports and traceability. 
  • Automate as much as possible.  

6. Definition of the test coverage depending on the project and release phase.  

  • Nobody expects you to reach 100% coverage on day 1. Use the duration of the project and show an achievable ramp-up over time (see the sketch below for phase-dependent targets).
  • Derive what you need for this in terms of personnel or other resources.  
  • Review your strategy and adjust it if there are deviations. Make changes according to the process (SUP.10 Change Request Management). 
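
One simple way to make phase-dependent coverage targets explicit and checkable is a small table in code; the milestone names and percentages below are illustrative assumptions:

```python
# Sketch: phase-dependent coverage targets with a simple check.
# Milestone names and target percentages are assumptions for illustration.
COVERAGE_TARGETS = {  # statement coverage target per release milestone
    "A-sample": 40.0,
    "B-sample": 70.0,
    "C-sample": 90.0,
    "SOP":      100.0,
}

def coverage_ok(milestone: str, measured: float) -> bool:
    """True if the measured coverage meets the target for this milestone."""
    return measured >= COVERAGE_TARGETS[milestone]

print(coverage_ok("B-sample", 73.5))  # True: 73.5% meets the 70% target
```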

7. Definition of the test start conditions and test end criteria for dynamic unit tests.  

  • Which conditions lead to the start of which activities? 
  • Are there dependent sequences? 
  • When do tests terminate, and when are they restarted? How do you determine this? 
  • When do you stop testing? It is best not to use temporal criteria, but technical or measurable ones (coverage metrics, whether all requirements are tested; see the sketch below). Argue why these metrics are sufficient.
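
A minimal sketch of a measurable end-of-test criterion that combines a coverage threshold with the requirements status; the data structures and the 80% threshold are assumptions for illustration:

```python
# Sketch: a measurable end-of-test criterion instead of a time-based stop.
# The input data structures and the threshold are illustrative assumptions.
def testing_done(statement_coverage: float,
                 requirements: dict[str, str],
                 min_coverage: float = 80.0) -> bool:
    """Stop criterion: coverage target reached AND every requirement
    linked to the unit has at least one passing test."""
    all_reqs_passed = all(status == "passed" for status in requirements.values())
    return statement_coverage >= min_coverage and all_reqs_passed

reqs = {"REQ-001": "passed", "REQ-002": "passed", "REQ-003": "failed"}
print(testing_done(85.0, reqs))  # False: REQ-003 has no passing test yet
```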

8. Documentation of sufficient test coverage of each test level, if the test levels are combined. 

  • If you combine test levels, you must justify how you determine the level of coverage. Coverage can mean code coverage, interface coverage, and requirements coverage. A coherent rationale would be, for example, that you move test content to higher levels because you can assign test cases and requirements more meaningfully at this level. 
  • You often get coverage targets from standards and other guidelines. ISO 26262 sets targets for code coverage of safety-related code portions. ISO 26262 implicitly requires high coverage with the following note: "No target value or a low target value for structural coverage without justification is considered insufficient."
  • In general, it is best to substantiate all coverage target values below 100%. This can most easily be done using release schedules and predetermined prioritizations of requirements or features. 
  • Pro-tip: Reference or link relevant requirements from the source to the appropriate section in the software unit verification strategy. 

9. Procedure for dealing with failed test cases, failed static checks, and check results. 

  • This procedure should relate to the ASPICE Problem Resolution Management Strategy (SUP.9) process and be consistent. 
  • You should describe who is informed, as well as how and when to do what.  
  • You should also describe what information/data you will share in the process (a minimal sketch follows below).
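
As a minimal sketch of turning a failed check into a documented, SUP.9-consistent problem report: the record fields, file layout, and recipients are assumptions; a real project would feed its issue tracker instead:

```python
# Sketch: documenting a failed test as a traceable problem report.
# Field names and the JSON-file storage are illustrative assumptions.
import json
from datetime import datetime, timezone

def report_failure(test_id: str, unit: str, details: str,
                   notify: list[str]) -> dict:
    record = {
        "test_id": test_id,
        "unit": unit,
        "details": details,
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "notified": notify,   # who is informed
        "status": "open",     # tracked until resolution per SUP.9
    }
    with open(f"problem_{test_id}.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

report_failure("TC-042", "src/brake_ctrl.c",
               "boundary test failed at hi+1", ["test lead", "SW architect"])
```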

10. Definition for performing regression testing. 

  • Regression testing refers to the re-execution of static and dynamic tests after changes have been made to a unit. The goal is to determine whether unchanged portions of a unit continue to work.  
  • In automated testing, a regression test is done at the push of a button.  
  • In Continuous Integration / Continuous Testing environments, it is sufficient to state that regression testing is ensured by "nightly builds" or other automatisms (for example, a change-driven selection as sketched below).
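
A minimal sketch of a change-driven regression run, e.g. as a nightly CI step. It assumes units map 1:1 to .c files and tests live in tests/test_<unit>.py; both conventions are illustrative:

```python
# Sketch: re-run only the tests for units changed since the last revision.
# The file and test naming conventions are assumptions for illustration.
import subprocess

def changed_units(base: str = "HEAD~1") -> list[str]:
    """List unit files touched since the given git revision."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.endswith(".c")]

def regression_targets(units: list[str]) -> list[str]:
    """Map each changed unit to its test module."""
    return [f"tests/test_{u.rsplit('/', 1)[-1][:-2]}.py" for u in units]

if __name__ == "__main__":
    targets = regression_targets(changed_units())
    if targets:
        subprocess.run(["pytest", *targets], check=False)
```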

Notes on the Assessment

If you do not cover all 10 aspects mentioned above in your Software Unit Verification Strategy, you must expect not to receive the rating "Fully" for BP1 "Develop Software Unit Verification Strategy Including Regression Strategy". Not fulfilling points 2 to 4 will result in BP1 being rated "Partly" or worse.

Implicitly, the assessor also expects all personnel involved in the process to know the contents of the Software Unit Verification Strategy. If you cannot provide evidence of this, e.g. in the form of mails, logs or similar, a tester may be called into the assessment and their knowledge determined in an interview.

In Automotive SPICE, the higher-level work product Verification Strategy (WP ID 19-10) is characterized in more detail. It requires scheduling of activities, handling of risks and constraints, the degree of independence in verification, and other aspects of a verification strategy.

Define Your Verification Criteria Correctly to Pass Your Next Automotive SPICE Process Software Unit Verification

How do you define the criteria for verification in Base Practice 2? With the strategic guidelines defined in Base Practice 1, you're ready to proceed to the next step. This BP applies to both static and dynamic tests. The expected result is specific test cases for the units and the definition of static checks at unit level.

Base Practice 2: Develop Criteria for Unit Verification

The ASPICE process expects that criteria are defined to ensure that the unit does what is described in both the software detailed design and the non-functional requirements.  

All work products are expected to be produced as described in the Software Unit Verification Strategy.

For example, the following criteria shall be defined for the static tests:  

  • Type of static measurements (e.g. measurement of cyclomatic complexity) and evaluation criteria for success (e.g. measured cyclomatic complexity is less than 50; see the sketch after this list). 
  • Compliance with coding standards (e.g. MISRA) 
  • Compliance with design patterns agreed in the project 
  • Non-functional, technical criteria, such as resource consumption (RAM/ROM) 
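
A minimal sketch of checking one such static criterion (cyclomatic complexity below 50) with the open-source analyzer lizard; the threshold and the file path are illustrative, and MISRA compliance would come from a dedicated checker instead:

```python
# Sketch: evaluating a cyclomatic complexity criterion with "lizard".
# The threshold and file path are assumptions for illustration.
import lizard

MAX_COMPLEXITY = 50

def complexity_violations(path: str) -> list[str]:
    """Return all functions in the file that violate the criterion."""
    analysis = lizard.analyze_file(path)
    return [f"{fn.name}: CC={fn.cyclomatic_complexity}"
            for fn in analysis.function_list
            if fn.cyclomatic_complexity >= MAX_COMPLEXITY]

for violation in complexity_violations("src/brake_ctrl.c"):
    print("FAIL", violation)
```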

You can set unit verification criteria generically for all units, or specifically for categories of units or individual units. In order not to let the effort get out of hand, it is recommended to be conservative with general definitions. 

Pro-Tip: Coverage goals (e.g. code coverage) are not usually suitable as unit verification criteria. They are best used as end-of-test criteria and thus determine when a test can be considered done.  

For each test specification, Base Practice 6 "Ensure Consistency" requires a content check between the test specification and the software detailed design. In most cases, this is done through quality assurance measures such as a review. The aim of this check is to prove that the test case correctly tests the content of the linked requirements. It is explicitly expected that each review is documented.

The BP2 assessment may be downrated if missing or insufficient non-functional requirements (SWE.1) or missing or insufficient software detailed design (SWE.3) are identified during the assessment.  

In other words, if the preceding processes are not complete, you cannot expect a good rating here either.

Base Practice 3: Perform Static Verification of Software Units

Using the criteria defined in Base Practice 2, static verification of the software units is to be performed in Base Practice 3.

The execution can take place by means of:

  • automatic static code analysis tools 
  • code reviews (e.g. with checks for compliance with coding standards and guidelines, or the correct use of design patterns) 

The success criteria should be determined using the criteria from BP2. They specify whether a check has passed or failed. The basis can be coverage criteria or compliance with maximum values (e.g. a max. cyclomatic complexity of Y) or minimum values (e.g. min. X lines of comments per lines of code); a minimal check of such a minimum value is sketched below.
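
A minimal sketch of a minimum-value criterion, here comment density for a C source file; the 20% threshold and the simple line classification are assumptions for illustration:

```python
# Sketch: checking a minimum comment density for a C source file.
# The threshold and the naive line classification are assumptions.
def comment_density(path: str) -> float:
    """Ratio of comment lines to code lines, blank lines ignored."""
    comment = code = 0
    with open(path, encoding="utf-8", errors="replace") as src:
        for line in src:
            stripped = line.strip()
            if not stripped:
                continue  # blank lines count as neither
            if stripped.startswith(("//", "/*", "*")):
                comment += 1
            else:
                code += 1
    return comment / max(code, 1)

density = comment_density("src/brake_ctrl.c")
print("PASS" if density >= 0.20 else "FAIL", f"comment density {density:.0%}")
```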

Base Practice 4: Test Software Units

Using the test specifications created in Base Practice 2, software unit tests are to be performed in Base Practice 4. It is expected that the tests will be performed as described in the software unit verification strategy.  

For Base Practice 3 and Base Practice 4 it is explicitly expected that all tests including results are recorded and documented. In case of anomalies and findings, it is expected that these are documented, evaluated and reported.  

In addition, it is expected that all data are summarized in a meaningful way. Software unit verification generally produces a lot of test data. The verification results from both manual and automated execution should be prepared at multiple levels of detail. One solution is a meaningful summary, e.g. an aggregation of all test results in the form of a pie chart (see the sketch below).
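
A minimal sketch of such an aggregation using matplotlib; the verdict counts are invented for the example:

```python
# Sketch: aggregating raw unit test verdicts into a one-glance pie chart.
# The verdict list is an illustrative assumption.
from collections import Counter
import matplotlib.pyplot as plt

verdicts = ["passed"] * 412 + ["failed"] * 9 + ["skipped"] * 23
counts = Counter(verdicts)

plt.pie(counts.values(), labels=counts.keys(), autopct="%1.1f%%")
plt.title("Unit test results summary")
plt.savefig("unit_test_summary.png")
```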

Notes on the Assessment for Base Practice 3 and Base Practice 4

Deviations in the execution of verification tests compared to the software unit verification strategy (BP1) lead to a downrating of BP3 or BP4.

For BP3 and BP4, a lack of meaningful summaries leads to downgrading. If a test is only rated as passed/failed without additional information about the test, an assessor will not rate the affected base practice better than "Partly". For automated software unit tests, the stimuli and the computed outputs of the unit presented in the report can be considered sufficient additional information.

An assessor will want to see an example for the assessment of BP3 and BP4, respectively. Specifically, they will want to use this to verify that a finding is handled consistently with the Software Unit Verification Strategy and with SUP.9 Problem Resolution Management.  

Base Practice 5: Establish Bidirectional Traceability

Bidirectional traceability is required in several places in Automotive SPICE. How you implement it is up to you. In this case, you are expected to link requirements from the detailed design with the results of test cases and static checks; the test cases in turn are linked to requirements from the detailed design.

In the simplest case, this can be done in tabular form (columns = test cases; rows = requirements). This implementation is very maintenance-intensive and error-prone (a minimal sketch follows below).
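
A minimal sketch of such a bidirectional matrix, kept as two maps derived from one list of links; the requirement and test case IDs are invented for the example:

```python
# Sketch: a bidirectional traceability matrix as two lookup maps.
# The requirement and test case IDs are illustrative assumptions.
from collections import defaultdict

links = [("REQ-001", "TC-001"), ("REQ-001", "TC-002"), ("REQ-002", "TC-003")]

req_to_tests, test_to_reqs = defaultdict(list), defaultdict(list)
for req, test in links:
    req_to_tests[req].append(test)   # forward: requirement -> test cases
    test_to_reqs[test].append(req)   # backward: test case -> requirements

print(req_to_tests["REQ-001"])  # ['TC-001', 'TC-002']
print(test_to_reqs["TC-003"])   # ['REQ-002']
```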

Pro-Tip: Use tools such as TPT for this purpose, in which links are created as easily as possible and, ideally, a report is generated automatically. You can use this traceability report as an overview for consistency reviews (SWE.4 BP6). In case of change requests, you can analyze dependencies on test cases faster.

The assessor explicitly expects you to link test cases and requirements bidirectionally (BP5).  

Base Practice 7: Summarize and Communicate Results

All unit verification results should be summarized and communicated to the relevant parties. It is explicitly expected that there is evidence that the results have been reported. All types of communication media, such as letters, mails, videos, forum posts, etc., are accepted as evidence (as long as they are documented and thus traceable).

If SWE.4 BP3 and/or BP4 is rated "None" or "Partly", you must also expect the assessor to downgrade BP7.

Identifying the relevant parties and their need for information is required by BP7 of the ACQ.13 Project Requirements process.

The ACQ.13 Project Requirements process is not reviewed as part of an Automotive SPICE Assessment. It is, however, good practice that a project should not ignore processes just because they are not assessed.  

Summary

Automotive SPICE demands many activities and outcomes for quality assurance. Many of the required results should also be checked in a verifiable way.  

Knowing and applying these assessment rules increases the likelihood of achieving a good assessment result. Typically, a project reaches Level 1 after two years and Level 2 after another two years.

Experience shows that success is achieved most quickly when the team is willing to learn and works continuously to meet the requirements. 
