
Glossary of Test Terms.

Acceptance test

The acceptance test checks whether the developed system complies with the customer requirements. The customer decides whether or not to accept the system. The acceptance test is performed under normal operating conditions.

Advanced Driver Assistance System (ADAS)

Advanced driver assistance systems are electronic systems that interact with the driver to support the driver while driving. They enhance safety and improve driving comfort. Advanced driver assistance systems observe the current driving status of the vehicle and the vehicle’s environment, that is, the road, traffic signs, other road users, and obstacles. A wide variety of sensors is used for environment detection, such as ultrasonic sensors (parking sensors), camera sensors, radar sensors, and lidar sensors.

Driver assistance systems warn the driver and potentially intervene in the driving process; they are therefore safety-relevant according to ISO 26262 and are often classified with the highest safety integrity level, ASIL-D.

Examples of current driver assistance systems are: adaptive cruise control, collision avoidance system, blind spot monitor, traffic jam assistant, traffic-sign recognition, automotive night vision, lane keeping assistant, lane change support, and parking assistant. The future of advanced driver assistance systems is autonomous driving.

Automation of Driving Tests

An automated driving test refers to the execution and evaluation of pre-defined test cases directly in the vehicle using a testing tool like TPT. Depending on the requirements of the driving test, either the TPT Autotester or the TPT Dashboard, both developed by PikeTec, can be used. 


ASAM XiL API

The ASAM XiL API describes the communication between test automation tools and test beds in MiL, SiL, and HiL environments. The X in XiL means that it is suitable for all “in-the-loop” environments.

TPT supports the ASAM XiL API.


Automotive Safety Integrity Level (ASIL)

ISO 26262 distinguishes – on a risk-based basis – four “Automotive Safety Integrity Levels (ASIL)” on a scale from A to D, with D denoting the highest safety level. The requirements for the development, operation … of a vehicle increase with each ASIL level. The ASIL classification of a specific system is based on a hazard and risk analysis.
In addition to requirements for software development and testing, the required measures in development also include proof of “Confidence in the use of software tools”.

Back-to-Back test

A Back-to-Back test compares test results with the results of an earlier test run. The Back-to-Back test is used to ensure that the test results differ only slightly after a change of test phase, for example from MiL to SiL, from SiL to PiL, or from PiL to HiL. ISO 26262 explicitly demands a comparison of test results from different test phases.

Back-to-Back tests are also used to ensure that several software components that share the same requirements produce the same test results.

With TPT, fully automated Back-to-Back tests are easy to configure. Since the test results usually differ marginally, a tolerance range around the reference signals significantly increases the robustness of the comparison. Time tolerances and value tolerances are easy to set in TPT.
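Such a tolerance comparison can be sketched as follows. The function below is a simplified, hypothetical comparison over equally sampled signals, not TPT's actual algorithm; the signal values and tolerances are made up.

```python
def back_to_back_pass(reference, actual, value_tol=0.01, time_tol=1):
    """Compare two equally sampled signals sample by sample.

    A sample of `actual` passes if it lies within `value_tol` of any
    reference sample at most `time_tol` sample indices away."""
    for i, a in enumerate(actual):
        lo = max(0, i - time_tol)
        hi = min(len(reference), i + time_tol + 1)
        if not any(abs(a - r) <= value_tol for r in reference[lo:hi]):
            return False
    return True

# MiL result vs. SiL result: identical except for a small numeric
# deviation and a one-sample shift of the step edge.
mil = [0.0, 0.0, 1.0, 1.0, 1.0]
sil = [0.0, 0.0, 0.0, 1.001, 1.0]
print(back_to_back_pass(mil, sil))          # True: passes with tolerances
print(back_to_back_pass(mil, sil, 0.0, 0))  # False: fails without tolerances
```

With zero tolerances the one-sample shift of the edge already makes the comparison fail, which illustrates why tolerance ranges matter for Back-to-Back tests.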


Black-box testing

Black-box testing is a dynamic test procedure that checks the functionality of software against its specification. In black-box testing, the internal structure of the test object is unknown or deliberately not considered; only the specification is taken into account.

The class of black-box tests includes functional tests, random tests, statistical tests, etc. In a narrower sense, the term black-box test is often used as a synonym for functional test.

Boundary value tests

Boundary value tests are a subgroup of the equivalence class test. Boundary values are the smallest or largest values of an equivalence class or a data type. They are just within the range, on the limit, or just outside the range.

This approach is based on the experience that errors occur frequently at the boundary of value ranges.
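A minimal sketch of this idea, assuming an integer equivalence class given by its lower and upper limit: for each limit, the values just outside, on, and just inside the boundary are selected.

```python
def boundary_values(lo, hi):
    """Boundary values of an integer equivalence class [lo, hi]:
    just outside, on the limit, and just inside each boundary."""
    values = {lo - 1, lo, lo + 1, hi - 1, hi, hi + 1}
    return sorted(values)

# Example: an integer class from 0 to 50
print(boundary_values(0, 50))  # [-1, 0, 1, 49, 50, 51]
```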

Branch coverage

A test criterion that relates the number of program branches executed during a test to the total number of branches contained in the test object. The goal is to determine whether every branch in the program flow has been executed at least once.

Branch coverage test

Test case determination on the basis of the program structure (structural testing) with the aim of executing as many branches of the test object as possible at least once, for example in order to achieve a high branch coverage. In general, a certain branch coverage is required as a test exit criterion (for example, 80% branch coverage).

Branch coverage subsumes statement coverage and leads to a higher coverage of the test object than statement coverage whenever branching occurs.

Class, property

Specific characteristic of a subset of the input data space for a test object according to a certain classification. For example, “red” is a class for the classification “color”.


Classification

Division of the input data space (or parts thereof) of a test object into classes according to test-relevant aspects.

Classification-tree method

Test method for systematic test case determination. The classification-tree method considers the input data space of a test object from various points of view that the tester recognizes as relevant. The resulting classifications consist of disjoint and complete classes. These classes can be refined by further classifications. The decomposition is noted as a tree and used as the header of a combination table in which test cases are marked. The classification-tree method belongs to the functional tests.

Code coverage

Code coverage is used in white-box testing and is a measure of the extent to which programmed code has been executed. If several test cases are executed, the code coverage is given as a percentage and indicates how much of the code has already been executed and how much has not. The code coverage provides information about how far a test has reached into the depth of the code and whether the code was executable. If, for example, code has not been executed, it is not possible to say whether it would lead to the correct result. In this respect, code coverage is used as a criterion to determine whether sufficient testing has taken place; it is a so-called test exit criterion.

To measure code coverage, the code must be instrumented. This means that counters are built into the code, which record the number of executions at its branches. There are different coverage dimensions, such as statement coverage, branch coverage, condition coverage, and MC/DC.

Code coverage analyses are used to find those parts of the code that are not executed during the test. Code coverage can thus reveal vulnerabilities in the code and in the tests, namely
  • untested code, that should be tested
  • dead, unreachable or superfluous code that should be removed
  • code that can only be reached and tested with great effort
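The instrumentation idea can be sketched as follows. The counters and the clamp function are hypothetical stand-ins for what a coverage tool inserts automatically.

```python
# Hypothetical hand-instrumented function: one counter per branch records
# how often that branch was executed, as a coverage tool would insert
# automatically.
counters = {"then": 0, "else": 0}

def clamp(x, limit):
    if x > limit:
        counters["then"] += 1   # instrumentation
        return limit
    else:
        counters["else"] += 1   # instrumentation
        return x

clamp(7, 5)   # takes the "then" branch
clamp(3, 5)   # takes the "else" branch

executed = sum(1 for c in counters.values() if c > 0)
coverage = 100 * executed / len(counters)
print(f"branch coverage: {coverage:.0f}%")  # branch coverage: 100%
```

With only the first call, the "else" counter would stay at zero and the coverage analysis would point to the unexecuted branch.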

Condition coverage

Condition coverage is a test criterion that relates the number of elementary conditions of the test object evaluated to true or to false to twice the total number of elementary conditions (in percent).

In a Boolean evaluation for branching, the condition coverage takes each individual component that led to the Boolean result into account.

Note: A complete condition coverage (100%) may occur even without having passed all branches.

Example from the programming language C: in if (A && B) {case1} else {case2}, A and B are elementary conditions. The condition coverage is 100% if the test cases cover A true with B false and B true with A false, although the branch case1 is never passed.
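The claim can be checked with a small sketch that replays the two test vectors; the variable names mirror the C example.

```python
# Re-creation of the C example: two test vectors reach 100% condition
# coverage although the branch case1 is never taken.
tests = [(True, False), (False, True)]

seen = {"A": set(), "B": set(), "case1_taken": False}
for A, B in tests:
    seen["A"].add(A)
    seen["B"].add(B)
    if A and B:
        seen["case1_taken"] = True

# Each elementary condition was evaluated to both true and false:
full_condition_coverage = seen["A"] == {True, False} and seen["B"] == {True, False}
print(full_condition_coverage)   # True
print(seen["case1_taken"])       # False: branch case1 was never passed
```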

Confidence in the use of software tools

If a software tool is used in the automotive sector for the development of a safety-relevant system, ISO 26262 Part 8 from ASIL A onwards requires proof that this tool is “trustworthy” (qualification).

The qualification is carried out in two steps. First, the tool must be analyzed and classified. Here the tool impact (TI) and the tool error detection (TD) are determined, and the tool confidence level (TCL) is derived from them.
Then – if necessary – the required measures are derived and implemented (tool qualification).

Coverage analysis

Coverage analyses are used to assess whether sufficient testing has been performed or whether further test cases are necessary. The coverage analysis shows those locations for which no or too few tests are available. Various coverage methods are used, for example

  • Requirement coverage
  • Structural coverage of the code
  • Coverage of each safety mechanism
  • Coverage of each parameter configuration

Decision coverage

Decision coverage only takes into account whether a branch has been taken in the software, but disregards how the decision came about.

This means that if a branched condition has several components, only the result of the evaluation is taken into account, but not how this result came about.

Decision coverage = number of branches visited/total number of branches


Debugging

An activity consisting of error localization, error analysis, and error correction. Debugging must be distinguished from testing, which is mainly used to detect errors.

Dynamic testing

In dynamic testing, the software to be tested is executed and fed with specific input data to obtain specific output data. The test results are compared with the expected results to find out whether the program behaves as specified.

Dynamic test procedures are divided into structural and functional tests. So-called black-box and white-box tests play a role here.

In contrast to the dynamic test procedure, the software code is not executed in so-called static test procedures.

Electronic Control Unit (ECU)

ECU stands for Electronic Control Unit and refers to all electronic components in the vehicle. ECUs are embedded systems that read sensor signals and input data and calculate the output values from these data. The output values are either direct controls of actuators or signals to other ECUs, which are communicated via a CAN bus, for example. Examples of ECUs are engine control, transmission control, light control unit, ESP control unit, ABS control unit, or window lifter control.

Error, failure

An error is the difference between the observed, measured value and the expected value.

Error injection test

Error injection tests provoke external errors in the system under test in order to investigate how robustly the system behaves in the presence of errors and whether its error tolerance algorithms work. These tests cover system states that are not reached during normal operation of the software.

Equivalence class test

Equivalence class testing is based on the idea that many similar inputs to the test object produce the same behavior, while many other input values do not affect this behavior. Such a group of similar input values is figuratively put into one drawer, which is then called an equivalence class. It is assumed that only one or a few representative values (representatives) of an equivalence class need to be selected for the test while still obtaining good functional coverage. These representatives of an equivalence class can be random values as well as values at the limits of the equivalence class, since particularly critical behavioral changes of the system under test can be expected there. In the latter case one speaks of boundary value tests.

In an automatic lighting system, for example, “bright”, “dark”, and “too bright” would be equivalence classes for the illuminance, which can take on integer values between zero and 100. In the example, “dark” would range from zero to 50 and “bright” from 50 (excluded) to 100, while “too bright” is defined for unacceptable values greater than 100. The system always behaves in the same way: it switches the light off when the illuminance is “bright” and always switches it on when it is “dark”. Values greater than 100 cannot occur and are therefore forbidden. An equivalence class test would therefore test, for example, the values 17, 89, and 101; these representatives are chosen randomly. For a boundary value test, for example, the values 0, 50, 51, 100, and 101 would be added.

In the equivalence class test, a test case is created for each equivalence class, thus limiting the number of test cases.
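The lighting example can be sketched in code. The classify and light_on functions below are hypothetical stand-ins for the system under test.

```python
# Equivalence classes of the illuminance example (integer values):
# "dark" 0..50, "bright" 51..100, "too bright" > 100 (forbidden).
def classify(lux):
    if 0 <= lux <= 50:
        return "dark"
    if 51 <= lux <= 100:
        return "bright"
    return "too bright"

def light_on(lux):
    """Hypothetical system under test: light is on exactly when it is dark."""
    return classify(lux) == "dark"

# One representative per class suffices for the equivalence class test.
representatives = {"dark": 17, "bright": 89, "too bright": 101}
for cls, lux in representatives.items():
    assert classify(lux) == cls

print(light_on(17), light_on(89))  # True False
```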

The ISO 26262 safety standard strongly recommends the use of the equivalence class method for ASIL-B, ASIL-C, and ASIL-D, as this method can easily and systematically find a limited number of good input values for the test object.

Equivalence classes can be created in TPT and then used symbolically in the test modeling for equivalence class tests and boundary value tests. Coverage tests for equivalence classes and automatic test case generation based on equivalence classes are possible.

Fixed-point arithmetic

Fixed-point arithmetic is used because some processors do not support floating-point arithmetic, or because fixed-point arithmetic is faster. Fixed-point arithmetic means that a number is represented with a fixed number of digits before and after the radix point, that is, with a fixed scaling factor.

When calculating with physical quantities, these usually have to be scaled to achieve the highest possible accuracy and to be able to perform the calculations without data overflows. The data type of the result must match the representation of the number.
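A minimal sketch of such scaling, assuming a signed 16-bit representation with a resolution of 2^-7 per bit (one possible choice of scaling factor):

```python
# A physical value scaled to a signed 16-bit fixed-point representation
# with resolution 2**-7: raw = round(value / 2**-7).
LSB = 2 ** -7          # resolution of one least significant bit

def to_fixed(value):
    raw = round(value / LSB)
    if not -2**15 <= raw < 2**15:      # would overflow a signed 16-bit word
        raise OverflowError(value)
    return raw

def to_physical(raw):
    return raw * LSB

raw = to_fixed(3.1416)
print(raw, to_physical(raw))   # 402 3.140625 -- quantization error < 1 LSB
```

Choosing the scaling is the trade-off mentioned above: a finer resolution reduces the quantization error but narrows the representable range before an overflow occurs.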

Functional testing

Functional testing checks whether the function of the system under test behaves according to the specification. This black-box test procedure checks whether the system does what it is supposed to do when it is executed.

Functional tests are a class of test methods for test case determination, in which test cases are derived from the test object specification, that is, internals of the test object are ignored during test case determination.

Function test, unit test

Test phase in which a function is tested in isolation. A function is the smallest testable software component. In the C programming language, for example, these are the exported functions of a C source file.

Hardware-in-the-Loop (HiL)

For testing embedded systems, Hardware-in-the-Loop (HiL) testing means that the finished (development) ECU is electrically connected to a simulation environment. The software therefore runs on the ECU and is essentially ready to be installed in the vehicle. Instead of being connected to the vehicle, the vehicle and its environment are simulated in real time, thus making the ECU believe that it is in its actual environment.

This simulation environment, also called a HiL, has several advantages. Firstly, the vehicle does not have to be available for testing the control unit, and tests that could potentially endanger the driver and vehicle also become possible in the laboratory. These could be, for example, fault simulations (including electrical faults), bus faults, or dangerous driving situations that would mechanically destroy the vehicle.

The environment of the control unit, that is, the controlled system, is usually simulated at the HiL. In some cases the simulation is complex and can be better represented by real components. For example, real throttle valves for testing engine control units are often mounted in a drawer of the HiL, the so-called load drawer, because simulating the throttle valve is inaccurate and costly.

In all cases HiLs run in real time.

HiL tests can be automated, which is only possible with great effort in vehicle tests. However, simulation always represents a simplification of the real world and cannot replace real vehicle tests.

In the automotive industry, a distinction is made between different levels of integration in HiL testing. So-called component HiLs are used for individual ECUs or networks of a few ECUs, but integration or vehicle HiLs are also common, in which vehicle frames with their cable harnesses and ECUs are set up to test the interaction of all ECUs.

For TPT test automation, there are various connections to HiL systems from different manufacturers such as dSPACE, ETAS, or Vector. ASAM also has a standard, the so-called XiL API, for remote control of HiLs from various manufacturers. TPT also supports the ASAM XiL API.

Integration testing, subsystem testing

Test phase in which the interaction of modules, subsystems and, if applicable, hardware/software (for embedded systems) is checked.

Interface test

Software usually consists of individual modules that communicate with each other, or has interfaces to users or the environment. These interfaces, via which software receives its input data or provides output data, are checked for completeness and correctness during the interface test. Data formats as well as their value ranges and boundary values play an important role and should be checked.

ISO 26262

ISO 26262 is the standard for the functional safety of electrical and electronic systems in road vehicles. ISO 26262 distinguishes between several “Automotive Safety Integrity Levels (ASIL)”, which are a measure of the safety relevance of a malfunction. There are a total of four levels on a scale from A to D, with D denoting the highest safety level. The ASIL classification is based on a hazard and risk analysis.

Linking of requirements and test cases, traceability

In order to evaluate the coverage of requirements and to ensure traceability of artifacts in the development process, requirements can be linked to test cases. Each requirement can be tested by several test cases, and each test case can cover several requirements. The traceability of requirements is required, for example, by ISO 26262.
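A minimal sketch of such linking and the resulting requirement coverage; the requirement and test case identifiers are made up.

```python
# Hypothetical traceability links: each test case may cover several
# requirements and each requirement may be covered by several test cases.
links = {
    "TC-1": {"REQ-1", "REQ-2"},
    "TC-2": {"REQ-2"},
}
requirements = {"REQ-1", "REQ-2", "REQ-3"}

covered = set().union(*links.values())
coverage = 100 * len(covered & requirements) / len(requirements)
print(sorted(requirements - covered))   # ['REQ-3'] still needs a test case
print(f"{coverage:.0f}%")               # 67% requirement coverage
```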

In TPT, requirements can be linked directly to test cases as well as to assesslets containing predicted results.

Model-in-the-Loop (MiL)

Model-in-the-Loop testing (MiL), also known as model testing, is the early testing of modules or module integrations in a model-based development environment such as MATLAB/Simulink from MathWorks, TargetLink from dSPACE, or ASCET from ETAS. The system under test (SUT) communicates with its environment via signals.

Basically, a model of the system under test (SUT) is simulated and tested. The term “in-the-loop” refers to the closed control loop under test, although MiL is also used for open control loops. Depending on the system under test, Model-in-the-Loop testing is performed with or without a dynamic model of the controlled system. In cases where the system does not require an environment simulation, all input signals are generated synthetically by the test environment.

Model-in-the-Loop is the first test stage in model-based development.  In addition to the requirements-based test for models, coverage criteria such as branch coverage, decision coverage, condition coverage, and MC/DC play a major role as test exit criteria, similar to software testing.

In contrast to classical software, where program code is tested in the loop (so-called Software-in-the-Loop (SiL) test), the MiL test rather tests the block diagrams of the software.

In many cases, the test level following MiL is Software-in-the-Loop (SiL), since software is often generated directly from the models by means of automatic code generation, which in turn must be tested. Typical code generators for Simulink models are Simulink Coder and Embedded Coder from MathWorks or TargetLink from dSPACE. For MiL testing, the comparison (back-to-back test or regression test) plays a major role later on, because the test results of the model test (MiL) should be identical or at least very similar to the results of the software test (SiL). Differences may arise, for example, due to fixed-point scaling or the use of other data types.

TPT offers MiL testing with automatic test environment creation for Simulink, TargetLink, and ASCET. Functional test cases are usually created manually in TPT, whereas coverage test cases can be generated automatically with TASMO. Reactive closed-loop test cases are possible.

Modified Condition / Decision Coverage (MC/DC)

MC/DC resembles condition coverage. In MC/DC, each individual atomic condition is tested once true and once false without changing the truth values of the other atomic conditions, and this change alone must alter the outcome of the overall decision.

Safety standards such as ISO 26262 rate MC/DC as “highly recommended” for ASIL-D, but MC/DC is also recommended for ASIL A, B, and C.
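The MC/DC criterion can be sketched as a brute-force check over a set of test vectors, here for the decision A && B. This is a simplified illustration, not a coverage tool.

```python
from itertools import product

def decision(A, B):
    return A and B

def achieves_mcdc(tests):
    """Check MC/DC: for each condition there must be a pair of tests that
    differ only in that condition and produce different decision outcomes."""
    for i in range(2):  # index of condition A (0) or B (1)
        ok = any(
            t1[i] != t2[i]
            and t1[1 - i] == t2[1 - i]
            and decision(*t1) != decision(*t2)
            for t1, t2 in product(tests, tests)
        )
        if not ok:
            return False
    return True

print(achieves_mcdc([(True, True), (True, False), (False, True)]))  # True
print(achieves_mcdc([(True, False), (False, True)]))                # False
```

Note that the second test set reaches full condition coverage but not MC/DC: no pair of its vectors differs in only one condition.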


Module

Software is often composed of building blocks, so-called modules, which in themselves represent a certain functionality. The software architecture defines how the individual modules together form the overall functionality.

Module testing

Test phase in which the functionality of a module (exported functions and their interaction) is tested.

Path coverage

Test criterion that considers the number of program paths passed during a test in comparison to the total number of possible paths (in percent).

A path coverage of 100% is usually not achievable, since the number of possible paths in loops is usually very large. For this reason, there are more specific path coverage criteria that limit the number of loop iterations considered.

Note: Full path coverage (100%) subsumes full branch coverage and results in a higher coverage of the test object than branch coverage.

Predicted result

This is the expected behavior of a test object. For example, a predicted result can be defined for an output parameter. Test oracles are often used to determine the predicted result.

In TPT, predicted results are largely defined in evaluation rules (assesslets). The evaluation rules are automatically applied to the test data.

Processor-in-the-Loop (PiL)

Processor-in-the-Loop (PiL) refers to the testing and validation of embedded software on the processor that will later be used in the ECU. The algorithms and functions are usually developed on a PC within a development environment, either directly in C or C++ or model-based, for example with Simulink, TargetLink, or ASCET. The resulting C code must be compiled with a special “target” compiler for the processor that is later used in the ECU in the vehicle. PiL tests are performed to check whether the compiled code also works on the target processor. The control algorithms for the PiL test are usually executed on a so-called evaluation board. Sometimes PiL tests are also executed on the real ECU. Both variants use the real processor of the controller and not the PC, as in the SiL test. Using the target processor has the advantage that compiler errors can be detected.

“In-the-loop” in PiL tests means that the software under test is embedded in the real hardware and its environment is simulated. Environment models, as used in MiL, SiL, and HiL, are rather unusual in PiL testing, since embedding such models on the target processor is complex or impossible. When environment models are combined with the real processor, one usually speaks of Hardware-in-the-Loop (HiL) testing.

TPT enables PiL testing via so-called debuggers. TPT remotely controls a debugger such as Lauterbach TRACE32 or PLS UDE to execute the compiled code directly on the processor.

Real-time system

A real-time system is a system that has to react within a finite and predictable time span. Real-time systems therefore not only have logical-algorithmic requirements, but must also meet them within a certain time. Real-time capability does not primarily refer to the speed of a system, but to its temporal determinism and compliance with time limits.

The test tool TPT is real-time capable in the test execution with the TPT-VM. This means that TPT predictably meets time constraints on real-time platforms.

Regression testing

Repeated tests after changing the test object. For efficiency reasons, you can try to retest only those parts that are affected by the change.

Requirement-based test

In requirement-based testing, the requirements for a system under test are analyzed and test cases are derived from them. The tester analyses the requirements and considers how each individual requirement on the system can be tested.

Requirement-based test cases are often linked to requirements. The aim of linking is to track which requirements are covered by test cases or to ensure that each requirement is covered by at least one test case. A requirement is considered fulfilled when all test cases assigned to it have been successful. 

In TPT, requirements can be imported and linked to test cases. This makes immediately visible which requirements are being tested and for which requirements test cases still have to be created.

Requirement coverage

The requirement coverage shows how many requirements are covered in relation to existing requirements. The requirement coverage is a test exit criterion.

Software-in-the-Loop (SiL)

Software-in-the-Loop testing means testing embedded software, algorithms, or entire control loops with or without an environment model on a PC, that is, without ECU hardware. The source code for the embedded system is compiled for execution on the PC and then tested on the PC. The term “in-the-loop” means that parts of the software environment, that is, the controlled system or hardware, are simulated. The simulation of a closed control loop is not absolutely necessary, since some systems under test, especially in module testing, do not require closed control loops.

For module or unit testing, Software-in-the-Loop testing is the first test stage for hand-coded software. In so-called model-based development, Software-in-the-Loop testing comes only second, that is, after Model-in-the-Loop testing.

Software-in-the-Loop testing is used for module, unit, and integration testing. The software integration test uses more complex SiL environments and co-simulation environments as well as hardware virtualization.

The code coverage with its coverage criteria (for example decision coverage, condition coverage and MC/DC) plays a major role as a test exit criterion in Software-in-the-Loop testing to decide when enough testing is done. For code coverage, the test case generation feature TASMO can be used in TPT, which automatically generates the test cases required for maximum code coverage. Unlike the structural test cases that are relevant for code coverage, the functional test cases are usually created or modeled manually.

For Software-in-the-Loop testing, the source code must be compiled in advance. Common software compilers such as Microsoft Visual Studio or MinGW are often used. If special functions are used in the software that are not supported by the compiler or the PC processor, these functions must be “stubbed”, that is, replaced by dummy functions.

TPT offers several solutions for Software-in-the-Loop testing:

  • MATLAB/Simulink SiL: In the case of automated code generation from Simulink models using Simulink Coder, Embedded Coder, or TargetLink, TPT automatically puts the Simulink model into SiL mode and simulates it in Simulink for test purposes.
  • ASCET: SiL testing of implementation models created with ASCET is supported by TPT.
  • For handwritten C/C++ code, TPT offers the automatic creation of the test environment directly (C platform) or a co-simulation environment (FUSION). Both test environments are included in the standard scope of TPT.
  • AUTOSAR software can be tested directly, similar to C code, or via FUSION.
  • Other SiL environments such as VeOS from dSPACE (via ASAM XiL API), Silver from Synopsys, or RTLab are supported by TPT.

Statement coverage

Statement coverage is a test criterion that relates the number of program statements executed in a test to the total number of statements contained in the test object (as a percentage).

Statement coverage = number of statements visited (executed) / total number of statements

Static test procedure

Static testing is testing without executing the code or program. This means that the uncompiled source code is reviewed or subjected to static analysis.

Structural test

In the structural test, the test cases are derived from the structure of the test object, for example to perform control-flow-oriented tests (such as branch coverage tests) or data-flow-oriented tests. The structural test is therefore also a white-box test.


Stub

A stub is a placeholder for untested or unavailable modules. It is used in so-called top-down tests. Stubs do not fully realize the functionality of the replaced module, but show a simplified I/O behavior.


SUT

Abbreviation for “system under test”; see test object.

System test

The system test tests the entire system under consideration.

Test case

In a test case, certain input situations are defined. This means that specific input values from the input data range of the test object are set. The tester selects the input values in such a way that they are representatives of the whole input data set, in order to uncover errors of the entire set.

Note: The Institute of Electrical and Electronics Engineers (IEEE) defines “test case” differently. There, a test case also includes the test data and the predicted result.

Test case determination

Test activity in which test cases are defined that are used to check the test objects. Test case determination is the most important test activity, since it has a decisive influence on the quality of the test.

Test criterion

An observable and generally also measurable, objective evaluation criterion for the quality of a test. Examples of test criteria are the testability of a test object, but also the test duration or the test budget.

Test documentation

Test documentation includes the test plan, the test specification, the documentation of the test execution, and the documentation of the test evaluation. The test documentation serves the purpose of traceability and is mandatory for the development of safety-relevant embedded systems.

Test driver

I/O interface between test object and tester that can control the test object in isolation.

Test execution

The test execution is not limited to the mere execution of test cases, but also includes as preparatory activities the creation of the test rig and the provision of test data. The results are recorded. The comparison of the results with the target values is only made in the course of the test evaluation.

Test exit criterion

A complete test is very complex. For this reason, the test strategy must specify when a system has been tested sufficiently. The test exit criterion describes the point in time or the state at which a test is considered successfully completed. Test criteria are generally used to describe the test exit criterion.

Test exit criteria describe when a test is ended. Various criteria are used, for example

  • assessed residual risks
  • coverage criteria (branch coverage, path coverage, etc.)
  • test duration
  • test budget
  • quality measures such as the number of open errors or how many new errors were discovered within a specific period of time

Test frame

A test frame is the environment required for testing a test object. If it is necessary to simulate callers of the test object, a test driver must be generated to perform this task. The possibly necessary simulation of components that are called by the test object is realized by means of stubs.
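A minimal sketch of a test frame, with a hypothetical test driver and stub:

```python
# Hypothetical test frame: the test driver calls the test object in
# isolation, and a stub replaces a component the object normally calls.
def read_sensor_stub():
    """Stub: simplified I/O behavior instead of the real sensor module."""
    return 42

def test_object(read_sensor=read_sensor_stub):
    # The unit under test normally calls the real sensor driver; the test
    # frame injects the stub instead.
    value = read_sensor()
    return min(value, 100)   # clamp to the valid range

# Test driver: exercises the test object and records the result.
result = test_object()
print(result)  # 42
```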


Testing

During testing, the functions of source code or executable software are checked in a specified environment. The source code or software to be tested is called the test object.

The results obtained are evaluated during testing. Any (random) execution of the test object under specified conditions for the purpose of verifying the observed results of the test object with regard to certain desired characteristics (predicted result, test oracle) is called a test.

The purpose of testing is to increase quality and to strengthen confidence in the function under test. Testing is always a sampling procedure: it can reveal errors, but it cannot prove their absence. Test strategies are used to achieve good test coverage with as little effort as possible.

The term “testing” is sometimes also understood as “checking” and therefore also includes static test procedures.

Test management

Test management is the coordination of all activities of the entire test process. Test activities include

  • Test planning and test control
  • Test analysis
  • Test design
  • Test implementation
  • Test execution
  • Evaluation of the test exit criteria
  • Test reporting
  • Completion of the test activities (test closure)

Test method, test procedure

A distinction is made between static test procedures, in which the source code is not executed, and dynamic test procedures, in which the source code is executed. The selected test method thus influences the test case determination.

Test object, system under test (SUT)

A test object is a unit under test that is distinguished from its environment by a certain functionality. Due to the structured approach in the development of software-based systems, in which different functions are implemented in separate structures, test objects are often – depending on the test phase – a single function (unit), a function group, a module, a subsystem or the entire system. The term “system under test” is used synonymously with test object.

Test oracle

To decide whether the test object behaves correctly or not, it is necessary to define in advance what is considered correct and what incorrect behavior. This requires special sources of information such as manuals, an existing system, or even a person with special knowledge. This information source is called a test oracle.
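One common form of oracle is a trusted reference implementation that independently computes the expected result. A hypothetical sketch (both functions are illustrative only):

```python
# Hypothetical sketch: a reference implementation serves as the test oracle.
def saturate(x, lo, hi):
    """Test object: clamps x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def oracle(x, lo, hi):
    """Test oracle: an independent source of the expected result."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Each observed result is checked against the oracle's predicted result.
for x in [-5, 0, 3, 99]:
    assert saturate(x, 0, 10) == oracle(x, 0, 10)
```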

Test organization

The test organization includes all activities related to the management of test objects and the associated data. This includes in particular the storage of test cases, test data, target values, actual values and technical parameters. The test organization serves the purpose of making tests reproducible at any time.

Test phase

Each software development phase is assigned a test phase in which the corresponding test methods are applied. According to the development phases, function test, module test, integration test, system test and acceptance test are applied.

Test plan

Test document describing the test exit criteria, test strategy, test methods, resources, responsibilities, effort and scheduling of the intended tests. The test objects are also specified.

Test process

The test process consists of the entirety of all test phases in the software life cycle.

Test rig

Unit necessary for the test run, consisting of test frame and test object.

Test run

Test run means the execution of a test object. Test data is generated during the execution. This data can be recorded and then evaluated.

Test specification, test description

Test document that contains the specific test strategy and details of the test method. Specifically, it must be explained how the test exit criteria are to be achieved. Furthermore, the test specification includes the description of the test preparation, the test cases, the acceptance and rejection criteria, and the regression test guidelines. In a narrower sense the term test specification is also used as a synonym for test case specification.

Test strategy

The test strategy is the result of selecting and defining the interplay of individual test methods (for example equivalence class test and branch coverage). The decision on how subcomponents are integrated into the overall system (top-down, bottom-up, etc.) can influence the test strategy.

Tool Confidence Level (TCL)

Within the framework of a tool qualification, the confidence level (“Tool Confidence Level”, TCL) is determined on the basis of the estimates for the Tool Impact (TI) and the Tool Error Detection (TD).

If Tool Confidence Level TCL1 has been determined, no further qualification measures are required. If TCL2 or TCL3 has been determined, a tool qualification must be performed.
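The determination can be sketched as a small lookup, following our reading of the mapping in ISO 26262 Part 8 (TI1 always yields TCL1; with TI2, the TCL follows the error-detection class):

```python
# Sketch of the TCL determination from TI and TD (reading of ISO 26262 Part 8).
def tool_confidence_level(ti, td):
    """Return the TCL (1-3) for a given Tool Impact (1-2) and TD class (1-3)."""
    if ti == 1:
        return 1                       # no impact -> TCL1, no qualification needed
    return {1: 1, 2: 2, 3: 3}[td]      # TI2: TD1 -> TCL1, TD2 -> TCL2, TD3 -> TCL3

print(tool_confidence_level(2, 3))  # → 3 (tool qualification required)
```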

Tool Error Detection (TD)

Tool Error Detection is, according to ISO 26262 Part 8, the probability of detecting or avoiding a tool error in a defined application process:

  • TD1 = high probability of detection
  • TD2 = medium probability of detection
  • TD3 = low probability of detection

Tool Impact (TI)

With the Tool Impact (TI) analysis, ISO 26262 Part 8 examines whether a software tool has a direct or indirect influence on the software to be developed. The Tool Impact can take the values 1 or 2: TI2 means that the software tool can have such an influence; with TI1 there is no influence.

For example, a test tool such as TPT cannot itself generate faulty code. Nevertheless, a malfunction of TPT could result in a violation of a safety requirement going undetected. Therefore, for TPT – as for any test tool – a tool impact must be assumed (TI2).

Tool qualification

With a given Tool Confidence Level (TCL) > 1, a tool qualification must be performed for a software tool. According to ISO 26262 Part 8 there are four methods to choose from:

  1. “Increased Confidence from Use”,
  2. “Evaluation of the Development Process”,
  3. “Validation of the Software Tool”, and
  4. “Development in Compliance with a Safety Standard”.

Which of these alternative methods are eligible for qualification is determined by the ASIL classification of the product to be developed. Method 1c, “Validation of the Software Tool”, is, for example, highly recommended for ASIL C and D. The inputs, requirements and outputs (“work products”) of each method are described in detail in the standard.

Top-down test

Procedure for the integration test: one starts with the modules at the highest design level and then step by step adds the modules of the lower levels of the call hierarchy. The top-down approach requires the use of stubs.

TPT Autotester

The Autotester is a TPT feature developed by PikeTec that automatically examines and assesses the test signals emitted by the driver and the vehicle. Notably, while one test case is set as the active test case, other test cases can be run, examined and assessed automatically in parallel during the driving test. These “drive-by” tests can save considerable testing time.

TPT Dashboard

The TPT Dashboard is a TPT feature that is used for automated driving tests. Here, the test case can communicate with both driver and vehicle at the same time via a GUI. Specifically, the communication is made possible by means of control items, text and acoustic messages about the test performance. The TPT Dashboard is not to be confused with the TPT Autotester.


Validation

Validation checks whether the software fulfills its purpose, for example whether it meets the customer’s requirements.


Verification

Verification checks whether the software meets its specified requirements.


White-box testing

White-box testing is a dynamic test procedure. It uses the available information about the internal structure of the test object for control-flow or data-flow oriented tests. Test cases are developed by examining the code of the software.

White-box testing takes the code and its control flow into account, and the coverage of the control flow is measured. Examples of coverage measures are statement coverage, branch coverage, decision/condition coverage and Modified Condition/Decision Coverage (MC/DC). These measures indicate the completeness of the white-box test. TPT supports coverage measurements for models and code, for example with the Simulink Coverage toolbox and CTC++; the coverage results are displayed in the TPT report.
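As a hypothetical illustration of branch coverage, two test cases suffice to exercise both outcomes of the single decision in the function below (function and values are illustrative only):

```python
# Hypothetical example: two test cases achieve full branch coverage of `classify`.
def classify(temp):
    if temp > 100:          # the decision has two branches: true and false
        return "overheat"
    return "ok"

# Branch coverage requires that both outcomes of the decision are executed:
assert classify(120) == "overheat"  # decision evaluates to true
assert classify(80) == "ok"         # decision evaluates to false
```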