A well-considered testing approach that focuses on early testing stages delivers fast test results with short iterations and immediate, daily feedback for development.
Why TPT for this? Because in TPT, test cases are defined independently of technology and execution method, enabling tests to be reused in later testing stages such as SiL or HiL.
Tedious test maintenance is a thing of the past: expected values can be defined separately and independently of the test data, eliminating duplication.
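To illustrate the idea (in plain C rather than TPT's own notation, with invented names), a single assessment rule can be written once and applied to arbitrarily many stimulus vectors, instead of duplicating an expected value in every test case:

    /* Minimal sketch, not TPT syntax: the expected value is computed from
     * the stimulus by a rule defined once, not stored per test case. */
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical unit under test: saturates a demand to [0, 100]. */
    static double saturate_demand(double demand) {
        return demand < 0.0 ? 0.0 : (demand > 100.0 ? 100.0 : demand);
    }

    /* Assessment defined once, independent of the test data. */
    static int assess(double demand, double out) {
        double expected = demand < 0.0 ? 0.0
                        : (demand > 100.0 ? 100.0 : demand);
        return fabs(out - expected) < 1e-9;
    }

    int main(void) {
        const double stimuli[] = { -5.0, 0.0, 42.0, 100.0, 250.0 };
        for (size_t i = 0; i < sizeof stimuli / sizeof *stimuli; ++i) {
            double out = saturate_demand(stimuli[i]);
            printf("demand=%6.1f out=%6.1f %s\n", stimuli[i], out,
                   assess(stimuli[i], out) ? "PASS" : "FAIL");
        }
        return 0;
    }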
During Model-in-the-Loop (MiL) Testing, software models are tested directly in their development environment, such as MATLAB/Simulink from MathWorks.
Supported modeling technologies include MATLAB/Simulink, TargetLink, and ASCET.
TPT integrates seamlessly with MATLAB/Simulink and supports code generated by TargetLink, which can also be tested at the SiL level. The same applies to ASCET.
Once created, test cases can be fully reused across all other testing stages. Back-to-Back Testing with TPT is particularly convenient and immediately identifies discrepancies between the model and its execution on the control unit.
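The core of a back-to-back comparison can be sketched as follows; this is a simplified C illustration with hypothetical signal data, not TPT's actual mechanism:

    /* Hedged sketch of the back-to-back idea: compare a signal recorded
     * at the MiL stage against the same signal from the SiL or target
     * run, sample by sample, within an absolute tolerance. */
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    static size_t back_to_back_diff(const double *mil, const double *sil,
                                    size_t n, double tol) {
        size_t mismatches = 0;
        for (size_t k = 0; k < n; ++k) {
            if (fabs(mil[k] - sil[k]) > tol) {
                printf("sample %zu: mil=%g sil=%g (delta=%g)\n",
                       k, mil[k], sil[k], fabs(mil[k] - sil[k]));
                ++mismatches;
            }
        }
        return mismatches;
    }

    int main(void) {
        const double mil[] = { 0.0, 0.5, 1.0, 1.5 };   /* model run */
        const double sil[] = { 0.0, 0.5, 1.0, 1.502 }; /* code run  */
        size_t bad = back_to_back_diff(mil, sil, 4, 1e-3);
        printf("%zu mismatching sample(s)\n", bad);
        return 0;
    }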
Both generated and manually written code can be tested with TPT, and integration is often fully automatic. All versions of the MinGW and Visual Studio compilers are supported, and tests can be debugged both in TPT and in IDEs. TPT supports SiL unit and software integration tests, automatically stubbing unresolved references. SiL test execution with TPT can take place on the host under Windows and Linux, in a container, and in the cloud.
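As a rough illustration of stubbing (hypothetical names, plain C, not TPT-generated code): an external function that is not linked into the SiL build is replaced by a controllable stub supplied by the test harness:

    #include <stdio.h>

    /* Declared somewhere in the production code, but its implementation
     * (e.g., a hardware driver) is not part of the SiL build. */
    extern int read_sensor_raw(void);

    /* Unit under test. */
    int sensor_celsius(void) {
        return read_sensor_raw() / 10; /* raw value in tenths of a degree */
    }

    /* Stub supplied by the test harness, with a controllable return value. */
    static int stub_raw_value = 0;
    int read_sensor_raw(void) { return stub_raw_value; }

    int main(void) {
        stub_raw_value = 235; /* inject 23.5 degC as raw input */
        printf("sensor_celsius() = %d\n", sensor_celsius()); /* prints 23 */
        return 0;
    }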
Depending on the form of the test object, it can be connected to TPT in various ways:
Presented Source            | Integration with TPT
Source Code                 | via C-Platform
AUTOSAR Software Components | via AUTOSAR-Platform
Library                     | via C-Platform
Object Code                 | via C-Platform
Executable Application      | via EXE-Platform
Even if HEX and ELF files are provided as the source, they can be integrated and tested using Lauterbach Trace32, PLS UDE, or winIDEA. Testing software compiled for the target is referred to as Processor-in-the-Loop Testing.
In cases where software modules require different testing environments, TPT can create a common test framework with a Multi-Technology Platform, enabling compatibility and interaction among them. This allows for co-simulations; for example, with TPT, Restbus simulations can be built at the SiL level.
For realistic simulations, integrating environment simulations such as Silver, CarMaker, and VTD is also possible. TPT automatically instruments the source code for code coverage measurements.
An important test exit criterion for Software-in-the-Loop tests is code coverage. Metrics such as Decision Coverage, Condition Coverage, Statement and Function Coverage, and MC/DC aid in determining when sufficient testing has been performed.
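The following C sketch (an illustration, not tool output) shows the difference for a single decision with two conditions: decision coverage needs only one true and one false outcome, while MC/DC requires showing that each condition independently affects the result:

    #include <stdbool.h>
    #include <stdio.h>

    /* One decision with two conditions, A and B. */
    static bool release_brake(bool engine_on, bool speed_ok) {
        return engine_on && speed_ok;
    }

    int main(void) {
        /* Decision coverage: the decision must evaluate to true and to
         * false at least once, e.g., (1,1) and (0,0) suffice.
         * MC/DC additionally requires each condition to independently
         * flip the outcome: (1,1) vs (0,1) isolates A, and (1,1) vs (1,0)
         * isolates B -- three tests: (1,1), (0,1), (1,0). */
        printf("%d %d %d\n",
               release_brake(true, true),   /* true                        */
               release_brake(false, true),  /* false: A flipped the result */
               release_brake(true, false)); /* false: B flipped the result */
        return 0;
    }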
To increase code coverage, or simply to measure it for the test cases you have created, you can also use our automatic test case generation tool TASMO, a popular feature of TPT.
During Processor-in-the-Loop (PiL) Testing, the embedded software is tested directly on the processor that will later be used in the control unit. The goal is to verify the compatibility of hardware and software components early on, such as drivers or actuator control.
With TPT, testing can be done either physically or even virtually in a simulation environment. Supported platforms include the Universal Debug Engine (UDE) from PLS, Trace32 from Lauterbach, and winIDEA from iSYSTEM.
Even if HEX and ELF files are provided as the source, they can be integrated using the Lauterbach, PLS UDE, and winIDEA platforms.
For execution in the simulation environment, you will need our Trace32 Support Package and a Trace32 license from Lauterbach. In Trace32, you can choose to run against the simulator instead of a physical board, which even saves the purchase, setup, and maintenance of hardware.
Algorithms and functions for processors in embedded systems are typically developed on a PC within a development environment, either directly in C or C++, or model-based with Simulink, TargetLink, ASCET, or ASCET-DEVELOPER. The resulting C/C++ code must be compiled with a specific "target" compiler for the processor that will be used in the control unit of the vehicle.
To verify whether the compiled code also works on the target processor, PiL tests are conducted. The control algorithms for PiL testing are usually executed on an evaluation board, sometimes also on the actual control unit. In both variants, the real processor used in the control unit is employed, not the PC as in MiL or SiL testing. Using the target processor has the advantage that compiler errors can be detected.
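A classic example of such a target-specific effect (a hedged illustration, assuming a 16-bit int on the target and a 32-bit int on the host):

    #include <stdio.h>

    int main(void) {
        int a = 30000;
        int b = 10000;
        /* On a 32-bit-int host this prints 40000; on a 16-bit-int target
         * the addition overflows int, which is undefined behavior and
         * typically wraps to -25536. Only a run on the real processor
         * (PiL) exposes this. */
        printf("a + b = %d\n", a + b);
        return 0;
    }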
"In-the-Loop" in PiL tests means that the controller is embedded in real hardware and simulates the environment of the software being tested. Environment models like , , and are uncommon in PiL tests because embedding such models on the target processor is complex or impossible. When environment models converge with the processor, it is usually referred to as Hardware-in-the-Loop Testing (HiL).
At this level, integration and system tests are often conducted.
Hardware-in-the-Loop (HiL) Testing involves connecting the finished control unit electrically to a simulation environment for testing.
HiL Tests on Control PCs
For this scenario, TPT test cases are modeled and executed directly on the control PC. While the tests are running, TPT communicates with the real-time simulator, allowing signals and parameters to be continuously altered and observed. Results can be recorded in real time.
Communication with application tools such as INCA or CANape, with fault simulators, or directly with the CAN bus is also possible and easy to set up. The TPT Dashboard also enables manual, interactive tests with TPT on the HiL.
PC-controlled HiL tests are supported by:
Real-time HiL Tests
Tests can be performed with TPT on HiL systems in real time, with cycle times of less than 100 µs. In this setup, tests run directly on the real-time system.
Real-time tests are supported by:
Vehicle-in-the-Loop (ViL) testing involves testing the components, control units, actuators, and sensors in the final target environment, ultimately representing vehicle testing.
Typically, vehicles are tested under various environmental conditions in cold, warm, and hot regions. Even today, these tests are mainly conducted manually, and manual tests scale only with the number of trained drivers and available vehicles.
TPT's Autotester provides significant added value by offering a structured approach to testing vehicles. With the Autotester, you can describe manual driving maneuvers, guide a driver step-by-step through a test both audibly and visually, verify the correctness of execution, and fully automate all tests. The best part is that if the driver detects unusual behavior during the drive, they can make voice recordings with the press of a button, and the trace will be labeled accordingly at the time of recording. The driving data for this situation is then trimmed for simplified analysis on the computer.
Three testing approaches can be distinguished: Black-Box, Grey-Box, and White-Box testing. The differentiating factor is the information available to the tester for test creation and execution. For the embedded domain, the following characteristics apply.
In Black-Box Testing, the tester receives the test object as a black box along with a description of how it should behave, usually in the form of requirements. There is no information about the internal structure. The tester creates test cases, executes them, and compares the responses of the black box with the expected values derived from the specification.
Test case creation in Grey-Box testing is essentially like Black-Box testing, but the tester has basic knowledge of the internal structure, for example, through descriptions of internal states the system can assume. However, the tester does not have direct access to the implementation or the code.
In White-Box testing, the tester has all the information, including insight into the codebase. In practice, this approach often leads to lower product quality than the Black-Box approach, and from a quality assurance perspective it should never be used as a reference for deriving expected values. There are meaningful exceptions, however, such as using White-Box testing to exercise defensive programming constructs, like null pointer checks repeated in sequence, which cannot be stimulated with Black-Box approaches.
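A hedged C illustration of such a construct (hypothetical names): a defensive null check that no Black-Box test can reach, because every call through the public API already guarantees a valid pointer, while a White-Box unit test can stimulate it directly:

    #include <stddef.h>
    #include <stdio.h>

    static int buffer_sum(const int *buf, size_t n) {
        if (buf == NULL) {  /* defensive check, unreachable via the API */
            return 0;
        }
        int sum = 0;
        for (size_t i = 0; i < n; ++i) sum += buf[i];
        return sum;
    }

    int main(void) {
        /* White-box test: stimulate the defensive branch directly. */
        printf("NULL case: %d\n", buffer_sum(NULL, 3)); /* expect 0 */
        const int data[] = { 1, 2, 3 };
        printf("normal case: %d\n", buffer_sum(data, 3)); /* expect 6 */
        return 0;
    }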
Whether coverage reports available to the tester after execution count as White-Box or Black-Box information remains somewhat ambiguous. What is clear, however, is that coverage reports can help identify gaps in testing compared to the requirements and/or gaps in the requirements compared to the code.