Today we put all our cards on the table
But first, a quick word about us. Piketec has been around for 15 years. Since the beginning we have been developing our test tool TPT, and we also offer test services: on behalf of our customers, we test software-based automotive products such as driver assistance functions, drive components, and control software for charging and battery systems.
What few people know: we offer our testing services mainly so we can continuously improve our test tool TPT, in particular its UI and its handling for our users.
The constructive user feedback we get from our customers reinforces our belief that this strategy is working.
Before we get to our hacks, let's take a look at the challenges in testing from the perspective of a service provider working for several automotive OEMs and suppliers.
We are fans of requirements-based testing. By a requirement, we mean an atomic description of a component, describing a single aspect if possible. Ideally (for testing), these requirements remain stable over the entire development time.
At the same time, requirements in automotive projects rarely reach this state of stability. Requirements are changed frequently, and that is good and right: the product is constantly improved and extended through these changes.
Requirements that were written at an early stage of development turn out to be insufficient over time and are adapted.
In testing, however, this also leads to some challenges. Tests already implemented against requirements have to be checked and adapted after a change. In this case we speak of maintenance. A change in the requirement or the code therefore always automatically generates additional effort in testing.
Testers find maintenance activities tedious and boring; they would rather work on new functionalities and test them. In our testing services we have developed several tactics that reduce maintenance effort to a minimum, and that help testers have more fun testing.
Which tactics we implement specifically is what we cover here.
Here is a brief overview of the topics we will touch upon:
1. Separation of test data for stimulation and definition of expected behavior
2. Development of tests in test models
3. Bidirectional linking of tests with requirements
4. Using methods to design robust tests
5. Automation
Now let's get into the first level of detail. Let's go.
The separation of test data for stimulating a test object from the definition of the expected behavior is one of the basic principles for saving effort in all our testing services.
On the one hand, this separation of definitions promotes clarity and, on the other hand, it saves significant effort in the test case creation process. Expected values of test items are defined in so-called Assesslets and are thus reusable for all test cases and very easy to maintain.
Sounds good already? It gets even better.
The number of necessary test cases for a System under Test (SUT) can be reduced: instead of laboriously analyzing and modeling one aspect of one requirement per test case, signal waveforms or ramps in a single test case can cover multiple requirements simultaneously.
The separation approach also allows immediate use of generated test cases, for example to increase test depth. This is possible without additional effort, since the missing information and expectations have already been defined in the Assesslet.
The separation saves considerable costs in the creation of test cases compared to conventional approaches. With the single-source-of-truth approach, test maintenance efforts can be further reduced. This is because test data can remain untouched in most cases and only the Assesslets need to be adapted.
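To make the idea concrete, here is a minimal sketch of the separation principle in plain Python. The names and structures are purely illustrative assumptions, not TPT's actual Assesslet format or API; the point is simply that stimulation data and expected behavior live in separate, independently reusable pieces.

    # Illustrative sketch of the separation idea (hypothetical Python, not TPT's API):
    # the stimulation only drives the test object, while the expected behavior lives
    # in a reusable Assesslet-style check that works for any stimulation.

    def stimulate_ramp(sut, start=0.0, stop=100.0, step=5.0):
        """Stimulation only: drive the input through a ramp and record the outputs."""
        trace = []
        value = start
        while value <= stop:
            trace.append((value, sut(value)))
            value += step
        return trace

    def assess_limits(trace, lower=0.0, upper=80.0):
        """Assesslet-style check: expected behavior, reusable for every test case."""
        return all(lower <= output <= upper for _, output in trace)

    # Any stimulation (hand-written, generated, ramps, step functions) can be combined
    # with the same assessment without touching the definition of the expectation.
    sut = lambda x: min(x, 80.0)          # stand-in for a real system under test
    assert assess_limits(stimulate_ramp(sut))

Because the expectation is defined once, generated or additional test cases can reuse it unchanged, which is exactly where the maintenance savings come from.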
Instead of writing test cases in the form of procedures, they can also be described through models.
What's the benefit?
With test models, you can separate repeatedly required test data into reusable references. This single-source-of-truth approach saves valuable effort right from the initial creation, and the savings grow the more frequently things change. Reviewing such test models is also easier, because the reusable parts carry meaningful names and a clear structure.
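As a rough illustration of such reusable references (hypothetical Python structures, not TPT's modelling notation), a shared step sequence is defined once and referenced by many test cases, so a change in that single source of truth updates every test that uses it:

    # Reusable references: defined once, referenced everywhere (illustrative only).
    STARTUP  = ["power_on", "wait_for_init", "clear_error_memory"]
    SHUTDOWN = ["power_off", "check_error_memory_empty"]

    test_cases = {
        "nominal_charging":  STARTUP + ["set_current(16)", "expect_state('charging')"] + SHUTDOWN,
        "overcurrent_fault": STARTUP + ["set_current(64)", "expect_state('fault')"]    + SHUTDOWN,
    }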
Another pretty cool feature is the bidirectional linking of tests and requirements. With good reason: bidirectional traceability is required by Automotive SPICE for all test activities.
With this traceability, the report of each test run clearly shows all requirements of a SUT and all test cases, including their results.
If tests are failing, the corresponding requirement can be easily found. For each requirement, it is always clear which tests have already been created.
And if requirements change during the lifecycle, the test cases linked to a changed requirement are highlighted. This saves time-consuming checks of all test artifacts when analyzing and implementing the change in testing.
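The mechanism behind this is easy to picture. The following is a minimal sketch in plain Python with made-up requirement and test names, not TPT's actual data model: each test case records the requirements it covers, and the reverse direction of the link is derived from that.

    # Bidirectional requirement/test traceability, sketched with hypothetical data.
    from collections import defaultdict

    test_to_reqs = {
        "tc_brake_ramp":   ["REQ-12", "REQ-13"],
        "tc_charge_limit": ["REQ-27"],
    }

    # Derive the reverse direction: which tests cover a given requirement.
    req_to_tests = defaultdict(list)
    for test, reqs in test_to_reqs.items():
        for req in reqs:
            req_to_tests[req].append(test)

    # If a requirement changes, the linked tests can be flagged for review.
    changed = {"REQ-13"}
    tests_to_review = {t for r in changed for t in req_to_tests[r]}
    print(tests_to_review)   # {'tc_brake_ramp'}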
Some of our TPT Features increase the robustness of tests against changes and environment variants.
One example of a robustness feature is the reactive testing approach. In reactive testing, actions can be defined in the test design that are only executed when a SUT has assumed a defined state; this can also be referred to as event-based testing.
A short example to illustrate.
You, as a tester, want to test the ABS of a vehicle. To do this, you accelerate the vehicle to 30 km/h and then perform an emergency braking maneuver. When automating this test with our reactive testing approach, you can abstract away the relevant environmental parameters in the test design, such as vehicle mass, the coefficient of friction of the road surface, and other influences. When the vehicle reaches its target speed of 30 km/h, TPT detects this and initiates full braking.
The reactive testing approach can be applied to all types of technologies, test stages and test objects.
The advantage is that once tests are written, they can be reused for other variants without having to adapt the tests. This robustness in test design saves a lot of effort for test creation and maintenance.
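A minimal sketch of the event-based idea, written as plain Python with hypothetical helper functions rather than TPT's test syntax: the braking action is triggered by the observed state of the SUT, not by a fixed point in time, so the same test works regardless of vehicle mass, road friction, or other environment parameters.

    def reactive_abs_test(simulate_step, read_speed, apply_full_brake,
                          target_kmh=30.0, timeout_s=60.0, dt=0.01):
        """Accelerate until the target speed is observed, then trigger emergency braking."""
        elapsed = 0.0
        while read_speed() < target_kmh:      # wait until the SUT has assumed the defined state
            simulate_step(dt)                 # advance the simulated environment
            elapsed += dt
            if elapsed > timeout_s:
                raise AssertionError("target speed never reached")
        apply_full_brake()                    # the action is triggered by the event,
                                              # not by a hard-coded time offset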
Many tasks in testing are recurring activities, such as updating the test framework after a software change, updating requirements on a daily basis, and feeding test results back into an application lifecycle management (ALM) tool. These activities have to be performed manually by a tester, even in an otherwise automated test setup.
In TPT we have created several possibilities to automate such conventionally manual activities. TPT has an API that allows users to automate parts of their work with their own automation scripts; 85% of TPT's GUI features can be automated by users.
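To give a feel for what such an automation script can take over, here is a small sketch with hypothetical helper objects (not the actual TPT API): one scripted pass replaces the daily manual routine of updating requirements, running the linked tests, and feeding the results back to the ALM tool.

    def nightly_sync(requirements_source, test_runner, alm_client):
        """One automated pass over the usual manual routine (illustrative helpers only)."""
        requirements = requirements_source.fetch_latest()      # update requirements
        results = test_runner.run_linked_tests(requirements)   # execute the linked tests
        alm_client.upload_results(results)                     # feed results back into the ALM tool
        return results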
For us, testing is not just testing. We want to find bugs as easily, quickly, and efficiently as possible. From our point of view, that works much better if you have fun and enjoy your work.
That's why we built TPT and have been using it in numerous projects for more than 15 years. We are convinced that we have created one of the best test tools, with numerous innovative functions and mechanisms. We are far from finished with the development of TPT. So we hope that these insights into how we work have made you much more curious about TPT.
Start with the best test automation in the automotive sector today.