Implementation Case 1

We investigate how test execution time can be reduced to a minimum by using multiple parallel computing units. In this first scenario, the user initiates the test run from the local computer.

Description of the Use Case

The objective: a tester should be able to start a test execution on multiple parallel computing units at the push of a button. When the test execution is complete, a single report summarizes all test executions, measurements, and results as if the execution had occurred on a single computer.

To be able to implement this, the computer must be able to do the following:

  1. Connection from the computer to the cloud via the internet.
  2. Setting up instances.
  3. Uploading files from the computer to instances (such as models, test scripts, etc.).
  4. Initiating test execution with selected test cases per instance.
  5. Downloading files from instances to the computer (such as test reports, etc.).
  6. Shutting down instances.

Implementation Concept

The use of multiple parallel computing units is achieved through cloud computing, employing a Cloud-Native approach. In our case study, we opted for the globally leading Cloud provider Amazon Web Services (AWS), which held a market share of 32 percent at the end of 2022 (source). According to research, it is also widely utilized by many automotive OEMs.

The computing units chosen for test execution are moderately sized, comparable in CPU and RAM to the machines commonly used in development. Linux was selected as the operating system for cost reasons. The tools used are integrated into Amazon Machine Images (AMIs).

For upload and download, we decided to use a straightforward online cloud storage service (Amazon S3) provided by AWS. This choice facilitates easy access for data exchange during uploads and downloads.
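
As an illustration, such a data exchange via S3 can be scripted with boto3. The following is only a minimal sketch; the bucket name and file names are placeholders, not the ones from our project.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "tpt-cloud-testing"  # placeholder bucket name

# Upload the model and the test project before the run
s3.upload_file("model.slx", BUCKET, "input/model.slx")
s3.upload_file("tests.prj", BUCKET, "input/tests.prj")

# Download a generated report after the run
s3.download_file(BUCKET, "results/report_instance_1.html", "report_instance_1.html")
```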

Implementation Steps

1. Setup of the test execution environment

Programs like TPT and Matlab/Simulink require an operating system like Windows or Linux to be executable. Such an operating system is offered by cloud providers. At AWS, it is called Amazon Machine Image (AMI). An AMI is like the operating system of a virtual machine; it is pre-configured, and further installations or customizations are possible.

AWS provides some standard AMIs, which usually only contain the operating system. By adding the desired software, tools, and applications, individual AMIs can be derived from these standard AMIs. Once an AMI is defined, this configuration can be used one-to-one on multiple virtual machines.

In our first use case, we opted for a Linux operating system, as all the programs we used are also executable under Linux. Because licenses are required for the use of TPT and Matlab/Simulink, we hosted license servers outside of the cloud. The license server address can be specified directly in the AMI through a configuration file.
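
Deriving a custom AMI from an already configured instance can also be automated with boto3. The sketch below uses placeholder identifiers and does not reflect our exact setup.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a custom AMI from an instance on which TPT, Matlab/Simulink and the
# license server configuration have already been installed (placeholder instance id)
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="tpt-test-execution-ami",
    Description="Linux AMI with TPT and Matlab/Simulink preinstalled",
)
print("New AMI:", response["ImageId"])
```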

2. Instantiation of the test execution environment

An AMI is a software configuration and cannot run without a computing unit. To bring it to life and use it, the AMI must be “instantiated” on a computing unit, and one computing unit is needed per instance.

How do you choose the appropriate computing unit?
The choice is individual and depends primarily on the function of the AMI and its expected utilization. There is a wide range of computing units to choose from, differing in the number of processors, random access memory (RAM), instance storage, network bandwidth, and many other factors.

The cost model is based on instance runtime, regardless of whether the available resources are actually utilized. For our first use case with many instances, it is therefore advisable not to over-provision.

Specifically, our choice fell on EC2 instances of type t2.medium. A base AMI must be defined for the EC2 instance to be created. Setup, shutdown, and communication with the EC2 instances are handled through SSM (AWS Systems Manager Session Manager), an interactive shell and command-line interface for EC2 instance management.
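
A minimal sketch of how such t2.medium instances could be launched from the prepared AMI with boto3; the image id, instance profile, and tag value are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch several t2.medium instances from the prepared AMI
# (image id, instance profile, and tag value are placeholders)
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.medium",
    MinCount=3,
    MaxCount=3,
    IamInstanceProfile={"Name": "tpt-instance-profile"},
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "tpt-test-execution"}],
    }],
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# Wait until the instances are up before sending commands to them
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
```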


3. Configuration for establishing communication between local PC and AWS

In order to start testing, additional settings are needed to allow secure and seamless communication between the local computer and the cloud.

Communication is configured, among other things, through Identity and Access Management (IAM), Virtual Private Clouds (VPCs), and security groups for inbound and outbound traffic. To safeguard the data/IP and the wallet, several security mechanisms are implemented for communication between the local computer and the cloud. It’s important to understand these mechanisms and then configure them appropriately for the specific use case, taking into account all elements of the application.

In our use case 1, there are several communication relationships:

1. Local computer registration in the cloud = Initial login for further communication
2. Communication from the computer to the S3 storage unit for upload and download
3. Communication from the computer to the SSM (e.g. starting three EC2 instances)
4. Communication from the computer to the EC2 instance (e.g. starting tests)
5. Communication from EC2 instance to the S3 storage unit (loading models, storing test reports)
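
For relationship 5, for example, the EC2 instances need permission to access the S3 storage. The following is only a simplified sketch of how such an inline IAM policy could be attached with boto3; the role name, bucket name, and the listed actions are placeholders that depend on the concrete setup.

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy so the EC2 instances can read models from and write reports to
# the S3 bucket (role name and bucket name are placeholders)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::tpt-cloud-testing",
            "arn:aws:s3:::tpt-cloud-testing/*",
        ],
    }],
}
iam.put_role_policy(
    RoleName="tpt-instance-role",
    PolicyName="tpt-s3-exchange",
    PolicyDocument=json.dumps(policy),
)
```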

If the computer successfully communicates with the cloud services, the IT configuration has succeeded, and the actual testing can begin.

Communication between computer and cloud via SSM to EC2 to TPT (via command line)
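
A simplified sketch of this command-line path with boto3: a shell command is sent to an instance via SSM Run Command and awaited. The instance id and the TPT command line shown here are placeholders, not the actual invocation.

```python
import boto3

ssm = boto3.client("ssm")

# Send a shell command to one instance via SSM Run Command
# (instance id and the TPT command line are placeholders)
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["/opt/tpt/tpt --run /work/tests.prj --testset set_1"]},
)
command_id = response["Command"]["CommandId"]

# Wait until the command has finished on the instance
ssm.get_waiter("command_executed").wait(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
```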

Important to know!

The cloud is very secure.
The AWS default settings are very restrictive from a security perspective, so almost every communication path must be configured explicitly in AWS.

PRO TIP
An IT architecture diagram and a flowchart help to keep track of the communication relationships.

Test Execution

1. On local computer

  • Load Matlab/Simulink into TPT
  • Create test cases
  • Save the test project locally

2. Initiate test execution in the cloud from the local computer

  • Decide how many instances to use
  • Start the automation script
  • Wait for the automation script to finish

3. On local computer

  • Review and evaluate test results

TIP for error prevention!

We have created an automation script that performs the following tasks (a simplified sketch follows after the list):
  • Logging into the cloud
  • Uploading and downloading files
  • Setting up and tearing down instances
  • Dividing test cases from a project into multiple instances
  • Initiating test execution
  • Merging test data into a report
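
The following is a simplified sketch of such an automation script using boto3. All identifiers, paths, and the TPT command line are placeholders; the actual script also contains error handling and the tool-specific report merging.

```python
"""Sketch of the automation workflow described above; names, paths and the TPT
command line are placeholders, and error handling is omitted for brevity."""
import boto3

BUCKET = "tpt-cloud-testing"          # placeholder bucket
AMI_ID = "ami-0123456789abcdef0"      # prepared AMI with TPT/Matlab installed
TEST_CASES = ["tc_001", "tc_002", "tc_003", "tc_004", "tc_005", "tc_006"]

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")
s3 = boto3.client("s3")


def split_test_cases(test_cases, n_instances):
    """Divide the test cases of the project round-robin over the instances."""
    return [test_cases[i::n_instances] for i in range(n_instances)]


def run(n_instances=3):
    # 1. Upload model and test project to S3
    s3.upload_file("model.slx", BUCKET, "input/model.slx")
    s3.upload_file("tests.prj", BUCKET, "input/tests.prj")

    # 2. Set up the instances
    resp = ec2.run_instances(ImageId=AMI_ID, InstanceType="t2.medium",
                             MinCount=n_instances, MaxCount=n_instances)
    ids = [i["InstanceId"] for i in resp["Instances"]]
    ec2.get_waiter("instance_running").wait(InstanceIds=ids)

    # 3. Initiate test execution, one slice of test cases per instance
    command_ids = []
    for instance_id, cases in zip(ids, split_test_cases(TEST_CASES, n_instances)):
        cmd = ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": [
                f"aws s3 cp s3://{BUCKET}/input/ /work/ --recursive",
                f"/opt/tpt/tpt --run /work/tests.prj --testcases {','.join(cases)}",
                f"aws s3 cp /work/report.html s3://{BUCKET}/results/{instance_id}.html",
            ]},
        )
        command_ids.append((instance_id, cmd["Command"]["CommandId"]))

    # 4. Wait for all instances to finish, then download the partial reports
    for instance_id, command_id in command_ids:
        ssm.get_waiter("command_executed").wait(CommandId=command_id,
                                                InstanceId=instance_id)
        s3.download_file(BUCKET, f"results/{instance_id}.html", f"{instance_id}.html")

    # 5. Tear down the instances
    ec2.terminate_instances(InstanceIds=ids)

    # 6. Merging the partial reports into a single summary is tool-specific
    #    and therefore only indicated here.


if __name__ == "__main__":
    run()
```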

Summary for Use Case 1

Use case 1 was implemented in full. Setting up the AMI and the other cloud computing resources was easy and fast thanks to AWS's good documentation.

Security activities

The biggest effort went into the security activities. Often we first had to understand which ports needed to be enabled to allow communication between two entities, for example for file uploads from the local machine to the instances and for downloading the reports from the instances back to the local machine.
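
As an illustration, an outbound rule for HTTPS (which the S3 transfers and the SSM agent rely on) could be added to a security group as sketched below; the group id is a placeholder and the rules actually required depend on the specific VPC setup.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow outbound HTTPS from the instances' security group so that the SSM agent
# and the S3 transfers can reach the AWS endpoints (group id is a placeholder)
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS to AWS endpoints"}],
    }],
)
```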

Communication relations

In numerous places, we also had to allow communication relationships between the elements used in AWS and the user. Once we had these chains of action in place and understood them, we were able to implement an automation script in Python. Thanks to the very intuitive and comprehensive boto3 library, this was done quickly and smoothly.

Risks of local scripts

However, scripts executed on a local computer to control the process can also pose risks: if the connection between the local computer and the cloud is lost, instances that are no longer needed may continue to run and incur unnecessary costs.

Additionally, a script needs to be initiated by a user. This can lead to delays, can be inconvenient at times, and may also be error-prone.
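
A simple safeguard is to search for instances that are still tagged for test execution and terminate them. The sketch below assumes the tag used at launch and is only an illustration, not part of our actual tooling.

```python
import boto3

ec2 = boto3.client("ec2")

# Find all running instances that are still tagged for test execution
# and terminate them (the tag is the one used when the instances were launched)
response = ec2.describe_instances(Filters=[
    {"Name": "tag:purpose", "Values": ["tpt-test-execution"]},
    {"Name": "instance-state-name", "Values": ["running"]},
])
stale = [instance["InstanceId"]
         for reservation in response["Reservations"]
         for instance in reservation["Instances"]]
if stale:
    ec2.terminate_instances(InstanceIds=stale)
```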

To minimize such risks associated with local scripts, we expanded our first use case: We set up an additional instance in the cloud to initiate and monitor the test execution process. This is described in the second use case.