
Edge Computing on Car Test Tracks

Prototype tests generate huge amounts of data every day, and engineers need access to it as quickly as possible.

July 08, 2020
Jörg Heizmann

Elaborate vehicle testing

Finland is a popular destination for trips to the Arctic Circle in winter, temperatures in Death Valley reach as high as 50 degrees Celsius in summer, and test vehicles from various car manufacturers race along gravel roads in western Namibia: prototypes and endurance test vehicles complete an average of 150,000 kilometers before a new car model goes into series production. The overall vehicle system is now so complex that not all interactions can be fully simulated. Computers, digital twins, and hardware-in-the-loop simulations shorten the test times of new cars, but test engineers cannot do without extensive test drives under a wide range of climatic conditions, on differing track surfaces, and under varying regional operating conditions. To find out how all the systems work together in a real vehicle, manufacturers rely on test areas around the world.

Data loggers record large amounts of data


Car tests in the automotive industry are expensive and complex. Depending on the company, the test run of a prototype can cost up to EUR 200,000 a year – not including the manufacturing costs for the vehicle itself – and despite computer simulation, engineers still test every vehicle and every component of a car in real operation. Test vehicles have long since become mobile measuring laboratories equipped with all kinds of measurement technology. This includes data loggers, which record all the measurement data produced by sensors, actuators, bus systems, and other measuring devices during a car test. The modern sensor systems of some cars record up to 10,000 channels. In addition to road signs and passers-by, these now include the pupil movements of the drivers themselves: the car is supposed to detect whether the driver is showing symptoms of fatigue and issue a warning. The vehicle data indicate whether anything can still be optimized in the fine-tuning of the electric power steering, shock absorbers, or stability programs.
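
The article does not spell out the logger's data model, but a minimal sketch in Python helps to picture what one of those up to 10,000 channels looks like once it has been recorded. The channel name, bus source, and unit below are illustrative assumptions, not the actual logger schema.

```python
# Illustrative sketch only: a simplified record for one sample of one logged channel.
# Channel name, bus source, and unit are assumptions, not the real logger schema.
from dataclasses import dataclass

@dataclass
class ChannelSample:
    channel: str        # one of up to ~10,000 logged channels
    source: str         # e.g. a sensor, an actuator, or a vehicle bus (CAN, FlexRay, ...)
    timestamp_ns: int   # time base of the data logger
    value: float
    unit: str

# A single hypothetical sample from the chassis bus
sample = ChannelSample(
    channel="steering_wheel_angle",
    source="CAN:chassis",
    timestamp_ns=1_594_166_400_000_000_000,
    value=12.5,
    unit="deg",
)
```

Multiply records like this by thousands of channels and sampling rates in the kilohertz range, and the daily data volumes discussed in the next section become plausible.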

Data transmission takes time and is expensive  

All this data ends up on hard disk storage devices, which “only” have to be read out and fed into the evaluation software at the end of the working day in the vehicle hall. This is where the problems start. Today, a single test vehicle generates several terabytes of test data in one day. For comparison: it would take around 1,400 CDs to store 1 terabyte of data. But that is not the real problem. Sufficient storage capacity can be kept available – including through cloud computing. The real issue is that engineers and data scientists are supposed to analyze this data as quickly as possible with big data software. To do this, the data has to be transported on hard drives from the North Cape, the desert, or the rainforest to the nearest data center – which can take days. Or it has to be transmitted, which takes time even over fixed-line and LTE networks, depending on the data volume, and above all incurs corresponding transmission costs.
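
A rough back-of-the-envelope calculation illustrates why transmission quickly becomes the bottleneck. The CD capacity, daily data volume, and uplink bandwidth below are assumed values for illustration, not figures from the project.

```python
# Back-of-the-envelope sketch; the 700 MB CD capacity, 5 TB daily dump, and
# 50 Mbit/s uplink are illustrative assumptions.
TB = 10**12

cd_capacity_bytes = 700 * 10**6
cds_per_terabyte = TB / cd_capacity_bytes            # roughly 1,400 CDs per terabyte

daily_dump_bytes = 5 * TB                            # "several terabytes" per vehicle per day
uplink_bits_per_second = 50 * 10**6                  # optimistic LTE uplink
transfer_hours = daily_dump_bytes * 8 / uplink_bits_per_second / 3600

print(f"{cds_per_terabyte:,.0f} CDs per TB")
print(f"about {transfer_hours:.0f} hours ({transfer_hours / 24:.1f} days) to upload one day's data")
```

Under these assumptions, uploading a single vehicle's daily dump would take more than a week, which is why neither shipping hard drives nor waiting for the network is an attractive option.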

White paper “Smart Engineering with Big Data and Digital Twins”

How are big data signal processing, automatic location determination, and digital twins changing the face of engineering?

Compressing data down to as little as 10 percent of the original volume


This is where a new big data solution comes into play, which we developed at T-Systems and which combines three measures: first, we compress the data; second, we set up mini data centers at the test tracks; and third, we analyze the data on site.

Compressing this data has not been easy until now. The test data are signal data whose proprietary data formats first had to be decoded and processed. With our big data signal processing software we now transcode this data into a big data format, which reduces the data volume by up to 90 percent. Without loss of information, of course. On the contrary: the data is enriched with metadata, including information such as vehicle data as well as test track and weather conditions. Furthermore, we process the data in parallel by means of optimized big data storage, which accelerates data processing by a factor of up to 40. This makes it easier for engineers to find and analyze the data later and to merge it with other measurement data and metadata.
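
The article does not name the formats or tools involved, but the basic idea can be sketched in a few lines of Python, assuming the decoded signals arrive as a pandas DataFrame, Parquet stands in for the big data format, and the metadata is attached as extra columns. This is a sketch of the principle, not T-Systems' actual software.

```python
# Minimal sketch under the assumptions stated above.
import pandas as pd

def transcode_run(signals: pd.DataFrame, meta: dict, out_path: str) -> None:
    """Write decoded logger signals as a compressed, columnar Parquet file,
    enriched with test-run metadata (vehicle, track, weather)."""
    enriched = signals.assign(**meta)      # broadcast metadata as constant columns
    enriched.to_parquet(                   # columnar layout allows parallel, per-channel reads
        out_path,
        compression="zstd",                # columnar compression shrinks raw signal data sharply
        index=False,
    )

# Hypothetical measurement run from a single test vehicle
run = pd.DataFrame({
    "timestamp_ns": [0, 1_000_000, 2_000_000],
    "steering_wheel_angle_deg": [0.0, 0.4, 1.1],
    "wheel_speed_fl_kmh": [87.2, 87.3, 87.5],
})
transcode_run(
    run,
    {"vehicle_id": "PROTO-042", "track": "Namibia gravel", "weather": "dry, 38 C"},
    "run_0001.parquet",
)
```

Because each channel becomes its own column, an engineer who only needs, say, the wheel-speed signals can later read just those columns instead of the whole file – one reason why columnar storage lends itself to the parallel processing described above.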

Code to data instead of data to code

Having less data is an important first step towards faster data analysis. But if there are several test vehicles on a test track, there is still a lot of data to transmit. That is why we are also building edge big data clusters at the test tracks. These small data centers consist of just a few racks and are connected to a central data center via mobile network. The edge computers are not only used to store the data from the test vehicles; they also handle the data compression and the analysis of the test data. To do this, the code of the business analytics software is transferred to the edge computing systems – in IT, this principle is called “code to data”. The data is then analyzed on site, and only the results are fed back to the development engineers. Even with lower transmission bandwidths, this is not a problem.
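
What “code to data” means in practice can be sketched, for example, as a Spark job that is submitted to the edge cluster at the test track and sends back only a small aggregate. The paths, column names, and metrics below are assumptions for illustration.

```python
# Illustrative "code to data" sketch: the analysis runs where the data lives,
# and only a small summary leaves the test track. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("edge-test-run-summary")   # submitted to the edge cluster, e.g. via spark-submit
         .getOrCreate())

# The transcoded runs stay on the edge cluster's storage at the test track.
runs = spark.read.parquet("/edge/testtrack/runs/")

# Only this aggregated result travels back over the narrow uplink.
summary = (runs.groupBy("vehicle_id")
               .agg(F.max("brake_temp_c").alias("max_brake_temp_c"),
                    F.avg("wheel_speed_fl_kmh").alias("avg_wheel_speed_kmh")))

summary.write.mode("overwrite").parquet("/central/results/run_summary")  # kilobytes instead of terabytes
```

Doing it the other way around – pulling the raw files to a central data center first – would mean moving terabytes before the first query could even start.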

Analysis costs and analysis time are reduced 

Big data signal processing and the edge cluster method bring enormous advantages for vehicle testing. The most important is that the test data can be analyzed much faster and at significantly lower cost. But the entire test process benefits: car manufacturers need fewer test vehicles because they detect errors faster, and test cycles need to be repeated less often. Measurement data is available after a few hours – more precisely, after the data-dump cycle during the shift change – instead of after days, as in the past. The cost of data transmission drops because the smaller data volumes require less connectivity. Previously, engineers had to pre-order specific data from the intelligent car tests; now they can access all the data they need at any time, because efficient data processing will also make full traces available in future. This means that all test data from the bus systems can be queried at any time, even ad hoc, thanks to a hybrid edge and cloud computing IT system.

Large car manufacturers have already successfully tested the new solution. For one manufacturer, we are currently rolling out edge clusters at all of its test tracks and then applying big data signal processing on top. But prototypes and camouflaged test cars will still exist – otherwise the spotters who hunt camouflaged prototypes would go extinct.

About the author

Jörg Heizmann

Senior Sales Manager Big Data & AI, T-Systems International GmbH
