It sounds easy, but it isn’t: the sensors capture a heap of data during travel and have to keep track of all nearby objects – at least those within their field of view. This means the car must be able to identify nearby objects and also know their real distances and motion vectors in order to assess whether a conflict with an object is imminent. Object recognition can be trained before travel; during travel, processed radar or lidar data, for example, is then combined with this scene recognition.
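To make the idea of a “conflict assessment” from distance and motion vectors concrete, here is a minimal sketch. It is not any manufacturer’s actual algorithm – just a common textbook proxy (time to collision from relative position and relative velocity in 2D), with all names hypothetical:

```python
import math

def time_to_collision(rel_pos, rel_vel):
    """Rough conflict check: estimate the time until an object reaches us,
    given its position and velocity relative to our own car (2D vectors).
    Returns None if the object is not approaching."""
    dist = math.hypot(rel_pos[0], rel_pos[1])
    if dist == 0.0:
        return 0.0  # already colliding
    # Closing speed: component of relative velocity pointing toward us
    closing = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / dist
    if closing <= 0.0:
        return None  # moving away or parallel - no conflict on this metric
    return dist / closing

# An object 20 m ahead, approaching at 10 m/s, leaves 2 seconds to react:
print(time_to_collision((20.0, 0.0), (-10.0, 0.0)))  # 2.0
```

A real system would of course fuse noisy radar, lidar, and camera estimates and track objects over time; the point here is only that identity, distance, and motion vector together are what make the conflict judgment possible.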
The demands placed on such a detection system can, of course, vary. In the Australian Outback with its small volumes of traffic, the key is identifying dingoes, kangaroos, or crocodiles crossing the road. In a German or French town, other motorists play a much more significant role – not to mention cities like London, Paris, Frankfurt, or my favorite example, Stuttgart. What’s more, it’s apparently the German city-states of Berlin, Hamburg, and Bremen that “lead” in terms of accident density: around one in six cars there is involved in an accident each year.
It’s relatively easy to evaluate longitudinal traffic – anything moving parallel to my own direction of travel. Evaluating traffic crossing my path, or curving movements, is harder. In any case, reliable sensors and situation assessment are essential to autonomous driving, and automotive manufacturers are working hard to improve their cars’ vision. The key is artificial intelligence: it can identify “scenes” and assign captured image elements to objects during travel. Training object recognition requires not only heaps of real data but also powerful computing capacity.