At the 2017 Consumer Electronics Show (CES), Audi announced that it would be working with tech giant Nvidia to bring cars with artificial intelligence (AI) and fully autonomous driving to the road by 2020. The two companies see the technology as key to the development of autonomous vehicles.

Ingolstadt brought along the Q7 deep learning concept to the show in conjunction with Nvidia’s keynote address, to showcase the first fruits of their labour. A front-mounted two-megapixel camera enables the car to orient itself, communicating with an Nvidia Drive PX 2 processing unit designed specifically for autonomous driving applications – the latter controls the steering with a high level of precision.

At the core of the software are deep neural networks trained specifically for autonomous driving and for recognising dynamic traffic control signals. The car initially gained limited familiarity with the route and surroundings with a human driver behind the wheel: through observation via added training cameras, it established a correlation between the driver’s reactions and what the cameras themselves observed.
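The observe-and-correlate approach described above is, in machine learning terms, close to behavioural cloning: pair each camera frame with the human driver’s steering input and fit a model to predict one from the other. A minimal sketch of the idea – the feature extractor, one-parameter model and all numbers are purely illustrative, not Audi’s actual pipeline:

```python
def extract_features(frame):
    # Stand-in for a deep network's feature extractor:
    # here, just the mean pixel intensity of a (fake) frame.
    return sum(frame) / len(frame)

def train(pairs, lr=0.01, epochs=500):
    # Fit a one-parameter linear model, steering ~ w * feature,
    # by stochastic gradient descent on squared error.
    # `pairs` holds (camera_frame, recorded_steering_angle) tuples
    # logged while the human driver was at the wheel.
    w = 0.0
    for _ in range(epochs):
        for frame, steering in pairs:
            x = extract_features(frame)
            error = w * x - steering   # predicted minus observed steering
            w -= lr * error * x        # gradient step toward the driver's behaviour
    return w
```

A real system would replace the toy feature extractor and linear model with a deep convolutional network, but the training signal – the recorded driver’s behaviour – is the same.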

During subsequent demonstration drives, the Q7 was able to understand instructions such as a temporary traffic signal, interpreting and acting upon them as the situation required. A corresponding signal would cause the car to immediately change its driving strategy and choose either a short route or a long one. The robust design enables it to function in changing weather conditions, day or night – even under direct sunlight.

This learning method is similar to deep reinforcement learning, the principle behind the 1:8-scale Q2 deep learning concept showcased in Barcelona last month. That car learned how to park through trial and error, whereas the Q7 learns from the driver instead, receiving concrete data that it finds relevant.
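The trial-and-error learning used by the Q2 concept can be illustrated with tabular Q-learning on a toy task: an agent on a short one-dimensional track learns, from reward alone, to reach a “parking spot”. This is a hypothetical sketch of the general technique, not the concept car’s actual algorithm:

```python
import random

def q_learning(track_len=6, spot=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    # Q[state][action]: action 0 = step left, action 1 = step right.
    Q = [[0.0, 0.0] for _ in range(track_len)]
    for _ in range(episodes):
        s = 0  # each attempt starts at the left end of the track
        for _ in range(50):
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            nxt = max(0, min(track_len - 1, s + (1 if a == 1 else -1)))
            # Reward only for parking; a small penalty per step otherwise.
            r = 1.0 if nxt == spot else -0.05
            # Standard Q-learning update from the outcome of this trial.
            Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
            s = nxt
            if s == spot:
                break
    return Q
```

After training, greedily following the highest-valued action from the start state drives the agent straight to the spot – behaviour discovered purely through rewarded trial and error, with no demonstration data.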

Audi has also confirmed that the next A8 will debut this year, and that it will be the first series production car to feature Level 3 autonomous driving. Thanks to the traffic jam pilot system, the car will be able to take full control in certain situations. Central to this is the world’s first central driver assistance controller (zFAS), with a Mobileye image processor that implements deep learning.

Next-generation Audi A8 will be capable of fully autonomous driving in certain situations

As such, manual training methods during development have been significantly reduced, with deep neural networks enabling the system to learn by itself which characteristics are appropriate and relevant for identifying various objects. The car will even recognise empty driving spaces, which Audi says is an important prerequisite for safe autonomous driving.

The A8 will also come with the next-generation MIB2+ (Modular Infotainment Platform) powered by an Nvidia Tegra K1 processor. This enables the implementation of new functions as well as the support of several high-resolution displays – including the second-generation Audi virtual cockpit. The new car will also merge online and onboard information as part of the move towards cloud computing.

Audi has been working with Nvidia since 2005, putting the latter’s chip into production in the A4 in 2007; Nvidia’s technology was then used to power the A8’s visual displays two years later. The company’s first MIB system, introduced in 2013, used a Tegra 2 processor, while the MIB2 system used in the Q7 from 2015 runs a Tegra 3 (T30) processor.
