

The Key Enabler Technologies of Autonomous Driving

For years, market analysts have been reporting excitedly on autonomous driving technology and the disruptive change it's expected to bring to the automotive industry. With new players entering the mobility market and competing or partnering with traditional carmakers, self-driving technology has already begun to reshape the industry.

But one often overlooked aspect of this evolution is that self-driving mobility isn't a single piece of technology. Rather, a whole set of technologies contributes to the development of Advanced Driver Assistance Systems (ADAS) and, ultimately, self-driving cars. These enabler technologies pave the way to full autonomy in tomorrow's mobility solutions.


Sensor technologies and vision processing

Sensors of all kinds have become significantly lighter, smaller, and cheaper in recent years, enabling their use in a wide range of applications. In the automotive industry, developers have started combining different sensor technologies to enhance the situational awareness of driver-assisted and autonomous cars. In addition to fitting self-driving cars with more refined ultrasonic and infrared sensors, a new generation of LiDAR sensors is being developed and deployed.

Light Detection and Ranging (LiDAR) technology is a laser-based method for measuring the distance between the vehicle and objects in its surroundings. With a rotating LiDAR sensor, it is possible to build a 3D surround view of the car, which greatly supports obstacle detection. The latest development to help autonomous cars navigate safely through their environment is a laser-based system that can effectively "see around corners", letting the vehicle spot obstacles before they come into its direct line of sight.
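To illustrate the core principle, here is a minimal Python sketch (not any vendor's implementation) of the time-of-flight calculation behind LiDAR ranging: the sensor times a laser pulse's round trip and converts it into a distance, and a rotating unit maps those distances to points around the car. The pulse timing and angle below are made-up example values.

```python
# Minimal illustration of LiDAR time-of-flight ranging:
# distance = (speed of light * round-trip time) / 2
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a laser pulse's round-trip time into a distance in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_measurement(distance_m: float, azimuth_deg: float) -> tuple[float, float]:
    """Project one rotating-sensor reading into x/y coordinates around the car."""
    azimuth = math.radians(azimuth_deg)
    return distance_m * math.cos(azimuth), distance_m * math.sin(azimuth)

# Example: a pulse that returns after ~200 nanoseconds hit an object roughly 30 m away.
round_trip = 200e-9
distance = distance_from_round_trip(round_trip)
print(f"round trip {round_trip * 1e9:.0f} ns -> {distance:.1f} m")
print(point_from_measurement(distance, azimuth_deg=45.0))
```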

Sensor fusion is an emerging (software) technology in autonomous driving. It synthesizes the readings from multiple sensors to provide accurate position and orientation data to a self-driving car's central computer. It is therefore a key component of any Level 5 autonomous vehicle: such a vehicle operates with parallel inputs from several types of sensors, all of which need to be combined to refine the car's understanding of its environment.
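As a rough illustration of the idea (and not any carmaker's actual fusion stack), the sketch below combines two noisy one-dimensional position estimates – say, one from GPS and one from LiDAR-based localization – using inverse-variance weighting, a basic building block of Kalman-filter-style fusion. The sensor names and numbers are purely illustrative.

```python
# Illustrative inverse-variance fusion of two noisy 1D position estimates.
# Each measurement carries a variance describing how much we trust it.

def fuse(value_a: float, var_a: float, value_b: float, var_b: float) -> tuple[float, float]:
    """Fuse two estimates; the less noisy sensor gets the larger weight."""
    weight_a = 1.0 / var_a
    weight_b = 1.0 / var_b
    fused_value = (weight_a * value_a + weight_b * value_b) / (weight_a + weight_b)
    fused_variance = 1.0 / (weight_a + weight_b)  # the fused estimate is more certain than either input
    return fused_value, fused_variance

# Example: GPS reports 105.0 m along the road (noisy), LiDAR localization reports 103.8 m (more precise).
position, variance = fuse(105.0, 4.0, 103.8, 0.5)
print(f"fused position: {position:.2f} m (variance {variance:.2f})")
```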

Because of the high cost of LiDAR, some companies are working on camera-based systems that can give a car an accurate perception of its surroundings using only traditional cameras, at a fraction of the cost. The challenge here lies in vision processing: interpreting the images captured by these cameras and extracting meaningful information so that the car can react to what it "sees".
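The toy sketch below gives a flavour of what "extracting meaningful information" from a frame can look like, using OpenCV to pull edges and large contours out of a single camera image as rough obstacle candidates. Real perception stacks rely on far more sophisticated, typically learned, detectors; the file names here are placeholders.

```python
# Toy vision-processing step: extract edges and candidate object regions from one camera frame.
import cv2

frame = cv2.imread("dashcam_frame.jpg")          # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # work on intensity only
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Group edges into contours and keep only sufficiently large ones as obstacle candidates.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 500]

print(f"{len(candidates)} candidate regions found")
for contour in candidates:
    x, y, w, h = cv2.boundingRect(contour)       # rough bounding box for each candidate
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("annotated_frame.jpg", frame)
```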

Connected Car: 5G, LTE-V2X

As we move further away from direct situational awareness, another key component of self-driving cars will be their ability to communicate with others on the road. "Connected car" is the magic word here, encompassing any and all technologies that enable a vehicle to communicate via the Internet of Things.

The concept of vehicle-to-vehicle (V2V) communication is already developing further into V2X, or vehicle-to-everything. This includes vehicle-to-infrastructure (V2I), vehicle-to-network (V2N), and vehicle-to-pedestrian (V2P) communication, the combination of which will allow self-driving cars to exchange data with infrastructure and other road users to enhance safety.
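To make that data exchange more concrete, the sketch below models a hypothetical, simplified V2X-style status broadcast (loosely inspired by the idea of a basic safety message) as a small Python data structure. The field names and the JSON encoding are illustrative only – real V2X messages use standardized, compact, and cryptographically signed formats.

```python
# Hypothetical, simplified V2X-style status broadcast (not a real wire format).
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class VehicleStatusMessage:
    vehicle_id: str      # temporary, anonymized identifier
    timestamp: float     # seconds since epoch
    latitude: float
    longitude: float
    speed_mps: float     # speed in metres per second
    heading_deg: float   # compass heading
    brake_applied: bool  # lets following cars react before they even "see" brake lights

    def encode(self) -> bytes:
        """Serialize the message for broadcast; production systems use compact, signed encodings."""
        return json.dumps(asdict(self)).encode("utf-8")

message = VehicleStatusMessage(
    vehicle_id="temp-4f2a",
    timestamp=time.time(),
    latitude=48.137, longitude=11.575,
    speed_mps=13.9, heading_deg=92.0,
    brake_applied=True,
)
print(message.encode())
```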

While the rapid development of sensor technologies is soon expected to give self-driving vehicles the ability to perceive their immediate surroundings, V2X communication extends this "field of vision". A connected autonomous car would automatically know the location and state of other vehicles, pedestrians, traffic signs, and potential hazards in its way, giving it more time to react to dangers on the road. V2X communication therefore has the potential to make self-driving cars far safer than human-driven ones – provided that it is fast enough. Since speed is of key importance, development has primarily focused on enabling near-instantaneous data exchange.

The first contender was DSRC (Dedicated Short-Range Communications), a Wi-Fi-based system that is generally considered mature and close to ready for deployment in self-driving cars. But an alliance of top developers is now proposing the use of existing cellular standards (such as 4G/LTE) for V2X communication. The development of the faster 5G standard and the reuse of existing cellular infrastructure make this an option worth considering, but developers need to weigh not only data transfer speed but also latency (response time) if this technology is to propel the development of self-driving vehicles.

Another key concern in the context of V2X communication is data security. Since the multiple sensors, cameras, and computers of self-driving cars will generate vast amounts of data, the secure management and exchange of all this highly sensitive information becomes essential. Developers and regulators alike are working on new standards to ensure the security of our mobility data in a self-driving future.

Contextual Artificial Intelligence

Tomorrow's vehicle will have ample information about both its immediate surroundings and its broader environment. In order to enable full autonomy, however, there is one more "tiny" challenge to overcome: the car actually has to interpret incoming data and make decisions based on that information. Artificial Intelligence is therefore a key component of self-driving technology.

While the application of AI started out with "simple" machine learning (performing functions by applying an algorithm to a provided dataset), it really gets interesting with deep learning. Deep learning applies artificial neural networks (layers of algorithms) to enable a system to learn and make its own decisions. The sheer variety of inputs and decisions in any driving situation necessitates these learning algorithms, and tomorrow's self-driving cars will rely on deep learning to get better at image recognition, processing, and decision-making.
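As a toy illustration of those "layers of algorithms" (not a production perception network), the sketch below defines a tiny convolutional classifier in PyTorch that could, in principle, be trained to label small camera crops – for example "pedestrian" vs. "no pedestrian". The layer sizes, input resolution, and class count are arbitrary choices for the example.

```python
# Tiny convolutional network sketch: stacked layers that learn image features end to end.
import torch
from torch import nn

class TinyPerceptionNet(nn.Module):
    def __init__(self, num_classes: int = 2):             # e.g. "pedestrian" vs. "no pedestrian"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),    # early layer: low-level edges and colours
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # deeper layer: composite shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # (batch, 32, 16, 16) for 64x64 RGB inputs
        x = x.flatten(start_dim=1)
        return self.classifier(x)          # raw class scores (logits)

model = TinyPerceptionNet()
dummy_batch = torch.randn(4, 3, 64, 64)    # four fake 64x64 RGB crops
print(model(dummy_batch).shape)            # torch.Size([4, 2])
```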

By driving in various simulated (and, over time, real-life) traffic situations and applying deep learning, an autonomous car gets better and better at analyzing, understanding, and reacting to its context in traffic. Developers of contextually aware self-driving cars claim that with the further development of AI, the future of mobility could be virtually free of accidents.

Integrated Application Lifecycle Management

With all those technologies contributing to self-driving vehicles, it's no wonder that their development is unbelievably complex. Managing parallel development streams and the integration of components is complicated enough, but developers of automotive technology also have to keep in mind current and future regulations on self-driving solutions.

Smart tools such as integrated ALM help manage the sophisticated development processes of autonomous cars while maintaining compliance with standards. These software platforms provide support for collaborative engineering processes and help maintain the transparency and traceability necessary for the delivery of safe and reliable self-driving technology.

Related white paper: Automotive Functional Safety & ISO 26262 Compliance

As the go-to Application Lifecycle Management tool in the mobility industry, codeBeamer ALM is used by OEMs and suppliers pursuing the development of autonomous solutions. LeddarTech chose this tool to overcome the complexity of ISO 26262 compliance and ASIL certification processes in the development of LiDAR technology. BMW uses codeBeamer ALM to scale Agile (LeSS) processes in autonomous development.

Download this recent Ovum report to find out more about BMW's use case and the benefits of applying ALM in regulated development. To give codeBeamer a try yourself, start a free trial!