Sensor Fusion for Advanced Driver Assistance Systems

Abstract

 

As vehicles move toward autonomous capability, there is a rising need for hardware-in-the-loop (HIL) testing to validate and verify the functionality of advanced driver assistance systems (ADAS), which are anticipated to play a central role in autonomous driving. This white paper gives an overview of the ADAS HIL with sensor fusion concept, shares the main takeaways from initial research efforts, and highlights the key system-level elements used to implement the application.

1. What Is Sensor Fusion?

 
Today, many cars have multiple ADAS based on different sensors like radar, cameras, LIDAR, or ultrasound. Historically, each of these sensors performs a specific function and only in rare cases shares information with the others. The amount of information available to the driver grows with the number of sensors in use. If the sensor data is sufficient and communication between sensors is in place, smart algorithms can combine it to create an autonomous system.
 
Sensor fusion is the combination of information from various sensors, which provides a clearer view of the surrounding environment. This is a necessary condition for moving toward more reliable safety functionality and more effective autonomous driving systems.
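As an illustration of the idea only (not the algorithm used in the project described later), a minimal sketch of fusing one camera detection with one radar detection could look like the following; all type and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:       # hypothetical camera output
    lateral_offset_m: float  # position across the lane, estimated from pixels
    object_class: str        # e.g. "car" or "lane_marking"

@dataclass
class RadarDetection:        # hypothetical radar output
    range_m: float           # distance to the object
    range_rate_mps: float    # closing speed

@dataclass
class FusedObject:
    range_m: float
    range_rate_mps: float
    lateral_offset_m: float
    object_class: str

def fuse(cam: CameraDetection, radar: RadarDetection) -> FusedObject:
    """Combine complementary measurements: the radar supplies distance and
    speed, the camera supplies lateral position and classification."""
    return FusedObject(
        range_m=radar.range_m,
        range_rate_mps=radar.range_rate_mps,
        lateral_offset_m=cam.lateral_offset_m,
        object_class=cam.object_class,
    )
```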

Figure 1. A “View” of the Environment Surrounding a Car

2. When Is Sensor Fusion Used?

 
Sensor fusion is applicable to all types of sensors. A typical example is the fusion of information provided by a front camera and a front radar. A camera working in the visible spectrum has problems in several conditions such as rain, dense fog, sun glare, and absence of light, but is highly reliable when recognizing colors (for example, road markings). Radar, even at low resolution, is well suited to measuring distance and is not sensitive to environmental conditions.
 
Typical ADAS functions that use the sensor fusion of front camera and radar include:

  • Adaptive Cruise Control (ACC) — This cruise control system adapts the vehicle's speed to traffic conditions. The speed is reduced when the distance to the vehicle ahead drops below the safety threshold. When the road is clear or the distance to the next vehicle is acceptable, the ACC accelerates the vehicle back to the set speed.
  • Autonomous Emergency Braking (AEB) — This acts on the braking system, reducing speed when a collision is imminent, or alerts the driver in critical situations. A minimal sketch of both behaviors follows this list.
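The decision logic behind these two functions can be sketched as follows; the thresholds, scaling, and function names are purely illustrative assumptions, not the logic of any particular production system:

```python
def acc_target_speed(set_speed_mps: float, ego_speed_mps: float,
                     gap_m: float, safe_gap_m: float) -> float:
    """ACC sketch: hold the set speed on a clear road, slow down in
    proportion to how far the gap has closed below the safety threshold."""
    if gap_m < safe_gap_m:
        return ego_speed_mps * max(gap_m / safe_gap_m, 0.0)
    return set_speed_mps

def aeb_action(time_to_collision_s: float,
               warn_ttc_s: float = 2.0, brake_ttc_s: float = 0.8) -> str:
    """AEB sketch: warn the driver first, brake autonomously only when the
    collision is otherwise imminent (thresholds are illustrative)."""
    if time_to_collision_s < brake_ttc_s:
        return "brake"
    if time_to_collision_s < warn_ttc_s:
        return "warn"
    return "none"
```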

 

3. ADAS HIL Test Environment Suite (AHTES)

 
For the validation of complex systems, it is necessary to set up an appropriate test environment that effectively stimulates the sensors to verify the behavior of the vehicle under real-world conditions.
 
Altran Italia has integrated an innovative radar object simulator based on NI technologies and a 3D virtual road scenario simulator into an HIL setup to produce a scenario-based tester that fully synchronizes camera and radar data to validate sensor fusion algorithms.

Figure 2. ALTRAN-NI ADAS HIL Test Solution

The 3D scenario is based on the Unity 3D graphics engine, a cross-platform game engine by Unity Technologies, and is fully configurable, allowing customization of parameters such as the number of lanes, lighting conditions, and track type.
A variety of other graphic simulation environments on the market, such as IPG CarMaker and TASS PreScan, could be used similarly.
 
The graphic engine reproduces the scene from the viewpoint of a camera placed on the windshield of a vehicle. The scene can be adjusted for the camera's height above the ground and its field of view. It is also possible to place an obstacle (for example, a vehicle) at a set distance from the camera, moving at a defined speed.
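As a rough illustration of how the camera height, field of view, and obstacle distance interact, a simple pinhole-camera calculation is sketched below. This is an assumption for explanatory purposes, not the Unity engine's internal camera model:

```python
import math

def ground_contact_row(camera_height_m: float, distance_m: float,
                       vertical_fov_deg: float, image_height_px: int) -> float:
    """Return the image row (pixels from the top) where the ground-contact
    point of an obstacle at distance_m appears, for a forward-looking
    camera mounted camera_height_m above a flat road with zero pitch."""
    focal_px = (image_height_px / 2.0) / math.tan(math.radians(vertical_fov_deg) / 2.0)
    # points below the horizon (image centre) move down as the obstacle gets closer
    return image_height_px / 2.0 + focal_px * camera_height_m / distance_m

# Example: camera 1.3 m above the road, 40 deg vertical FOV, 960-pixel-high image,
# obstacle placed 50 m ahead.
row = ground_contact_row(1.3, 50.0, 40.0, 960)
```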

Figure 3. Unity Graphic Engine Scenario

For vehicle control, the graphic engine receives the brake pedal and throttle positions in addition to the steering angle. A PXI system acquires these signals from the steering wheel and pedal set (Logitech G29). The vehicle dynamics model is integrated within the graphics engine and is highly configurable.
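A minimal sketch of how acquired pedal and steering signals could be normalized before being handed to the graphic engine is shown below; the signal ranges and steering scaling are assumptions, not the actual calibration of the PXI acquisition or the Logitech G29:

```python
def normalise(raw: float, raw_min: float, raw_max: float) -> float:
    """Map a raw acquired value into the range 0..1, clamped."""
    return min(max((raw - raw_min) / (raw_max - raw_min), 0.0), 1.0)

def driver_inputs(throttle_raw: float, brake_raw: float, steer_raw: float) -> dict:
    """Turn acquired pedal and steering signals into the values passed to
    the graphic engine (ranges and scaling here are assumed, not the real
    calibration)."""
    return {
        "throttle": normalise(throttle_raw, 0.0, 5.0),          # assumed 0-5 V signal
        "brake": normalise(brake_raw, 0.0, 5.0),                # assumed 0-5 V signal
        "steering_angle_deg": (steer_raw - 2.5) / 2.5 * 450.0,  # assumed +/-450 deg wheel
    }
```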

Figure 4. Standard Maneuvers
 
Per the selected obstacle scenario (with some examples above as a starting point), the graphic engine outputs the vehicle speed as well as the information needed by the Vehicle Radar Test System (VRTS) to produce an RF signal. All input/output information is exchanged with the PXI through a proprietary protocol and can be changed as needed.

CAN communication over a PXI-8512/2 interface was used in this setup to retrieve information about radar targets (distance, radar cross section, angle of arrival, and speed) from the scenario generator. The PXI-8512/2 is a two-port, high-speed CAN/CAN FD interface for PXI systems used to transmit and receive CAN bus frames at 1 Mbit/s. The information is sent to the Obstacle Generator only if the target information changes between consecutive readings.
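To illustrate the change-detection scheme, the sketch below packs one radar target into a CAN frame and transmits it only when it differs from the previous reading. It uses the open source python-can package as a stand-in for the actual PXI CAN driver, and the 8-byte payload layout, resolutions, and arbitration ID are assumptions, not the project's proprietary message definition:

```python
import struct
from typing import Optional

import can  # python-can, used here as a stand-in for the actual PXI CAN stack

def encode_target(distance_m: float, rcs_dbsm: float,
                  angle_deg: float, speed_mps: float) -> bytes:
    """Pack one radar target into an 8-byte payload (layout and resolutions
    are assumptions, not the project's message definition)."""
    return struct.pack("<Hhhh",
                       int(distance_m * 10),   # 0.1 m resolution
                       int(rcs_dbsm * 10),     # 0.1 dBsm resolution
                       int(angle_deg * 100),   # 0.01 deg resolution
                       int(speed_mps * 100))   # 0.01 m/s resolution

def send_if_changed(bus: can.BusABC, last_payload: Optional[bytes],
                    payload: bytes, arb_id: int = 0x300) -> bytes:
    """Transmit the target frame only when it differs from the previous
    reading, mirroring the change detection described above."""
    if payload != last_payload:
        bus.send(can.Message(arbitration_id=arb_id, data=payload,
                             is_extended_id=False))
    return payload
```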
 
In addition to sending the data to the Obstacle Simulator and acquiring the pedal and steering signals, the PXI system also emulates the CAN messages sent to and from the radar and camera over a private vehicle network.
 
CAN messages are synchronized with the 3D virtual scenario and the RF target generator to produce a consistent environment for validating modern camera and radar sensor fusion.
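Conceptually, this synchronization can be thought of as a fixed-rate loop that advances the scenario, updates the RF target generator, and emits the matching CAN traffic in lock-step. The sketch below uses hypothetical interfaces and an assumed 20 ms cycle time:

```python
import time

def run_step(scenario, radar_sim, can_bus, period_s: float = 0.02) -> None:
    """One synchronised update: advance the 3D scenario, push the matching
    targets to the RF generator, and emit the corresponding CAN traffic,
    then sleep out the rest of the cycle so all three stay in lock-step."""
    t0 = time.monotonic()
    targets = scenario.step(period_s)   # hypothetical scenario-generator API
    radar_sim.update(targets)           # hypothetical RF target generator update
    can_bus.publish(targets)            # hypothetical CAN emulation
    time.sleep(max(0.0, period_s - (time.monotonic() - t0)))
```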
