Dual radar technology can help self-driving cars recognize vehicles in heavy fog

Existing autonomous driving systems rely mainly on LiDAR and radar to detect obstacles on the road ahead, but neither is good at identifying vehicles in foggy conditions. Engineers have now found that a dual-radar setup can handle the task far better.

On November 17, 2020, electrical engineers at the University of California San Diego announced a clever way to improve the imaging capability of existing radar sensors so that they can accurately predict the shape and size of objects in a scene. The system worked well when tested at night and in foggy conditions.

LiDAR (Light Detection and Ranging) sensors map the shape and distance of objects by emitting laser pulses and timing how long the light takes to reflect back. Radar units work on a similar principle, emitting radio waves that bounce off objects in their path.
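
Both sensors rest on the same time-of-flight principle: range is half the round-trip travel time multiplied by the propagation speed. Here is a minimal Python sketch of that calculation (the function name and sample values are illustrative, not drawn from the UC San Diego system):

```python
# Time-of-flight ranging, common to LiDAR and radar:
# range = (propagation speed x round-trip time) / 2
C = 299_792_458.0  # speed of light in m/s; radio waves and laser pulses both travel at c

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object, given the echo's round-trip time."""
    return C * t_seconds / 2.0

# An echo that returns after 200 nanoseconds puts the object about 30 m away.
print(f"{range_from_round_trip(200e-9):.1f} m")  # -> 30.0 m
```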

Unfortunately, airborne obstacles such as fog, dust, rain, or snow absorb the light used by LiDAR systems, making them unreliable. Radar is not adversely affected this way, but it produces only a partial image of what it detects: even under ideal conditions, only a small fraction of the emitted radio signal is reflected back to the sensor.

“This is a radar that works like LiDAR,” said Dinesh Bharadia, a professor of electrical and computer engineering at UC San Diego’s Jacobs School of Engineering, noting that it is an inexpensive route to bad-weather perception for autonomous vehicles. “Our technique can also be used to fuse LiDAR and radar, but radar is very cheap. This way, we don’t need to use expensive LiDAR.”

The system consists of two radar sensors placed on the hood, spaced 1.5 meters apart, about the width of an average car. Arranging two radar sensors this way is key: together they can see more space and detail than a single radar sensor.
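
To make the two-sensor geometry concrete, here is a hedged Python sketch (the names and the simple 2D model are assumptions, not the team’s published code) of how detections from two hood-mounted radars a known 1.5 m apart can be shifted into one shared vehicle frame before further processing:

```python
import numpy as np

BASELINE_M = 1.5  # lateral spacing between the two hood-mounted radars

def to_vehicle_frame(points: np.ndarray, sensor_offset_m: float) -> np.ndarray:
    """Translate (x, y) detections from one radar into the shared vehicle frame.

    points: N x 2 array of (x forward, y lateral) returns from one sensor.
    sensor_offset_m: lateral mounting position of that sensor on the hood.
    """
    shifted = points.copy()
    shifted[:, 1] += sensor_offset_m
    return shifted

# Combining both sensors yields a denser point set than either radar alone.
left = to_vehicle_frame(np.array([[12.0, 0.4]]), -BASELINE_M / 2)
right = to_vehicle_frame(np.array([[12.1, -1.1]]), +BASELINE_M / 2)
combined = np.vstack([left, right])
```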

In test runs during the day and at night, the system determined the dimensions of moving vehicles as accurately as a LiDAR sensor, and its performance was unchanged in tests simulating heavy fog. The team used a fog machine to “hide” another vehicle, and their system accurately predicted its 3D geometry, while a LiDAR sensor essentially failed the test.

1. Two eyes are better than one

Radar has traditionally suffered from poor imaging quality because when the emitted radio waves bounce off objects, only a small fraction of the signal ever makes it back to the sensor. As a result, vehicles, pedestrians, and other objects appear only as sparse sets of points.

“This is a problem with using a single radar for imaging: it receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don’t see,” said Kshitiz Bansal, a computer science and engineering PhD student at UC San Diego. “So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back.”

The research team found that spacing two radar sensors 1.5 meters apart on the hood of the car is the optimal arrangement. “By having two radars at different vantage points, with overlapping fields of view, we create a high-resolution region with a high probability of detecting the objects that are present,” Bansal said.
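
The “high-resolution region” Bansal describes is simply the area covered by both radars’ fields of view. A small illustrative Python check (the ±60-degree field of view is an assumed figure, not a published specification):

```python
import math

BASELINE_M = 1.5
HALF_FOV_RAD = math.radians(60)  # assumed +/-60 degree horizontal field of view

def seen_by(x: float, y: float, sensor_y: float) -> bool:
    """True if a point (x forward, y lateral) lies inside one sensor's FOV."""
    return x > 0 and abs(math.atan2(y - sensor_y, x)) <= HALF_FOV_RAD

def in_overlap(x: float, y: float) -> bool:
    """True if both radars can see the point, i.e. the high-resolution region."""
    return seen_by(x, y, -BASELINE_M / 2) and seen_by(x, y, +BASELINE_M / 2)

print(in_overlap(10.0, 0.0))  # True: straight ahead, covered by both radars
print(in_overlap(0.3, -2.0))  # False: too close and too far off to one side
```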

2. The story of two radars

The system also overcomes another problem with radar: noise. Radar images commonly contain random points that do not belong to any object. The sensor can also pick up so-called echo signals, reflections of radio waves that do not come directly from the object being detected.

Bharadia noted that more radar also means more noise. To deal with this, the team developed a new algorithm that fuses the information from the two radar sensors and produces a new, noise-free image. Another innovation of this work is that the team built the first dataset combining data from two radars.
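
The article does not give the algorithm’s details, but the cross-validation idea it describes can be sketched as follows: keep a detection only if the other radar reports a return nearby, since random noise and indirect echoes rarely agree across two vantage points. All names and the 0.5 m matching radius below are illustrative assumptions:

```python
import numpy as np

def fuse_denoise(a: np.ndarray, b: np.ndarray, tol_m: float = 0.5) -> np.ndarray:
    """Fuse two radar point sets, keeping only cross-confirmed detections.

    a, b: N x 2 and M x 2 arrays of (x, y) points already in a common frame.
    tol_m: assumed matching radius; points with no partner within it are
           treated as noise or echo artifacts and dropped.
    """
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # N x M pairwise
    keep_a = a[(dists <= tol_m).any(axis=1)]  # points in a confirmed by b
    keep_b = b[(dists <= tol_m).any(axis=0)]  # points in b confirmed by a
    return np.unique(np.vstack([keep_a, keep_b]), axis=0)
```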

“There are no publicly available datasets with data from multiple radars with overlapping fields of view,” Bharadia said. “We collected our own data and built our own dataset for training our algorithms and for testing.”

The dataset consists of 54,000 radar frames of driving scenes recorded in live traffic during the day and at night, as well as in simulated fog conditions. Future work will include collecting more data in the rain; to do this, the team will first need to build better protective covers for their hardware.
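
The article does not describe the dataset’s format; as a rough sketch, each sample presumably pairs simultaneous frames from both radars with a condition label, something like:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RadarPair:
    left_frame: np.ndarray   # point returns from the left radar
    right_frame: np.ndarray  # simultaneous returns from the right radar
    condition: str           # e.g. "day", "night", or "fog" (labels assumed)
```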

The team is now working with Toyota to combine the new radar technology with cameras, a pairing the researchers say could potentially replace LiDAR. “Radar alone cannot tell us the color, make, or model of a car. These features are also important for improving the perceptual ability of self-driving cars,” Bharadia said.

Author: Yoyokuo