Automotive engineers have already developed semi-autonomous vehicles. Fully self-driving vehicles are not far from reality. According to recent research, autonomous driving (AD) could create $300 billion to $400 billion in revenue by 2035.
The self-driving car not only showcases how advanced technology has become, but it's also a subject of controversy. There are valid concerns about safety, faulty tech, hacking, and the potential loss of driving jobs. On the other hand, AD could lead to safer rides, greater convenience, and more productivity or free time. Instead of wasting hours in traffic, the "drivers" of the future could spend their commutes working, reading, or catching up on a TV series.
One critical component of self-driving vehicles is sensor technology, heterogeneous sensors to be precise. Models built with artificial intelligence (AI) and machine learning (ML) are trained on the sensor data to observe and respond to the surroundings. Based on these algorithms, a vehicle uses its sensors to find the ideal route, decide where or where not to drive, detect nearby objects, pedestrians, or other vehicles to avoid collisions, and react to unexpected scenarios.
There have been two primary efforts in the development of self-driving cars.
1. Using cameras and computer vision for driving
2. Employing sensor fusion (i.e. using heterogeneous sensors to make the car see, listen, and sense its surroundings)
Some engineers maintain that AD can be successful with vehicle cameras and computer vision alone. Most, however, have determined that sensor fusion is the safest, most reliable choice.
There are four major sensor technologies used in self-driving vehicles:
- Cameras
- LIDAR
- Radar
- Sonar
Thanks to sensor fusion technology and rapidly improving AI, self-driving vehicles have begun to gain recognition as a real future possibility. It's projected that by 2030, about 12% of vehicle registrations worldwide will be autonomous vehicles.
As for lost driving jobs, similar concerns were raised when computers were introduced, and that technology has since generated millions of jobs globally. It's likely that self-driving vehicles will also add to the demand for skill-based jobs in the automotive industry.
Let’s explore the sensor technologies that are enabling autonomous driving.
The camera
Cameras are already used in vehicles for reverse parking, reverse driving, adaptive cruise control, and lane-departure warnings. Self-driving vehicles use high-resolution color cameras to obtain a 360-degree view of their surroundings. The images may be collected as multi-dimensional data, from different angles, and/or as video segments. Different image- and video-capturing methods are currently being tested, along with AI technology, to ensure that reliable on-the-road decision-making is possible for safe driving. These are resource-intensive tasks.
Such cameras show potential, especially with advanced AI and ML. High-resolution cameras can detect and recognize objects, sense the movement of other vehicles, determine a route, and visualize their 3D surroundings. They approximate human eyes, allowing a vehicle to drive much as it would with a person at the wheel.
But there are drawbacks. For example, a camera's visibility depends upon environmental conditions. Because a camera is a passive sensor, it's unreliable in low-visibility conditions. Infrared cameras might be an option, but interpreting their images with AI and ML is still a work in progress.
Two types of camera sensors are used for AD: mono and stereo cameras. A mono camera has a single lens and image sensor. It can only capture two-dimensional images, which can be used to recognize objects, people, and traffic signals. However, 2D images are of little use in determining the depth or distance of objects; doing so requires highly complex ML algorithms that yield questionable results.
A stereo camera has two lenses and two image sensors. It takes two images simultaneously from different angles. After processing the images, the camera can determine the depth or distance of an object, making it the better choice for AD — except for the visibility issues in low light.
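To illustrate the underlying geometry, here is a minimal sketch of how depth can be recovered from a stereo pair. The focal length and baseline are hypothetical calibration values, and a production system would use a far more sophisticated stereo-matching pipeline than this simple block matcher:

```python
import numpy as np
import cv2

# Hypothetical calibration values for an illustrative stereo rig.
FOCAL_LENGTH_PX = 700.0   # focal length in pixels (assumed)
BASELINE_M = 0.12         # distance between the two lenses in meters (assumed)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Estimate a per-pixel depth map (meters) from a rectified stereo pair."""
    # Block matching finds, for each pixel, how far a patch has shifted
    # between the left and right images (the disparity, in pixels).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Triangulation: depth = focal_length * baseline / disparity.
    # A larger shift between the two views means the object is closer.
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
    return depth
```

A mono camera has no second viewpoint, so there is no disparity to triangulate, which is why it needs LIDAR, radar, or learned depth estimation to fill that gap.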
Some developers are combining mono cameras with distance-measuring sensors, such as LIDAR or radar, using sensor fusion to predict traffic conditions accurately.
Cameras certainly play an important role in AD. However, they will require help.
LIDAR
LIDAR (light detection and ranging) is one of the prominent technologies enabling self-driving vehicles. It's an imaging technology that has been used for geospatial sensing since the '80s.
Two types of LIDAR sensors can be used for AD. One is the mechanically rotating LIDAR system, typically mounted on a vehicle's roof. These systems are costly and sensitive to vibrations. Solid-state LIDAR, which requires no rotation, is the other option and the preferred choice for self-driving cars.
A LIDAR sensor is an active sensor. It works on the time-of-flight principle, emitting thousands of infrared laser pulses toward its surroundings and detecting the reflections with a photodetector. The system measures the time between the emission of a laser pulse and the detection of its reflection.
Because the laser travels at the speed of light, that round-trip time gives the distance to the reflecting surface. The reflected pulses are recorded as a point cloud (i.e. a set of points in space representing the 3D surroundings).
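As a rough sketch of the arithmetic, the snippet below converts a measured round-trip time into a range and places the return in a 3D point cloud. The beam angles and timing values are hypothetical; real LIDAR firmware handles all of this internally:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_range(round_trip_s: float) -> float:
    """Distance to the target: the pulse travels out and back,
    so divide the round-trip time by two."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one return (range plus beam angles) into an (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Example: a reflection detected 200 nanoseconds after emission
# corresponds to a surface roughly 30 meters away.
r = tof_range(200e-9)          # ~29.98 m
point = to_point(r, math.radians(45.0), math.radians(-2.0))
```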
Such LIDAR systems are highly accurate and can detect extremely small objects. However, LIDAR is unreliable in low-visibility weather, as fog, rain, and snow can scatter or attenuate the reflected laser pulses. Another drawback is the cost, which runs into the thousands of dollars.
But LIDAR still holds promise for AD as new developments are tried and tested.
Radar
Radar sensors are already used in many vehicles for adaptive cruise control, driver assistance, collision avoidance, and automatic braking. Typically, 77 GHz radar is used for long-range detection and 24 GHz radar for short-range detection. Short-range (24 GHz) radar reaches about 30 meters and is cost-effective for collision avoidance and parking assistance. Long-range (77 GHz) radar reaches about 250 meters and is used for object detection, adaptive cruise control, and assisted braking.
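The article doesn't detail the ranging mechanism, but automotive radars are typically frequency-modulated continuous-wave (FMCW) devices: the transmit frequency is swept linearly, and the frequency difference (the beat) between the transmitted and reflected signals is proportional to the target's range. A minimal sketch, with assumed chirp parameters:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# Hypothetical chirp parameters for an illustrative 77 GHz FMCW radar.
SWEEP_BANDWIDTH_HZ = 300e6   # how far the frequency is swept (assumed)
CHIRP_DURATION_S = 40e-6     # how long one sweep takes (assumed)

def range_from_beat(beat_hz: float) -> float:
    """Target range from the measured beat frequency.

    The sweep slope is S = B / T. A target at range R delays the echo
    by 2R / c, so the beat frequency is f_b = S * 2R / c, which gives
    R = c * f_b / (2 * S)."""
    slope = SWEEP_BANDWIDTH_HZ / CHIRP_DURATION_S
    return SPEED_OF_LIGHT * beat_hz / (2.0 * slope)

# Example: a 1 MHz beat corresponds to a target at about 20 meters.
print(range_from_beat(1e6))  # ~19.99 m
```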
Radar is excellent at detecting metal objects. It can be used with cameras to accurately monitor the movement of surrounding vehicles and detect potential obstructions.
Radar has limited capability for self-driving because it's unable to classify objects: radar data can detect objects but cannot recognize them. At best, low-resolution radar can supplement mono cameras, stereo cameras, or LIDAR in low-visibility situations.
Sonar
Sonar technologies are also being tested for AD. Passive sonar listens for sounds emitted by surrounding objects and estimates their distance. Active sonar emits sound waves and detects the echoes, estimating the distance of nearby objects using the time-of-flight principle.
Sonar can operate in low visibility, but it has more drawbacks than advantages for self-driving vehicles. Because sound travels so much slower than light, sonar struggles to operate in real time for safe AD. Sonar can also give false positives. Lastly, it can detect large objects at short range but cannot recognize or classify them. Sonar is mainly useful for collision avoidance in unexpected conditions.
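To make the speed limitation concrete, here is a small sketch comparing round-trip echo times for sound and light over the same distance, using the standard speed of sound in air at room temperature:

```python
SPEED_OF_SOUND = 343.0          # m/s in air at ~20 °C
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def round_trip_s(distance_m: float, speed: float) -> float:
    """Time for a pulse to reach a target and echo back."""
    return 2.0 * distance_m / speed

# At 30 meters, an ultrasonic echo takes ~175 ms to return,
# while a LIDAR pulse returns in ~0.2 microseconds.
print(round_trip_s(30.0, SPEED_OF_SOUND))  # ~0.175 s
print(round_trip_s(30.0, SPEED_OF_LIGHT))  # ~2e-7 s
```

At highway speed (around 30 m/s), a car covers more than five meters in that 175 ms, which is why sonar is confined to short-range, low-speed tasks.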
Inertial sensors
Inertial sensors, such as accelerometers and gyroscopes, are highly useful in enabling self-driving. They can be used to track the movement and orientation of a vehicle, to help stabilize the vehicle on rough roads, or to trigger action to avoid a potential accident. A sketch of how the two sensor types complement each other follows.
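Below is a minimal complementary-filter sketch, one common fusion approach (not necessarily what any particular vehicle uses): the gyroscope gives smooth short-term angle changes but drifts over time, while the accelerometer gives a drift-free but noisy gravity reference.

```python
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def update_pitch(pitch_rad: float, gyro_rate_rad_s: float,
                 accel_x: float, accel_z: float, dt: float) -> float:
    """One complementary-filter step for the vehicle's pitch angle."""
    # Integrate the gyroscope rate: smooth, but drifts over time.
    gyro_pitch = pitch_rad + gyro_rate_rad_s * dt
    # Estimate pitch from gravity's direction: noisy, but drift-free.
    accel_pitch = math.atan2(accel_x, accel_z)
    # Blend the two estimates.
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
```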
GPS
A self-correcting GPS is one vital requirement for self-driving. Using satellite-based trilateration, GPS lets a vehicle locate itself precisely in three-dimensional space.
Sometimes GPS signals are unavailable or degraded due to obstacles or spoofing. In such cases, self-driving vehicles must rely on the local cellular network and data from inertial sensors to accurately track the car's position.
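A minimal sketch of that inertial fallback, known as dead reckoning: starting from the last good GPS fix, the vehicle integrates speed and heading over time. The flat 2D model and variable names here are simplifications; errors accumulate, which is why the estimate is re-anchored as soon as GPS or cellular positioning returns.

```python
import math

def dead_reckon(x_m: float, y_m: float, heading_rad: float,
                speed_m_s: float, yaw_rate_rad_s: float, dt: float):
    """Advance a 2D position estimate one time step without GPS."""
    heading_rad += yaw_rate_rad_s * dt        # new heading from the gyro
    x_m += speed_m_s * math.cos(heading_rad) * dt
    y_m += speed_m_s * math.sin(heading_rad) * dt
    return x_m, y_m, heading_rad

# Starting at the last GPS fix, driving 20 m/s on a straight heading:
x, y, heading = dead_reckon(0.0, 0.0, 0.0, 20.0, 0.0, 0.1)  # x == 2.0 m
```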
Conclusion
Self-driving vehicles typically use several heterogeneous sensors. One advantage of using many sensors is redundancy: if one sensor fails, another can compensate for it. A sensor-fusion technique that combines data from the different sensors to interpret the surroundings will be necessary for fully autonomous vehicles.
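As a toy example of the fusion idea, here is an inverse-variance weighted average of two range estimates, one fusion strategy among many (the noise figures are made up): the less noisy sensor gets more weight, and the fused estimate is more certain than either input alone.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two noisy estimates of the same quantity.

    Each estimate is weighted by the inverse of its variance, so the
    more reliable measurement dominates the result."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera says the car ahead is 25.0 m away (variance 4.0);
# radar says 24.2 m (variance 0.25). The fused estimate leans on radar.
print(fuse(25.0, 4.0, 24.2, 0.25))  # (~24.25 m, ~0.235)
```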
Currently, multiple approaches are being tested in the development of AD. One relies on stereo cameras to fully enable self-driving. Another approach uses mono cameras to provide a 360-degree vision, incorporating LIDAR or radar technology to sense distance. A third approach uses stereo cameras with radar sensors.
Cameras, combined with other sensors, will likely be required for AD to effectively classify and recognize objects. Radar and LIDAR can be used in sensor fusion to deliver a weather-resistant self-driving solution, adding a 3D element that ensures a better understanding of the driving environment.
Sonar or ultrasonic sensors will also play a key role, as they're weather-resistant and reasonably cost-effective, providing an effective collision-avoidance and emergency-handling solution. Self-driving cars will ultimately rely on some combination of all of these technologies.