The most basic requirement of an autonomous vehicle is the ability to detect and categorize objects. To qualify as autonomous, a vehicle must be able to assess its surroundings accurately before adapting to traffic, obstacles and roadway regulations. The highly accurate sensors inside ADAS (Advanced Driver Assistance Systems) could save millions of lives on the roads. The ADAS platform combines several cameras with lidar, radar, mapping and computing technologies. Here are some noteworthy details of its components, followed by a simplified configuration sketch after the list:
Fig. 1: Autonomous vehicle platform (Image source: Intel)
● Cameras: A typical autonomous vehicle needs a minimum of 12 cameras in a 360-degree configuration. Eight of these twelve cameras support self-driving, while the other four support self-parking as well as self-driving. The cameras use the highest-resolution sensors and are the only sensor type able to detect both texture and shape. Combined with advanced vision capabilities and artificial intelligence, the cameras alone can build a complete sensing state. This end-to-end capability is important for achieving “true redundancy” in combination with the other types of sensors.
● Lidar: The Intel-based autonomous vehicle has a total of six sector lidars, three located on the front end and three on the rear. Lidar works in combination with radar and gives the system a completely independent source of shape detection. In this camera-centric approach, lidar is used only for particular tasks, mainly road contouring and long-distance ranging. Restricting the lidar workload results in much lower costs than lidar-centric systems, simpler manufacturing and larger production volumes.
● Radar: Six radar modules on the vehicle provide a 360-degree coverage cocoon around it. Radar is a very mature technology that uses reflected radio waves to detect objects and ascertain their speed.
● Roadbook: This is a high-definition map that offers true redundancy to the camera system for texture-based information such as driving-path geometry and other static scene semantics.
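To make the sensor-suite description above concrete, here is a minimal Python sketch. It is only an illustration, not Intel's actual software: the class names, field names, the 77 GHz radar carrier and the redundancy rule are all assumptions. It records the sensor counts from the list, derives a speed from a radar Doppler shift, and applies a toy "true redundancy" rule in which an object is confirmed only when both the camera subsystem and an independent radar/lidar subsystem report it.

```python
from dataclasses import dataclass


@dataclass
class SensorSuite:
    """Sensor counts as described in the article (illustrative model only)."""
    driving_cameras: int = 8   # cameras that support self-driving
    parking_cameras: int = 4   # cameras that also support self-parking
    front_lidars: int = 3      # sector lidars on the front end
    rear_lidars: int = 3       # sector lidars on the rear end
    radars: int = 6            # 360-degree radar "cocoon"

    def total_cameras(self) -> int:
        return self.driving_cameras + self.parking_cameras


@dataclass
class Detection:
    """One object hypothesis reported by a single sensing subsystem."""
    source: str        # "camera", "radar" or "lidar"
    object_id: str     # track identifier (hypothetical)
    range_m: float     # estimated distance to the object in metres
    speed_mps: float   # estimated speed in metres per second


def radar_doppler_speed(freq_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed from a radar Doppler shift: v = c * df / (2 * f0).
    The 77 GHz carrier is a common automotive radar band, assumed here."""
    c = 3.0e8  # speed of light in m/s
    return c * freq_shift_hz / (2.0 * carrier_hz)


def confirmed_objects(detections):
    """Toy 'true redundancy' rule: keep only objects reported by the camera
    subsystem AND by at least one independent radar/lidar subsystem."""
    camera_ids = {d.object_id for d in detections if d.source == "camera"}
    other_ids = {d.object_id for d in detections if d.source in ("radar", "lidar")}
    return camera_ids & other_ids


if __name__ == "__main__":
    suite = SensorSuite()
    print(f"Cameras: {suite.total_cameras()}, "
          f"lidars: {suite.front_lidars + suite.rear_lidars}, "
          f"radars: {suite.radars}")

    frame = [
        Detection("camera", "car-17", range_m=42.0, speed_mps=13.5),
        Detection("radar", "car-17", range_m=41.3,
                  speed_mps=radar_doppler_speed(freq_shift_hz=7.1e3)),
        Detection("camera", "sign-04", range_m=60.0, speed_mps=0.0),
    ]
    print("Confirmed:", sorted(confirmed_objects(frame)))  # only 'car-17' passes
```

Requiring agreement between two independent sensing chains is one simple way to express the "true redundancy" idea described above; a production stack would fuse tracks probabilistically rather than by matching identifiers.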