Team Pegasus has created an autonomous vehicle with high-tech capabilities such as self-parking and automatic braking. Originally a school project, the vehicle uses machine-vision algorithms and data from its on-board sensors to follow street lanes, avoid obstacles in its path, and perform parking maneuvers, all in a user-friendly and scalable design. The main difference between the autonomous car and cars on the road is its size: it is much smaller than an average car and serves only as a scale model.
Equipment used in the project
The students of the DIT168 course were asked to create self-driving vehicles capable of following street lanes, overtaking obstacles, and performing parking maneuvers. For this project they used an RC car, a single-board Linux computer, a web camera, sensors, and microcontrollers. They also used the OpenDaVinci middleware, which they had studied in their course, as both a testing and a deployment platform.
Working of the vehicle
The surprising part of this project is the use of an Android phone, instead of specialized devices, for image processing and decision making. The phone wirelessly transmits instructions via Bluetooth to an Arduino embedded inside the chassis, which in turn controls the physical aspects of the vehicle. The Arduino board is connected to three ultrasonic distance sensors mounted at the front and rear of the car. Three IR sensors are also linked to the Arduino, along with a speed encoder attached to one wheel.
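To make the data flow concrete, here is a minimal Arduino-style sketch of how such a setup could look: the board reports an ultrasonic distance reading over a Bluetooth serial link and accepts a one-byte steering command back from the phone. The pin assignments and the command format are assumptions for illustration, not the team's actual protocol.

// Hypothetical sketch of the Arduino side: reads one front ultrasonic sensor
// and accepts simple steering commands from the phone over a Bluetooth
// serial module. Pins and the one-byte command format are assumed.

#include <Servo.h>
#include <SoftwareSerial.h>

const int TRIG_PIN = 7;           // ultrasonic trigger (assumed wiring)
const int ECHO_PIN = 8;           // ultrasonic echo (assumed wiring)
SoftwareSerial bluetooth(10, 11); // RX, TX to the Bluetooth module (assumed pins)
Servo steering;                   // steering servo on pin 9 (assumed)

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // time out after ~30 ms
  return duration / 58;                           // convert echo time to cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  steering.attach(9);
  bluetooth.begin(9600);
}

void loop() {
  // Forward the latest distance reading to the phone.
  bluetooth.println(readDistanceCm());

  // Interpret a single-byte steering command from the phone: 0-180 degrees.
  if (bluetooth.available()) {
    int angle = bluetooth.read();
    steering.write(constrain(angle, 0, 180));
  }
  delay(50);
}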
The team has also created an Android app called CARduino that communicates via Bluetooth with the on-board MCU to drive the motors and analyze the sensor data. Furthermore, hardware components such as the sensors and motors are handled programmatically through a custom Arduino library, which lets developers complete their tasks easily without worrying about lower-level implementation details.
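The article does not show the actual interface of this custom library, so the following is only an illustrative sketch of the kind of abstraction described: a small class that wraps an analog IR distance sensor so callers never deal with raw ADC readings or conversion constants.

// Illustrative only: an invented example of the kind of abstraction the
// custom Arduino library provides, not the real CARduino interface.

class IrSensor {
public:
  explicit IrSensor(int analogPin) : _pin(analogPin) {}

  // Approximate distance in centimetres for a typical Sharp analog IR sensor.
  // The conversion constants are rough and sensor-dependent.
  float distanceCm() {
    int raw = analogRead(_pin);     // 0..1023 ADC reading
    if (raw < 20) return -1.0;      // out of range
    return 4800.0 / (raw - 20);     // empirical approximation
  }

private:
  int _pin;
};

IrSensor rearIr(A0);  // assumed wiring: one IR sensor on analog pin A0

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(rearIr.distanceCm());
  delay(100);
}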
Apart from that, a 9-DOF Razor IMU board is attached to the front bumper. It provides feedback on the car's movement, but due to magnetic interference from the motors it is not very reliable. The vehicle also has LEDs serving as head and brake lights, along with a driver board that receives signals over serial and blinks the lights.
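The exact serial protocol of the light driver board is not described in the article, so the snippet below is only a hypothetical example of how the Arduino might signal it; the command characters and pins are invented for illustration.

// Hypothetical example: send single-character commands to the LED driver
// board over a software serial link. Pins and command bytes are assumed.

#include <SoftwareSerial.h>

SoftwareSerial lightBoard(2, 3);  // RX, TX pins to the LED driver (assumed)

void setup() {
  lightBoard.begin(9600);
  lightBoard.write('H');          // headlights on (invented command)
}

void loop() {
  lightBoard.write('B');          // brake lights on (invented command)
  delay(500);
  lightBoard.write('b');          // brake lights off (invented command)
  delay(500);
}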
The vehicle has an electronic speed controller (ESC) powered by a 7.2 V battery, which drives the motors according to a PWM signal it receives from the Arduino. A servo motor sets the angle of the vehicle's front wheels.
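As a rough sketch of this arrangement, RC-style ESCs and steering servos are commonly driven from an Arduino with the Servo library, as below; the pins, pulse widths, and arming delay are assumptions rather than the team's calibration values.

// Minimal sketch of driving an RC ESC and a steering servo with PWM.
// Pin numbers and pulse widths are assumed, not the project's values.

#include <Servo.h>

Servo esc;       // electronic speed controller, expects RC-style PWM
Servo steering;  // front-wheel steering servo

void setup() {
  esc.attach(5);                 // assumed ESC signal pin
  steering.attach(9);            // assumed servo signal pin
  esc.writeMicroseconds(1500);   // 1500 us = neutral throttle on most ESCs
  delay(3000);                   // give the ESC time to arm
}

void loop() {
  steering.write(90);            // wheels straight (90 degrees)
  esc.writeMicroseconds(1600);   // gentle forward throttle
  delay(2000);
  esc.writeMicroseconds(1500);   // back to neutral (stop)
  delay(2000);
}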
Drawbacks of earlier projects
The look of the previous end products was not satisfactory: they were characterized by a tower-like structure on which the webcam was mounted, and a hole had to be carved out of the vehicle's default enclosure to accommodate it. In addition, OpenDaVinci involves a certain degree of complexity, especially when it comes to deployment and use; although the software has a lot of potential, the given tasks could be accomplished without it. Lastly, the earlier designs relied on a Linux single-board computer connected to a camera, a microcontroller, and sensors to perform the given tasks.
Conclusion
A future full of driverless cars is just around the corner, and this project shows a lot of potential for further research and development on the subject.