A Retrospective of My Autonomous Driving Knowledge
I have been interested in autonomous driving technology since 2009, when I wrote my Bachelor's thesis at Honda Research Institute Europe in Offenbach. At that time, many students and researchers were trying to analyze camera images using image processing algorithms and neural networks. During my Master's degree, I learned the basics of Computer Vision, Machine Learning, and Robotics. I joined Team Hector Darmstadt for an autonomous mobile robot project and finished my Master's studies by writing a thesis about object detection using real laser sensor data.
Eight years have passed, and last year I joined the very exciting "Self-Driving Car Nanodegree" online course from Udacity. To be honest, the course fee was not cheap, and I had to invest around 20 hours a week of my time. However, the quality of the course and the things I learned there were exceptional.
Last week, I asked myself why I like to work on robotics and autonomous driving projects. Was it only hype in the news, or is it really my interest? After contemplating for a while, I found these reasons:
- I love statistics, matrix calculation, and programming
- I love statistics because I like to see graphs and plots
- I love matrix calculation because I like to see moving objects and how my work can influence the environment
- I like to program because it helps me a lot with the boring tasks
- I like to run experiments and draw conclusions from them
Those things are essential for staying focused while learning complex Machine Learning and Robotics algorithms. Without those reasons, I would have chosen another topic for my thesis, and I wouldn't have finished my nine-month Nanodegree program from Udacity.
The Society of Automotive Engineers (SAE) classifies autonomous driving vehicles into levels from 0 to 5. Unfortunately, the mass media often mix up the term autonomous driving in a way that misleads the reader. A fully autonomous car is classified as level 5. Currently, we are still waiting for a level 4 autonomous car.
A vehicle that can drive itself needs several technologies, which are categorized into subsystems. These subsystems are:
- Sensors subsystem
- Perception subsystem
- Planning subsystem
- Control subsystem
The sensors subsystem includes the sensor hardware (such as lidar, camera, radar, GPS, and IMU) and its drivers. The perception subsystem is responsible for extracting useful and structured information from the processed sensor data. The planning subsystem consists of behavior planning, route planning from point A to point B, prediction of other vehicles, and trajectory planning. The control subsystem is responsible for actuating the vehicle, adjusting the steering, acceleration, and braking. The diagram below from Udacity visualizes how these subsystems are connected:
[Diagram from Udacity: how the sensor, perception, planning, and control subsystems are connected]{.img-center}
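To make the data flow more concrete, here is a minimal sketch in Python of how these subsystems could be wired together. The class and method names (`Perception.process`, `Planner.plan`, `Controller.actuate`) are my own illustration and not taken from the Udacity code.

```python
# Minimal sketch of the subsystem pipeline (illustrative names, not Udacity's code).

class Perception:
    def process(self, sensor_data):
        # Extract structured information (lane lines, obstacles, traffic lights)
        # from raw camera, lidar, radar, GPS, and IMU measurements.
        return {"obstacles": [], "lane": None, "ego_pose": sensor_data.get("gps")}

class Planner:
    def plan(self, world_model, destination):
        # Decide on a behavior, predict other vehicles, and generate a trajectory
        # (a list of waypoints with target speeds) towards the destination.
        return [{"x": 0.0, "y": 0.0, "speed": 5.0}]

class Controller:
    def actuate(self, trajectory, ego_pose):
        # Turn the planned trajectory into steering, throttle, and brake commands.
        return {"steering": 0.0, "throttle": 0.3, "brake": 0.0}

def drive_step(perception, planner, controller, sensor_data, destination):
    """One cycle of the sensing -> perception -> planning -> control loop."""
    world_model = perception.process(sensor_data)
    trajectory = planner.plan(world_model, destination)
    return controller.actuate(trajectory, world_model["ego_pose"])
```

In a real car these steps run continuously at a fixed rate (for example as separate ROS nodes), but the data flow stays the same: raw sensor data in, actuation commands out.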
Nowadays, there are two approaches to implementing these subsystems: a robotics approach and a deep learning approach. The robotics approach uses the information from the sensors to measure and model the vehicle's environment and then navigate the vehicle appropriately. The deep learning approach, on the other hand, uses large amounts of sensor data and other information to learn patterns for different situations.
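As a tiny example of the robotics approach, here is a sketch of a one-dimensional Kalman filter that fuses noisy position measurements into a smoother estimate of the vehicle's state. The noise values and the example GPS readings are assumptions for illustration only, not taken from a specific project.

```python
# One-dimensional Kalman filter for a static state: fuse noisy position measurements.
# Process and measurement noise values below are illustrative assumptions.

def kalman_1d(measurements, process_var=0.1, measurement_var=1.0):
    x, p = 0.0, 1000.0            # initial estimate and its (deliberately large) uncertainty
    estimates = []
    for z in measurements:
        # Predict: the state keeps its value, but uncertainty grows with process noise.
        p += process_var
        # Update: blend prediction and measurement, weighted by their uncertainties.
        k = p / (p + measurement_var)   # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    noisy_gps = [5.2, 4.8, 5.1, 5.4, 4.9, 5.0]   # hypothetical noisy readings
    print(kalman_1d(noisy_gps))                  # estimates converge towards ~5.0
```

The deep learning approach would instead feed the raw sensor data (for example camera images) into a neural network that learns the mapping to an output such as a steering angle directly from recorded driving data.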
During my studies, I learned and worked intensively with the robotics approach. The Udacity Nanodegree program allowed me to refresh my robotics knowledge and to learn and practice the latest deep learning approaches. You can find my Udacity projects in my GitHub repositories.
These are some results of my projects:
- Semantic Segmentation with Deep Learning to find free space on the road
- Applying Model Predictive Control in a simulation
- Simulation of Behavioral Cloning with Deep Learning
- Traffic light detection and vehicle control in a real car
Autonomous driving technology will continue to advance in 2018, and I'm very excited to follow the technological progress and improvements until a level 4 autonomous car can be produced in series.