Self-Driving Vehicle System
LIDAR (Light Detection and Ranging) and cameras are among the critical hardware sensors behind self-driving vehicle systems. These “eyes” of the autonomous vehicle pull in raw data, which must then be interpreted safely and accurately.
We achieved state-of-the-art results in odometry estimation by applying machine learning algorithms for object detection and segmentation (cars, pedestrians, road markings, etc.). To refine the estimates obtained from LIDAR-based algorithms, we fused them with GPS/IMU/telemetry data and used cameras as an additional input to recover the full 3D (6-DOF) pose.
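The core idea of the fusion step can be illustrated with a toy example: relative LIDAR odometry drifts over time, while absolute GPS fixes are noisy but drift-free, and a Kalman-style filter blends the two. The sketch below is a minimal 1-D version with illustrative noise values; the class name and parameters are assumptions for this example, not the production pipeline.

```python
# Minimal 1-D Kalman-filter sketch of LIDAR-odometry / GPS fusion.
# All names and noise values here are illustrative assumptions.

class KalmanFusion1D:
    """Fuses relative odometry (predict) with absolute GPS fixes (update)."""

    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
        self.x = x0   # position estimate
        self.p = p0   # estimate variance
        self.q = q    # process noise: LIDAR odometry drift per step
        self.r = r    # measurement noise: GPS fix variance

    def predict(self, delta):
        """Apply a relative displacement from LIDAR odometry; uncertainty grows."""
        self.x += delta
        self.p += self.q

    def update(self, gps_pos):
        """Correct the drifting estimate with an absolute GPS fix."""
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (gps_pos - self.x)
        self.p *= (1.0 - k)                 # uncertainty shrinks after a fix


if __name__ == "__main__":
    kf = KalmanFusion1D()
    for step in range(1, 6):
        kf.predict(1.0)          # odometry: moved 1 m forward
        kf.update(float(step))   # GPS fix at the true position
    print(round(kf.x, 2), round(kf.p, 3))
```

In the real system the state is a full 6-DOF pose and the filter runs over LIDAR, IMU, GPS, and camera inputs, but the predict/update structure is the same.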
Moreover, we applied generative adversarial network (GAN) and capsule network models to learn POV-invariant descriptors of local surface features. This helped us reliably detect road signs and signals in inclement conditions, such as rain at night.
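The learned GAN and capsule models above cannot be reproduced in a few lines, but the property they provide, POV invariance, can be shown with a classical stand-in: a histogram of pairwise distances within a local point patch, which is unchanged by rotating or translating the viewpoint. Everything below (function names, bin counts) is an illustrative assumption, not the deployed descriptor.

```python
# Toy POV-invariant local descriptor: a pairwise-distance histogram.
# Distances between points do not change under rotation or translation,
# so two views of the same surface patch yield the same signature.
import math


def rotate_z(points, angle):
    """Rotate (x, y, z) points about the z-axis, simulating a viewpoint change."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]


def distance_histogram(points, bins=8, max_dist=4.0):
    """Bin all pairwise distances in a patch into a fixed-length descriptor."""
    hist = [0] * bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            b = min(int(d / max_dist * bins), bins - 1)
            hist[b] += 1
    return hist


if __name__ == "__main__":
    patch = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (0.0, 0.9, 0.0), (1.2, 1.3, 0.7)]
    # The descriptor is identical for the original and the rotated view.
    print(distance_histogram(patch) == distance_histogram(rotate_z(patch, 1.3)))
```

A learned descriptor generalizes this idea: instead of hand-picked distance bins, the network is trained so that the same surface feature maps to the same embedding across viewpoints and conditions.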
- Sensor fusion (LIDAR, IMU, GPS, Camera)
- Odometry Estimation
- Object Segmentation
- Capsule Network
- Robot Operating System (ROS)
- Object Recognition
- Generative Adversarial Network
- Performance Optimization
- Real Time Systems
- TensorFlow Serving