Beyond centimeter-precise high-definition maps – sensor-based environment models for the next generation of autonomous driving systems
High-definition maps (HD maps) are established as a key component in many autonomous driving stacks developed by players in the automotive field. They provide a centimeter-precise representation of everything of value to the autonomous driving stack: lane markings, traffic signs, a mapping of navigation information to lanes, and much more. This rich set of information has made them a valuable asset in accelerating the development of autonomous vehicles.
However, the utilization of HD maps comes with some caveats:
First, their acquisition is a costly and time-consuming undertaking. They therefore tend to limit the operational design domain of an autonomous vehicle to certain segments of the road network. While this does not pose a significant limitation during the development phase, it will become a hurdle when autonomous driving fleets are scaled up in the future.
Second, the maps have to be updated constantly, which leads to significant data transfer and storage requirements for the autonomous vehicle.
Third, they tend to be inaccurate in situations such as construction sites, traffic accident scenes, or when objects affect the drivability of certain road segments. This requires the integration of sophisticated fallback strategies.
The driveblocks autonomy software stack is built to overcome these limitations by significantly reducing the required map precision. It uses a combination of lidar and camera data to generate features that help to understand the road structure: lane markings, pylons and vertical panels in construction sites, and concrete or metal barriers. Each of these sensor modalities is present multiple times with different fields of view (e.g., close- and far-range cameras, surround-view cameras, and others). The results are fused to build a reliable and safe model of the road structure ahead (Figure below).
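To make the fusion idea concrete, the following is a minimal, hypothetical sketch of how lane-boundary features from several sensor pipelines could be combined. The feature type, field names, and confidence-weighted averaging are illustrative assumptions, not the actual driveblocks implementation.

```python
from dataclasses import dataclass


@dataclass
class BoundaryFeature:
    """One lane boundary as reported by a single sensor pipeline (hypothetical).

    The boundary is described by lateral offsets (meters) sampled at fixed
    longitudinal stations ahead of the vehicle, plus a confidence in [0, 1].
    """
    source: str              # e.g. "far_range_camera", "lidar"
    offsets_m: list          # lateral offset at each sample station
    confidence: float


def fuse_boundaries(features):
    """Confidence-weighted average of the per-pipeline boundary estimates.

    Each sample station is fused independently, so a single occluded or
    misdetecting sensor cannot corrupt the whole boundary estimate.
    """
    n_stations = len(features[0].offsets_m)
    total_weight = sum(f.confidence for f in features)
    return [
        sum(f.offsets_m[i] * f.confidence for f in features) / total_weight
        for i in range(n_stations)
    ]


# Two pipelines roughly agree on the left boundary; the fused estimate
# lies between them, weighted toward the more confident detection.
camera = BoundaryFeature("far_range_camera", [1.8, 1.8, 1.9], confidence=0.9)
lidar = BoundaryFeature("lidar", [1.7, 1.8, 1.8], confidence=0.6)
fused = fuse_boundaries([camera, lidar])  # e.g. fused[0] == 1.76
```

Because each pipeline contributes an independent estimate, the same structure also supports the robustness argument below: dropping one feature from the input list degrades the result gracefully rather than breaking it.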
This multi-pipeline approach comes with several advantages:
It can be parallelized to minimize end-to-end latency between sensor data acquisition and availability of the fused road model. This enables high update rates and low response times when the situation in front of the vehicle changes drastically.
The various fields of view and perspectives of the sensors lead to high robustness against misdetections or occlusions. While one of the sensors might miss certain information, it is much less likely that all of them do.
The road model fusion utilizes geometric information to combine and validate the features detected by deep learning algorithms earlier in the pipeline. This leads to interpretable results and allows the integration of a safety supervisor.
The split between feature detection and sensor fusion allows the system to generalize well to various kinds of road structures. The fusion algorithm can handle varying numbers of lanes, changing lane counts, as well as highway entry and exit situations. In addition, it estimates the lane width from sensor data, which allows it to be used within construction sites.
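The lane-width estimation and geometric validation described above can be sketched as follows. This is a simplified illustration under assumed names and thresholds (`NOMINAL_LANE_WIDTH_M`, `MIN_PLAUSIBLE_WIDTH_M` are invented for the example), not the production algorithm: width is measured directly as the gap between the fused left and right boundaries, and a plausibility check stands in for the safety supervisor.

```python
NOMINAL_LANE_WIDTH_M = 3.5   # assumed default width outside construction sites
MIN_PLAUSIBLE_WIDTH_M = 2.0  # below this, the model is flagged as implausible


def estimate_lane_width(left_offsets_m, right_offsets_m):
    """Per-station lane width as the gap between left and right boundaries.

    Offsets are lateral positions (meters) of each boundary at fixed
    longitudinal stations; left is positive, right is negative.
    """
    return [left - right for left, right in zip(left_offsets_m, right_offsets_m)]


def validate_widths(widths_m):
    """Geometric plausibility check on the fused result: every station must
    be wide enough to drive through, otherwise the road model is rejected
    and a fallback can be triggered."""
    return all(w >= MIN_PLAUSIBLE_WIDTH_M for w in widths_m)


# A construction site narrows the lane from 3.5 m down to roughly 3.0 m;
# the width is recovered from sensor data rather than read from a map.
left = [1.75, 1.6, 1.5]
right = [-1.75, -1.6, -1.5]
widths = estimate_lane_width(left, right)  # [3.5, 3.2, 3.0]
```

Deriving the width from the fused boundaries rather than from a stored map value is what lets the same model follow narrowed construction-site lanes without a map update.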
The showcased algorithms are available today within the driveblocks autonomy SDK. They can be adapted to your existing sensor setup to achieve the required detection performance. Our open and collaborative development approach gives you full access to the algorithms so you can adjust them yourself – or you can work directly with our engineering team to boost your development speed. In addition, the driveblocks autonomy SDK gives you access to new features and enhanced algorithm performance via regular software releases. This allows us to build the sensor-based environment model for the next generation of autonomous trucks and vehicles.