Autonomous vehicles are equipped with a diverse sensor set – cameras, lidars, radars, and others. These sensors produce large volumes of data that have to be prepared for the various perception tasks. This preparation includes appropriate filtering, merging, and time synchronization of the sensor data, which is usually represented as 3D point clouds or images.
It’s important to ensure minimum latency through the full perception pipeline. Our algorithms serve as building blocks with standardized interfaces for your customized pipeline – delivering real-time capabilities and reliable operation.
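As a rough illustration of the time synchronization step, the sketch below pairs measurements from two sensor streams by nearest timestamp. The function name, buffering scheme, and tolerance value are illustrative assumptions, not part of the driveblocks interfaces.

```python
# Minimal sketch of nearest-timestamp matching between two sensor streams.
# Names and the tolerance value are illustrative assumptions.
import bisect

def match_nearest(timestamps_a, timestamps_b, tolerance_s=0.05):
    """Pair each timestamp in stream A with the closest one in sorted
    stream B, provided they differ by at most `tolerance_s` seconds."""
    pairs = []
    for t_a in timestamps_a:
        i = bisect.bisect_left(timestamps_b, t_a)
        # Candidates: the neighbors just before and after the insertion point.
        candidates = timestamps_b[max(i - 1, 0):i + 1]
        if not candidates:
            continue
        t_b = min(candidates, key=lambda t: abs(t - t_a))
        if abs(t_b - t_a) <= tolerance_s:
            pairs.append((t_a, t_b))
    return pairs

# Example: a 10 Hz camera matched against a ~10 Hz lidar with jitter.
camera_ts = [0.00, 0.10, 0.20, 0.30]
lidar_ts  = [0.01, 0.12, 0.24, 0.29]
print(match_nearest(camera_ts, lidar_ts))
# -> [(0.0, 0.01), (0.1, 0.12), (0.2, 0.24), (0.3, 0.29)]
```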
Sensors are the eyes and ears of an autonomous vehicle. The collected data is used to identify other traffic participants, such as vehicles or pedestrians, and objects that might block the way, such as boxes or cargo lost on the road.
The driveblocks stack provides a variety of implementations for object detection using classical algorithms, such as point cloud clustering, and data-driven approaches, such as image- or point-cloud-based neural networks. The implementations are complemented by a set of tools to adapt them to your application. These allow you to evaluate the performance of the current setup, fine-tune the pre-trained weights of the neural networks, or analyze the execution times.
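To make the classical approach concrete, here is a minimal Euclidean clustering sketch over a 3D point cloud. It is a generic textbook formulation, not the driveblocks implementation, and the radius and minimum cluster size are illustrative assumptions.

```python
# Minimal sketch of Euclidean point cloud clustering. Parameter values
# are illustrative; a production stack would tune them per sensor.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5, min_size=5):
    """Group 3D points whose neighbors lie within `radius` meters.
    Returns a list of index arrays, one per cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:  # breadth-first growth over radius neighbors
            idx = frontier.pop()
            for n in tree.query_ball_point(points[idx], r=radius):
                if n in unvisited:
                    unvisited.remove(n)
                    frontier.append(n)
                    cluster.append(n)
        if len(cluster) >= min_size:  # discard sparse noise
            clusters.append(np.array(cluster))
    return clusters

# Two well-separated blobs plus a lone outlier that gets filtered out.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal([0, 0, 0], 0.1, (20, 3)),
    rng.normal([5, 0, 0], 0.1, (20, 3)),
    [[10.0, 10.0, 10.0]],
])
print([len(c) for c in euclidean_clusters(cloud)])  # -> [20, 20]
```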
An autonomous vehicle has to understand where to drive. Lane markings guide the way on public roads, pylons mark the allowed driving corridors in construction sites, and in off-road applications the drivable surface stands out through the color difference of the compacted material.
The driveblocks perception algorithms are complemented by a set of neural networks ready to take on the drivable space detection task. They come with various customization options and allow you to add data specific to your application and sensor setup to enhance performance.
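As a simplified illustration of what such a network's output can be turned into, the sketch below converts a per-pixel drivable-space probability map into a per-column free-space boundary. The thresholding and bottom-up column scan are assumptions for the example, not the driveblocks post-processing.

```python
# Illustrative sketch: extracting a free-space boundary from a per-pixel
# drivable-space probability map, as a segmentation network might output.
import numpy as np

def freespace_boundary(prob_map, threshold=0.5):
    """For each image column, return the topmost row r such that every
    pixel from row r down to the bottom of the image is drivable.
    r == 0 means the whole column is drivable; r == h means the column
    is blocked directly at the bottom."""
    h, w = prob_map.shape
    drivable = prob_map >= threshold
    boundary = np.zeros(w, dtype=int)
    for col in range(w):
        blocked = np.flatnonzero(~drivable[::-1, col])  # scan bottom-up
        if blocked.size:
            boundary[col] = h - blocked[0]
    return boundary

# Toy 4x3 probability map: the right column is blocked one row higher.
prob = np.array([
    [0.1, 0.2, 0.1],
    [0.9, 0.8, 0.1],
    [0.9, 0.9, 0.9],
    [0.9, 0.9, 0.9],
])
print(freespace_boundary(prob))  # -> [1 1 2]
```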
Driverless vehicles use several sensors of different types, mounted at varying positions. The resulting detections have to be combined into a reliable, safe, and accurate representation of the world around the vehicle.
Utilize our sensor fusion for driving corridor identification to overcome the limitations of high-definition maps. Our technology operates reliably in edge cases, such as construction sites or off-road applications. We leverage a unique probabilistic fusion technique to combine up to 8 data sources (e.g. drivable space detections in the image or point cloud domain) in real time.
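The exact fusion technique is not described here, but a common probabilistic baseline combines independent per-cell estimates in the log-odds domain; the sketch below shows that general principle on a toy grid with three hypothetical sources.

```python
# General-principle sketch of probabilistic fusion in the log-odds domain,
# not the proprietary driveblocks technique. Assumes conditionally
# independent measurements and a uniform 0.5 prior per cell.
import numpy as np

def fuse_log_odds(probability_grids, eps=1e-6):
    """Fuse per-cell drivable-space probabilities from several sources
    defined on the same grid into one combined probability grid."""
    grids = np.clip(np.asarray(probability_grids, dtype=float), eps, 1 - eps)
    log_odds = np.log(grids / (1.0 - grids)).sum(axis=0)  # evidence adds up
    return 1.0 / (1.0 + np.exp(-log_odds))                # back to probability

# Three sources on a tiny 1x3 grid: strong agreement on the first cell,
# no information on the second, mild disagreement on the third.
camera = np.array([[0.9, 0.5, 0.2]])
lidar  = np.array([[0.8, 0.5, 0.3]])
radar  = np.array([[0.7, 0.5, 0.5]])
print(fuse_log_odds([camera, lidar, radar]).round(2))
# -> approximately [[0.99 0.5 0.1]]
```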
Precise knowledge of the vehicle's speed and position is essential for safe driving behavior. We leverage the strengths of various information sources, such as IMU, wheel speeds, and visual odometry, to obtain the best possible estimate. In addition, the driveblocks state estimator enables time synchronization for individual sensor sources and fault-tolerant fusion, which is key to achieving your safety targets.
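As a minimal illustration of such a fusion, the sketch below runs a 1D Kalman filter that predicts vehicle speed from IMU acceleration and corrects it with wheel speed measurements. The noise parameters and filter structure are illustrative assumptions; the driveblocks state estimator additionally handles time synchronization and fault tolerance across more sources.

```python
# Minimal 1D Kalman filter sketch for speed estimation. All noise values
# are illustrative assumptions, not calibrated parameters.
class SpeedKalmanFilter:
    def __init__(self, accel_var=0.2, wheel_var=0.05):
        self.v = 0.0        # speed estimate [m/s]
        self.p = 1.0        # estimate variance
        self.q = accel_var  # process noise from integrating IMU acceleration
        self.r = wheel_var  # wheel speed measurement noise

    def predict(self, accel, dt):
        """Propagate the speed with measured acceleration over dt seconds."""
        self.v += accel * dt
        self.p += self.q * dt

    def update(self, wheel_speed):
        """Correct the prediction with a wheel speed measurement."""
        k = self.p / (self.p + self.r)  # Kalman gain
        self.v += k * (wheel_speed - self.v)
        self.p *= (1.0 - k)

kf = SpeedKalmanFilter()
for accel, wheel in [(1.0, 0.12), (1.0, 0.21), (0.0, 0.20)]:
    kf.predict(accel, dt=0.1)
    kf.update(wheel)
print(round(kf.v, 2))  # converges toward the ~0.2 m/s wheel speed readings
```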