Autonomous vehicles are equipped with a diverse sensor set – cameras, lidars, radars, and others. These sensors produce large volumes of data that have to be prepared for the various perception tasks, including appropriate filtering, merging, and time synchronization. The resulting data is usually represented as 3D point clouds or images.
It’s important to keep latency through the full perception pipeline to a minimum. Our algorithms serve as building blocks with standardized interfaces for your customized pipeline – ensuring minimum latency, real-time capability, and reliable operation.
Sensors are the eyes and ears of an autonomous vehicle. The collected data is used to identify other traffic participants, like vehicles or pedestrians, and objects which might block the way, such as boxes or cargo lost earlier on the road.
The driveblocks stack provides a variety of implementations for object detection using classical algorithms, such as point cloud clustering, and data-driven approaches, such as image- or point-cloud-based neural networks. The implementations are complemented by a set of tools to tailor them to your application. These allow you to evaluate the performance of the current setup, improve the pre-trained weights of the neural networks, or analyze the execution times.
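To illustrate the classical approach, the sketch below shows Euclidean point cloud clustering in its simplest form: points closer than a distance tolerance are flood-filled into the same obstacle cluster. This is a generic textbook technique, not the driveblocks implementation; the function name and sample data are hypothetical, and a production version would use a k-d tree instead of the naive O(n²) neighbor search.

```python
from math import dist

def euclidean_cluster(points, tolerance):
    """Group points whose chain of neighbors lies within `tolerance`.

    Naive flood-fill over the pairwise-distance graph; real stacks
    accelerate the neighbor query with a k-d tree or voxel grid.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            neighbors = [j for j in unvisited
                         if dist(points[idx], points[j]) <= tolerance]
            for j in neighbors:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append(sorted(cluster))
    return clusters

# Two obstacles in a 2D slice of a point cloud (made-up coordinates)
scan = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3),   # obstacle A
        (5.0, 5.0), (5.1, 5.2)]               # obstacle B
clusters = euclidean_cluster(scan, tolerance=0.5)
```

Each returned cluster can then be wrapped in a bounding box to form an object detection.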
An autonomous vehicle has to understand where to drive. Lane markings guide the way on the public road, pylons mark the allowed driving corridors in construction sites, and the driving surface in off-road applications is distinguished by the color contrast of the compacted material.
The driveblocks perception algorithms are completed by a set of neural networks ready to take on the drivable space detection task. They are accompanied by various customization options and allow you to add data specific to your application and sensor setup to enhance performance.
Driverless vehicles use several sensors of different types, mounted at varying positions. The generated detections have to be combined into a reliable, safe, and accurate representation of the world around the vehicle.
Utilize our sensor fusion for driving corridor identification to overcome the limitations of high-definition maps. Our technology operates reliably in edge cases, such as construction sites or off-road applications. We leverage a unique probabilistic fusion technique to combine up to 8 data sources (e.g. drivable-space detections in the image or point cloud domain) in real time.
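One standard way to combine per-source probabilities – shown here purely as a generic illustration, not as the proprietary driveblocks technique – is Bayesian log-odds fusion under an independence assumption. Each source reports its belief that a grid cell is drivable, and agreeing sources reinforce each other:

```python
from math import log, exp

def to_log_odds(p):
    return log(p / (1.0 - p))

def from_log_odds(l):
    return 1.0 / (1.0 + exp(-l))

def fuse_drivable(probabilities):
    """Fuse per-source drivable-space probabilities for one grid cell.

    Assumes conditionally independent sources; summing log-odds is the
    classic Bayesian update used in occupancy-grid mapping.
    """
    return from_log_odds(sum(to_log_odds(p) for p in probabilities))

# Hypothetical example: camera and two lidar detectors all lean "drivable"
fused = fuse_drivable([0.7, 0.6, 0.8])
```

Note how the fused belief exceeds any single source's confidence – three moderately confident detectors agreeing is strong evidence.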
Precise knowledge of the vehicle's speed and position is essential for safe driving behavior. We leverage the strengths of various information sources, such as IMU, wheel speeds, and visual odometry, to obtain the best possible estimate. In addition, the driveblocks state estimator enables time synchronization for individual sensor sources and fault-tolerant fusion, which is key to achieving your safety targets.
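The core idea behind such a fusion can be sketched with a scalar Kalman filter for vehicle speed: the IMU acceleration drives the prediction step, and a wheel-speed measurement corrects it. This is a minimal textbook sketch under simplified assumptions (one state, fixed noise variances), not the driveblocks estimator itself:

```python
def kf_speed_step(v, P, accel, dt, q, z_wheel, r):
    """One scalar Kalman-filter cycle for vehicle speed.

    Predict with IMU acceleration (process noise variance q), then
    correct with a wheel-speed measurement (noise variance r).
    """
    # Predict: integrate acceleration, grow the uncertainty
    v_pred = v + accel * dt
    P_pred = P + q
    # Update: blend the measurement in proportion to the Kalman gain
    K = P_pred / (P_pred + r)
    v_new = v_pred + K * (z_wheel - v_pred)
    P_new = (1.0 - K) * P_pred
    return v_new, P_new

# Hypothetical cycle: vehicle at ~10 m/s, accelerating at 1 m/s^2
v, P = kf_speed_step(v=10.0, P=1.0, accel=1.0, dt=0.1,
                     q=0.01, z_wheel=10.2, r=0.25)
```

The estimate lands between the predicted and measured speed, and the posterior variance shrinks – exactly the behavior that makes the fused estimate better than either source alone.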
After recognizing other traffic participants on the road, an intelligent autonomous vehicle has to solve a complex problem: predict what they want to do and where they want to drive. This could be a lane change on a crowded highway or the resolution of an undefined situation at a four-way stop.
We provide you with pre-defined behavior and motion prediction models which can be used standalone or integrated within a larger decision making framework. These are based on a combination of physics- and data-driven prediction to achieve good performance in all circumstances.
Commercial vehicles can face vastly complex situations. While standard highway driving seems like an easy task, edge cases such as construction sites, obstacles on the road, or other drivers not adhering to traffic rules can be a significant challenge for an autonomous vehicle. The same kind of surprising and unstructured situations are found in off-road applications, such as mining or container terminals.
The driveblocks motion planning module provides a graph-based algorithm to evaluate thousands of potential future situations and choose the best possible outcome – for every involved traffic participant. Combined with the intent prediction module, it accounts for interactions and solves even the most complex dynamic situations.
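The essence of graph-based planning can be sketched as a dynamic-programming search over a spatial lattice: each layer holds candidate lateral offsets, edges penalize deviation from the lane center and steering effort, and the cheapest path through the graph wins. This toy example (hypothetical weights and candidates, not the driveblocks algorithm) shows how removing a blocked candidate makes the planner swerve around an obstacle:

```python
def best_path(layers, offset_w=1.0, steer_w=0.5):
    """Min-cost path through a lattice of lateral-offset candidates.

    cost[j] tracks the cheapest way to reach offset j in the current
    layer; backpointers recover the winning sequence at the end.
    """
    cost = [offset_w * abs(o) for o in layers[0]]
    back = []
    for i in range(1, len(layers)):
        prev, cur = layers[i - 1], layers[i]
        new_cost, ptr = [], []
        for o in cur:
            cands = [cost[k] + steer_w * abs(o - p) for k, p in enumerate(prev)]
            k_best = min(range(len(prev)), key=lambda k: cands[k])
            new_cost.append(cands[k_best] + offset_w * abs(o))
            ptr.append(k_best)
        cost, back = new_cost, back + [ptr]
    # Backtrack from the cheapest terminal node
    j = min(range(len(cost)), key=lambda k: cost[k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [layers[i][path[i]] for i in range(len(layers))]

# Three layers of candidate offsets; an obstacle blocks offset 0 in the
# middle layer, so that candidate is simply absent there
layers = [[0.0], [-1.0, 1.0], [0.0, 1.0]]
plan = best_path(layers)
```

Scaling the same idea to thousands of candidates per layer – and repeating it per predicted behavior of the other traffic participants – yields the interaction-aware search described above.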
Moving a heavy truck requires years of experience. Our motion control algorithms encapsulate this in a reliable software component, translating the decisions and plans made by other software components into steering, brake and powertrain commands.
We utilize optimization-based controller designs to account for multiple targets in the driving task, such as energy efficiency, smooth operation, and safety. In particular, this allows the behavior to be adjusted easily to various operating conditions and speed levels without re-tuning the algorithms.
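The multi-objective trade-off can be illustrated with the smallest possible optimization-based controller: a single-step quadratic cost over tracking error and control effort, minimized in closed form. This is a generic sketch with made-up names and weights, not the driveblocks controller; real designs solve a model-predictive problem over a full horizon, but the weighting principle is the same:

```python
def optimal_steer(error, gain, w_track=1.0, w_effort=0.1):
    """Minimize J(u) = w_track*(error + gain*u)**2 + w_effort*u**2.

    Setting dJ/du = 0 gives the closed-form optimum below: the command
    shrinks as the effort weight grows, trading tracking accuracy for
    smooth, energy-efficient actuation without re-tuning any gains.
    """
    return -w_track * gain * error / (w_track * gain ** 2 + w_effort)

# Hypothetical lateral error of 0.5 m with unit steering gain
u = optimal_steer(error=0.5, gain=1.0)
```

Retargeting the behavior – e.g. smoother actuation for a loaded truck – is then just a change of weights in the cost function rather than a controller redesign.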