Experience our Perception Development Kit

A demonstration and evaluation kit for the driveblocks Autonomy Platform – witness real-world autonomy on your vehicle under your operating conditions.
Request the datasheet

Camera detection

Detection and classification of 2D bounding boxes around objects based on camera data. The module supports 18 classes out of the box (e.g. cars, trucks, pedestrians, construction vehicles, and rarer ones such as animals). The components are flexible in terms of input image size and resolution, and have been developed with a strong focus on adaptability to new scenes and robust performance in diverse conditions such as varying lighting and weather.

Detection and classification of 2D line-like objects based on camera data. The module supports 11 classes out of the box (e.g. white lines, yellow lines, dashed lines, grass, and more).

LIDAR detection

This component estimates potentially large objects that would block the autonomous vehicle from proceeding. Importantly, it is purely geometry-based and can detect arbitrarily complex shapes. It is therefore especially suited to large and complex machinery, building exteriors, or piles of sand and mud.

Outputs a 3D estimate of the ground surface around the vehicle. Provides height and slope estimates and the option to classify ground based on slope, e.g. whether it is too steep to operate on.
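As a rough illustration of slope-based ground classification, the sketch below derives a slope angle from a local surface normal and labels it against a threshold. The threshold value and function names are assumptions for illustration, not part of the actual module.

```python
import math

# Assumed vehicle-specific limit for illustration only.
MAX_OPERATING_SLOPE_DEG = 15.0

def slope_from_normal(nx: float, ny: float, nz: float) -> float:
    """Slope angle in degrees: angle between the surface normal and vertical."""
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return math.degrees(math.acos(abs(nz) / norm))

def classify_cell(slope_deg: float, max_slope_deg: float = MAX_OPERATING_SLOPE_DEG) -> str:
    """Label a ground cell as drivable or too steep based on its slope estimate."""
    return "drivable" if slope_deg <= max_slope_deg else "too_steep"
```

For example, a flat surface normal (0, 0, 1) yields a slope of 0° and is classified as drivable, while a 45° embankment exceeds the assumed 15° limit.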

Estimation of 3D bounding boxes based on point-cloud clustering. Clustering is performed based on Euclidean distance and motion similarity, which improves performance in dynamic scenes with many objects.
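A minimal sketch of the Euclidean-distance part of such clustering: points belong to the same cluster if they are connected by a chain of neighbors closer than a distance threshold. This naive O(n²) version is purely illustrative; the motion-similarity term and the production implementation are not shown.

```python
from collections import deque

def euclidean_cluster(points, max_dist=0.5):
    """Group 3D points into clusters by chained Euclidean proximity."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        cluster = [seed]
        while queue:
            i = queue.popleft()
            # Pull in all unvisited points within max_dist of point i.
            for j in list(unvisited):
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                if d2 <= max_dist ** 2:
                    unvisited.remove(j)
                    queue.append(j)
                    cluster.append(j)
        clusters.append(cluster)
    return clusters
```

Each resulting cluster can then be enclosed in an axis-aligned or oriented 3D bounding box.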

Sensor-Fusion

A combination of 3D LIDAR bounding boxes and 2D camera detections. Enables object tracking across frames, higher robustness against brief misdetections, and motion estimation. Highly configurable and supports diverse sensor sets without retraining or major modifications.
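One common way to combine the two detection sources is to project the 3D LIDAR boxes into the camera image and associate them with 2D detections by overlap. The sketch below shows a simple greedy IoU matching step under that assumption; it is an illustration, not the module's actual fusion logic.

```python
def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(projected_lidar_boxes, camera_boxes, min_iou=0.3):
    """Greedily match projected LIDAR boxes to camera detections by IoU."""
    pairs, used = [], set()
    for i, lb in enumerate(projected_lidar_boxes):
        best_j, best_iou = None, min_iou
        for j, cb in enumerate(camera_boxes):
            if j in used:
                continue
            v = iou_2d(lb, cb)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

Matched pairs inherit the 2D class label and the 3D position, which is what makes tracking and motion estimation across frames possible.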

Generates a lanelet-style local map around the vehicle for navigation and planning purposes. The functionality is mainly based on the 2D lane inputs and can process an arbitrary number of camera inputs.

Foundation model

Models compatible with the driveblocks camera detection modules. These have been pre-trained on a large and diverse database, ranging from construction and agriculture to mining and on-road scenes.

Work process applications

Generates trajectories for the autonomous vehicle based on the behavior of a leader vehicle, which defines the target behavior.

Supervises the area near the vehicle, especially in the driving direction, and initiates emergency braking when objects could lead to a collision.
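A common trigger criterion for such supervision is time-to-collision (TTC): brake if an approaching object would be reached within a threshold time. The sketch below illustrates that idea; the function name and threshold are assumptions, not the product's actual logic.

```python
def should_emergency_brake(distance_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 2.0) -> bool:
    """Trigger braking if time-to-collision drops below the threshold.

    closing_speed_mps > 0 means the object is approaching the vehicle.
    """
    if closing_speed_mps <= 0:
        return False  # object is stationary relative to us or moving away
    return distance_m / closing_speed_mps < ttc_threshold_s
```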

Enhances existing GNSS-based planning systems with reactive capabilities.

Enhances existing route-based autonomy systems with reactive capabilities.

Inside the driveblocks Autonomy Platform.

We accelerate your journey to fully autonomous driving with ready-to-deploy Physical AI and software modules for industrial vehicle autonomy.
Request a demo

What people say about driveblocks

«
The highly modular approach in conjunction with the utilization of open-source technologies makes the driveblocks software platform an ideal framework for autonomous driving in the commercial vehicle sector
Markus Lienkamp | Professor of Automotive Technology at TUM
«
The driveblocks software platform enables OEMs to automate their vehicles in weeks instead of years and allows them to achieve certification targets cost efficiently
Christian Wagner | Investor and Founder in-tech GmbH
«
Modular approach and responsive support by driveblocks allows rapid development of autonomous driving systems while enabling flexible integration with target platforms, whether plugging their modules into existing autonomy stacks, or building upon their modules to create novel solutions.
Simon Thompson | PFLAB lead at TIER IV