AevaScenes Accelerates Innovation in Motion Forecasting and Scene Flow

Aeva has launched AevaScenes, thought to be the world’s first open dataset providing synchronized, multi-sensor frequency-modulated continuous wave (FMCW) 4D lidar and camera data, complete with object velocity measurements, semantic segmentation, tracking, and lane-line annotations.

Futuristic concept of an autonomous self-driving van on a city highway, with visualized AI sensors scanning the road ahead. Image Credit: Gorodenkoff/Shutterstock.com

The open-access dataset is designed to accelerate research in autonomous vehicle perception, supporting innovation in object detection, motion forecasting, scene flow, and trajectory estimation. It is now freely available for academic and non-commercial purposes.

The high-fidelity FMCW lidar data offers researchers and developers precise, densely detailed range sensing, enabling the capture of depth and velocity information even in challenging driving environments.
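FMCW lidar derives each point's radial velocity directly from the Doppler shift of the returned beam. The sketch below illustrates that relationship; the 1550 nm wavelength is an assumption typical of FMCW lidar systems, not a confirmed AevaScenes specification.

```python
# Illustrative sketch: per-point radial velocity from an FMCW Doppler shift.
# WAVELENGTH_M is an assumed value typical of FMCW lidar, not a confirmed
# AevaScenes parameter.

WAVELENGTH_M = 1550e-9  # assumed laser wavelength (1550 nm)

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity in m/s from a measured Doppler shift in Hz.

    v_r = lambda * f_d / 2; the factor of 2 accounts for the
    round trip of the reflected beam.
    """
    return WAVELENGTH_M * doppler_shift_hz / 2.0

# A Doppler shift of about 12.9 MHz corresponds to roughly 10 m/s:
v = radial_velocity(12.9e6)
```

Because velocity comes from the measurement itself rather than from differencing successive frames, each point carries motion information instantaneously, which is what makes the dataset's scene-flow and motion-forecasting annotations possible.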

By integrating FMCW 4D lidar with high-resolution camera imagery, the dataset supports rich multimodal sensor fusion, facilitating research in detection, segmentation, tracking, sensor calibration, and novel perception tasks.
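A common first step in lidar-camera fusion is projecting lidar points into the image plane. The sketch below uses a standard pinhole model with illustrative intrinsics; AevaScenes ships its own calibration data, whose exact format is not described in this article.

```python
# Hedged sketch of lidar-to-camera projection via a standard pinhole model.
# The intrinsic values used below (fx, fy, cx, cy) are illustrative
# placeholders, not AevaScenes calibration parameters.

def project_point(xyz, fx, fy, cx, cy):
    """Project a 3D point in the camera frame (x right, y down, z forward)
    to pixel coordinates (u, v). Returns None for points behind the camera."""
    x, y, z = xyz
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A point 10 m ahead and 1 m to the right, with illustrative intrinsics:
uv = project_point((1.0, 0.0, 10.0), fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
```

In practice, points would first be transformed from the lidar frame into the camera frame using the dataset's extrinsic calibration before projection.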

Aeva’s interactive sensor diagram illustrates wide and narrow fields of view for lidar and camera systems, allowing users to examine sensor features and select configurations most appropriate for their research requirements.

AevaScenes also offers what is claimed to be the world's first ultra-long-range annotations, covering object detection, semantic segmentation, and lane detection at distances of up to 400 m.

AevaScenes is the first dataset to bring together long-range FMCW lidar with velocity information and rich camera data, creating a new benchmark for perception research. By opening access to this level of fidelity and scale, we’re giving researchers the tools to push the boundaries of what’s possible in autonomous driving, whether that’s advancing detection and tracking or unlocking entirely new approaches to motion understanding.

James Reuther, Chief Engineer, Aeva

Key Features

AevaScenes features 100 carefully selected sequences recorded in and around the San Francisco Bay Area, encompassing urban and highway driving in various day and night conditions. This dataset comprises 10,000 frames of time-synchronized FMCW lidar and RGB camera data at a frequency of 10 Hz, gathered using Aeva’s Mercedes Metris test vehicles.

The sensor suite is composed of six FMCW lidar sensors (four with a wide field of view and two with a narrow field of view) and six high-resolution RGB cameras with matching fields of view. The cameras capture 4K RGB images at 10 frames per second.

Data is available as PCD point clouds, JPEG images, and JSON annotations, totaling around 200 GB (roughly 2 GB per sequence). The dataset is evenly distributed between highway and urban settings, as well as day and night conditions, with all sequences recorded in clear weather on dry roads.
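PCD files are self-describing: an ASCII header lists the per-point fields before the data payload. The sketch below shows one way to inspect that header; the `velocity` field name is an assumption based on the dataset's description, and the actual AevaScenes schema may differ.

```python
# Hedged sketch: reading a PCD header to discover its per-point fields.
# The sample header below, including the 'velocity' field, is an assumed
# illustration; the real AevaScenes schema may differ.

def parse_pcd_header(text: str) -> dict:
    """Parse PCD header lines (up to and including DATA) into a dict
    mapping each keyword to its list of values."""
    header = {}
    for line in text.splitlines():
        if line.startswith("#"):
            continue  # skip comment lines
        key, _, value = line.partition(" ")
        header[key] = value.split()
        if key == "DATA":
            break  # the point payload follows; stop parsing
    return header

sample = """VERSION .7
FIELDS x y z velocity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 4
HEIGHT 1
POINTS 4
DATA ascii
"""
hdr = parse_pcd_header(sample)
```

Checking the header first lets a loader confirm which channels (position, velocity, intensity, and so on) each sequence actually provides before committing to a parsing layout.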

What This Means for Research

By opening AevaScenes to the global research community, Aeva is setting a new technical standard. The combination of FMCW lidar’s precision and camera-based context could redefine how machines perceive motion, depth, and intent. For autonomous systems, that means not just seeing the road ahead, but also understanding it. 
