Open-source dataset for autonomous driving in wintry weather.
Overview
The Canadian Adverse Driving Conditions (CADC) dataset aims to promote research into self-driving in adverse weather. It is the first public dataset to focus on real-world driving data collected in snowy conditions.
It features:
56,000 camera images
7,000 LiDAR sweeps
75 scenes of 50-100 frames each
10 annotation classes
Full sensor suite: 1 LiDAR, 8 cameras, post-processed GPS/IMU
Adverse weather driving conditions, including snow
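As a rough sketch of working with the raw data, the snippet below loads a single LiDAR sweep. It assumes a KITTI-style packed binary layout (four float32 values per point: x, y, z, intensity), modelled on the CADC development kit; the file name is purely illustrative.

    import numpy as np

    def load_lidar_sweep(path):
        # Assumed layout: packed float32 records of (x, y, z, intensity),
        # as in KITTI-style .bin files used by the CADC devkit.
        points = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
        return points[:, :3], points[:, 3]  # xyz coordinates, intensity

    # Hypothetical file name; the dataset organizes sweeps by date and sequence.
    xyz, intensity = load_lidar_sweep("0000000000.bin")
    print(xyz.shape, intensity.min(), intensity.max())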
The Autonomoose is an autonomous vehicle platform created as a joint effort between the Toronto Robotics and AI Laboratory (TRAIL) and the Waterloo Intelligent Systems Engineering Lab (WISE Lab) at the University of Waterloo. The platform has enabled us to test various software modules for autonomous driving on public roads.
Data Collection
For this dataset, routes were chosen to cover varying levels of traffic and a variety of vehicle types, and every drive took place during snowfall.
Sequences were selected from data collected within the Region of Waterloo, Canada.
Car Setup
We collected data using the Autonomoose, a Lincoln MKZ Hybrid mounted with a full suite of LiDAR, inertial and vision sensors.
Please refer to the figure below for the sensor configuration of the Autonomoose.
Sensor Calibration
To achieve a high quality multi-sensor dataset, it is essential to calibrate the extrinsics and intrinsics of every sensor.
We express extrinsic coordinates relative to the ego frame, i.e., the midpoint of the rear vehicle axle.
The most relevant steps are listed below; a projection sketch that uses these calibration quantities follows the list.
LiDAR extrinsics
Camera extrinsics
Camera intrinsic calibration
IMU extrinsics
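To make the frame conventions concrete, here is a minimal sketch of chaining the calibration quantities: the LiDAR extrinsic maps points into the ego frame, the inverted camera extrinsic maps them into a camera frame, and the intrinsic matrix projects them to pixels. The matrices below are placeholders (identity transforms and a generic pinhole intrinsic); the real values ship with the dataset's calibration files.

    import numpy as np

    # Illustrative calibration values; the real matrices come with the dataset.
    T_ego_from_lidar = np.eye(4)        # LiDAR extrinsics: sensor -> ego frame
    T_ego_from_cam = np.eye(4)          # camera extrinsics: camera -> ego frame
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])     # camera intrinsics (pinhole model)

    def to_homogeneous(points_xyz):
        # Append a 1 to each 3D point so 4x4 transforms can be applied.
        return np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])

    def project_lidar_to_image(points_lidar):
        # LiDAR frame -> ego frame.
        pts_ego = (T_ego_from_lidar @ to_homogeneous(points_lidar).T).T
        # Ego frame -> camera frame (invert the camera extrinsic).
        pts_cam = (np.linalg.inv(T_ego_from_cam) @ pts_ego.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0      # keep points in front of the camera
        uvw = (K @ pts_cam[in_front].T).T
        return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> pixel coordinates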
Data Annotation
Scale’s data annotation platform combines human work and review with smart tools, statistical confidence checks and machine learning checks to ensure the quality of annotations.
The resulting accuracy is consistently higher than what a human or synthetic labeling approach can achieve independently, as measured against seven rigorous quality areas for each annotation.
The CADC includes 3D bounding boxes for 10 object classes, along with a rich set of class-specific attributes. For detailed definitions of each class and example images, please see the annotation instructions. The classes are listed below, followed by a sketch of parsing the annotations:
Cars
Pedestrians
Trucks
Bus
Garbage Containers on Wheels
Traffic Guidance Objects
Bicycle
Pedestrian With Object
Horse and Buggy
Animals
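As a rough illustration of consuming these labels, the sketch below parses per-frame cuboids from a JSON annotation file. The field names ("cuboids", "position", "dimensions", "yaw", "label") are assumptions modelled on common devkit schemas; the annotation instructions are the authoritative reference for the actual format.

    import json
    from dataclasses import dataclass

    @dataclass
    class Cuboid:
        # One 3D bounding box: class label, center, size, and heading.
        label: str
        center: tuple   # (x, y, z) in metres, ego frame assumed
        size: tuple     # (length, width, height) in metres
        yaw: float      # heading angle in radians

    def load_cuboids(path):
        # Field names here are assumed, not confirmed by the source.
        with open(path) as f:
            frames = json.load(f)
        return [
            [Cuboid(label=c["label"],
                    center=(c["position"]["x"], c["position"]["y"], c["position"]["z"]),
                    size=(c["dimensions"]["x"], c["dimensions"]["y"], c["dimensions"]["z"]),
                    yaw=c["yaw"])
             for c in frame["cuboids"]]
            for frame in frames
        ]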
View our paper and download the development kit.
If you use our dataset, please cite our paper.
Download Dataset