YawDD: Yawning Detection Dataset
YawDD contains two video datasets of drivers with various facial characteristics, for testing algorithms and models mainly for yawning detection, but also for face and mouth recognition and tracking. The videos are taken under real and varying illumination conditions.
In the first dataset, a camera is installed under the car's front mirror. Each participant has three or four videos, and each video contains different mouth conditions such as normal, talking/singing, and yawning. This dataset provides 322 videos of both male and female drivers, with and without glasses/sunglasses, from different ethnicities, in three situations: (1) normal driving (no talking), (2) talking or singing while driving, and (3) yawning while driving.
In the second dataset, the camera is installed on the driver's dashboard. Each participant has a single video containing scenes of normal driving, driving while talking, and driving while yawning. This dataset provides 29 videos of both male and female drivers, with and without glasses/sunglasses, from different ethnicities.
Format and Available Data
The videos are in 640x480, 24-bit true color (RGB), 30 frames per second, AVI format without audio. The total data size is about 5 GB. The available data and their features are listed in Table 1 for dataset 1 and Table 2 for dataset 2, and can be downloaded from the ACM MMSys Dataset webpage (see link below).
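As a quick sanity check when working with the files programmatically, the stated format fixes the size of one raw frame and the uncompressed data rate. A minimal sketch (pure Python, no dependencies; the function names are illustrative, not part of the dataset):

```python
# Raw (uncompressed) data rate implied by the stated video format:
# 640x480 pixels, 24-bit true color (3 bytes per pixel), 30 frames per second.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3   # 24-bit RGB
FPS = 30

def frame_bytes(width=WIDTH, height=HEIGHT, bpp=BYTES_PER_PIXEL):
    """Size of one raw RGB frame in bytes."""
    return width * height * bpp

def raw_rate_mb_per_s(fps=FPS):
    """Uncompressed data rate in megabytes per second."""
    return frame_bytes() * fps / 1e6

print(frame_bytes())        # 921600 bytes per frame
print(raw_rate_mb_per_s())  # 27.648 MB/s uncompressed
```

Since raw frames at this rate would far exceed the roughly 5 GB total, the AVI files are evidently compressed video streams rather than raw frame dumps.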
License and Usage
The videos are for non-commercial and research purposes only! For all other usage, please contact firstname.lastname@example.org. The videos are free to use in non-commercial and/or academic papers/reports that study, design, and test algorithms and methods to detect faces, facial features, yawning, etc. In addition, screenshots of some (not all) videos can be used in such papers. Please check the "Allow Researchers to use picture in their paper" column in the above two tables to see whether you can use a screenshot of a particular video. If that column is "no" for a particular video, you are NOT allowed to use pictures from that video in your papers and publications.
To refer to this dataset in your paper, please use the following citation:
S. Abtahi, M. Omidyeganeh, S. Shirmohammadi, and B. Hariri, "YawDD: A Yawning Detection Dataset", Proc. ACM Multimedia Systems, Singapore, March 19-21, 2014, pp. 24-28.
You might also be interested in reading the following paper, in which we used the dataset to test our yawning detection algorithm designed for embedded smart cameras:
M. Omidyeganeh, S. Shirmohammadi,
S. Abtahi, A. Khurshid, M. Farhan, J. Scharcanski, B. Hariri, D. Laroche, and L.
Martel, “Yawning Detection Using Embedded Smart Cameras”, IEEE Trans. on
Instrumentation and Measurement, Vol. 65, Issue 3, March 2016, pp. 570-582.
Download the Dataset
The dataset can be downloaded from the ACM Multimedia Systems Conference Dataset Archive. Alternatively, you can download it from here.
How to Contribute Videos to the Dataset
We welcome the addition of more videos to the dataset. Please send an email to email@example.com if you would like to add more videos.