
Fall detection Dataset

by Antoine Trapet - 27 February 2013

Automatic detection of falls using artificial vision is a particular case of human activity recognition and can be useful for helping elderly people: according to the Center for Research and Prevention of Injuries report, fall-related injuries among elderly people in the EU-27 are five times as frequent as injuries from other causes, and they considerably reduce mobility and independence. A fall event extracted automatically from the video scene is in itself crucial information that can be used to alert emergency services. In this context, visual information on the corresponding scene is highly important in order to take the "right" decision.

In order to evaluate our automatic fall detection method, we built a dataset in a realistic video-surveillance setting using a single camera. The frame rate is 25 frames/s and the resolution is 320x240 pixels. The video data illustrates the main difficulties of realistic video sequences found in an elderly person's home environment, as well as in a simple office room. Our video sequences contain variable illumination and typical difficulties such as occlusions or cluttered and textured backgrounds. The actors performed various normal daily activities and falls. The dataset contains 191 videos that we annotated, for evaluation purposes, with extra information representing the ground truth of the fall position in the image sequence. Each frame of each video is annotated: the localization of the body is manually defined using bounding boxes. This annotation makes it possible to evaluate the classification features independently of the automatic body detection.
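As a concrete illustration (not part of the dataset distribution), the sketch below shows one way the videos and per-frame bounding-box annotations could be read in Python with OpenCV. The annotation column layout and the file names are assumptions made for the example; the README provided with the downloads describes the authoritative format.

import csv

import cv2  # OpenCV, used only to decode the video files


def load_annotations(path):
    """Parse one per-video annotation file into {frame_index: (x, y, w, h)}.

    Assumed layout for illustration: each data line carries a frame index
    followed by the bounding box of the person; see the dataset README for
    the actual format.
    """
    boxes = {}
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) < 5:
                continue  # skip short lines (e.g. fall start/end markers)
            frame_idx, x, y, w, h = (int(float(v)) for v in row[:5])
            boxes[frame_idx] = (x, y, w, h)
    return boxes


def iter_frames(video_path):
    """Yield (frame_index, frame) pairs from a 320x240, 25 frames/s video."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield idx, frame
        idx += 1
    cap.release()


# Overlay the ground-truth box on every annotated frame. The file names are
# placeholders; use the names found in the downloaded archives.
boxes = load_annotations("video_01_annotation.txt")
for idx, frame in iter_frames("video_01.avi"):
    if idx in boxes:
        x, y, w, h = boxes[idx]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)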

Generally, the few available datasets dedicated to fall detection use the same location for testing and training, and therefore do not allow evaluating the robustness of a method to a change of location between training and testing. In order to evaluate this robustness, we recorded the videos of our dataset in different locations, allowing us to define several evaluation protocols ("Home", "Coffee room", "Office" and "Lecture room").
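To make this idea concrete, the minimal Python sketch below builds leave-one-location-out splits, one way to test robustness to a location change. The location names match the dataset, but the video file names and their grouping are placeholders; the actual protocols are defined by the annotated videos in the archives below.

# Hypothetical mapping from recording location to its video files; replace
# the placeholder names with the files extracted from the archives below.
VIDEOS_BY_LOCATION = {
    "Home": ["home_video_01.avi", "home_video_02.avi"],
    "Coffee room": ["coffee_video_01.avi", "coffee_video_02.avi"],
    "Office": ["office_video_01.avi"],
    "Lecture room": ["lecture_video_01.avi"],
}


def leave_one_location_out(videos_by_location):
    """Yield (test_location, train_videos, test_videos) triples in which the
    test location never contributes a training video, so the classifier is
    always evaluated on an unseen environment."""
    for test_location, test_videos in videos_by_location.items():
        train_videos = [
            video
            for location, videos in videos_by_location.items()
            if location != test_location
            for video in videos
        ]
        yield test_location, train_videos, test_videos


for test_location, train_videos, test_videos in leave_one_location_out(VIDEOS_BY_LOCATION):
    print(f"test on '{test_location}': {len(train_videos)} training videos, "
          f"{len(test_videos)} test videos")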

README - 732 bytes

Office - 1666.2 MB

Lecture Room - 1809.4 MB

Home 1 - 949.7 MB

Coffee room 1 - 1877.8 MB

Coffee room 2 - 1707.7 MB

Home 2 - 1152.4 MB

Office 2 - 67 MB
