2014 Looking At People Challenge
ChaLearn Looking at People @ ECCV2014: Challenge and Workshop on Pose Recovery, Action and Gesture Recognition
NEWS: You can take a look at the pictures of the workshop here.
NEWS: You can check a detailed description of the datasets and results here. If you apply your methodology to these datasets and publish your results, the proper reference is: S. Escalera, X. Baró, J. Gonzàlez, M.A. Bautista, M. Madadi, M. Reyes, V. Ponce-López, H.J. Escalante, J. Shotton, I. Guyon, "ChaLearn Looking at People Challenge 2014: Dataset and Results", ECCV Workshops, 2014.
NEWS: You can download the Call for Papers of the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis @ TPAMI.
August 27th: You can download the Call for Papers of the Special Issue on Human Pose Recovery and Behavior Analysis @ TPAMI.
July 25th: You can find the tentative program here.
June 20th: You can download the labels of the final evaluation dataset from the ChaLearn LAP 2014 data server.
June 10th: You can check the results of the ChaLearn Looking at People 2014 challenge here.
May 5th: Top-ranked participants and the best workshop papers will be invited to submit their work to a Special Issue of the TPAMI journal devoted to the ChaLearn LAP topics.
May 1st: Release of the encrypted final evaluation data and validation labels here. Participants can start training their methods with the whole data set.
April 24th: Beginning of the registration procedure here. All groups must register before May 8th to participate in the final evaluation stage.
February 10th: The challenge has started! The tracks' competition server is running and can be found on the CodaLab website.
January 6th: We are working on the next ChaLearn 2014 challenge... stay tuned, and have a nice, challenging year!
In 2014, ChaLearn organizes three parallel challenge tracks: human pose recovery on RGB data, action/interaction spotting on RGB data, and gesture spotting on RGB-Depth data.
The challenge features three quantitative tracks:
Track 1: Human Pose Recovery: More than 8,000 frames of continuous RGB sequences are recorded and labeled for human pose recovery; the task is to recognize more than 120,000 human limbs of different people. Examples of labeled frames are shown in Fig. 1.
Track 2: Action/Interaction Recognition: 235 performances of 11 action/interaction categories are recorded and manually labeled in continuous RGB sequences of different people performing natural isolated and collaborative actions. Examples of labeled actions are shown in Fig. 1.
Track 3: Gesture Recognition: More than 14,000 gestures are drawn from a vocabulary of 20 Italian sign gesture categories. The emphasis of this third track is on multi-modal automatic learning of a set of 20 gestures performed by several different users, with the aim of performing user-independent continuous gesture spotting. An example of the visual modalities for each frame of the dataset is shown in Fig. 2.
Figure 1: Samples of the RGB Human Pose Recovery and Action/Interaction tracks
Figure 2: Samples of the RGBD gesture spotting track
The top three ranked participants in each track will receive awards and be invited to follow the ECCV workshop submission guidelines, so that a description of their system can be included in the ECCV proceedings, and to submit an extended paper to a Special Issue of the TPAMI journal on gesture recognition.