Overview

The goal of the competition is to evaluate state-of-the-art methods for human identification at a distance (HID). The competition and workshop are endorsed by the IAPR Technical Committee on Biometrics (TC4). The workshop will be held in conjunction with the Asian Conference on Computer Vision (ACCV 2020), Nov 30 – Dec 4, 2020.

The winners will report their methods and results during the workshop, and some HID experts will also be invited to give talks. The workshop will attract researchers working on gait recognition and person identification, and we believe it will be a success and will promote research on HID.

The dataset used for the competition is CASIA-E. It contains 1,014 subjects, with hundreds of video sequences per subject; we randomly selected 10 sequences per subject for the competition. We provide human body silhouettes, normalized to a fixed size (128 x 128) for convenience.

The training set contains the first 500 subjects in the dataset. For the remaining 514 subjects, 25% of the sequences form the validation set and the remaining 75% form the test set.
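
The split described above can be sketched as follows. This is only an illustration of the stated proportions, not the organizers' actual script; the function name and data layout (a dict mapping subject ID to its sequence names) are assumptions.

```python
def split_casia_e(subject_ids, sequences_per_subject):
    """Sketch of the competition split: first 500 subjects -> training;
    for each of the remaining 514 subjects, ~25% of its sequences go to
    validation and the rest to test.

    subject_ids: sorted list of all 1014 subject IDs.
    sequences_per_subject: dict mapping subject ID -> list of its
    (roughly 10) sequence names.
    """
    train = {sid: sequences_per_subject[sid] for sid in subject_ids[:500]}
    val, test = {}, {}
    for sid in subject_ids[500:]:        # the last 514 subjects
        seqs = list(sequences_per_subject[sid])
        k = len(seqs) // 4               # ~25% -> validation
        val[sid] = seqs[:k]
        test[sid] = seqs[k:]             # remaining ~75% -> test
    return train, val, test
```

With 10 sequences per subject this puts 2 sequences of each held-out subject into validation and 8 into test.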

How to join this competition?

The competition is open to anyone interested in biometric technology.

It is hosted on CodaLab (HID 2020), where you can submit results and receive timely feedback.

Important Dates

Deadline of first phase: October 15, 2020
Deadline of second phase: October 25, 2020
Competition results announcement: October 30, 2020
Method description submission: November 10, 2020
Workshop: Half day on December 3, 2020

Dataset

How to get the data set?

The dataset can be downloaded from any of the following:

  1. OneDrive
  2. Google Drive
  3. Baidu Drive (password: 5pu7)

The data were produced as follows: cameras at different heights and viewing angles recorded video clips of people walking; human silhouettes were then extracted with human body detection and segmentation algorithms, and finally the silhouette images were normalized to a uniform size.
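
The normalization step can be sketched in pure Python as below. The exact procedure used to produce CASIA-E is not published here, so this is an assumed, minimal version: crop the tight bounding box of the foreground and resample it to 128 x 128 with nearest-neighbour sampling.

```python
def normalize_silhouette(mask, size=128):
    """Minimal sketch of silhouette normalization (assumed details):
    crop the tight bounding box of the foreground pixels, then resize
    the crop to size x size using nearest-neighbour sampling.

    mask: 2-D list of 0/1 values (1 = person pixel).
    """
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows or not cols:
        return [[0] * size for _ in range(size)]   # empty silhouette
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    h, w = bottom - top + 1, right - left + 1
    out = []
    for y in range(size):
        sy = top + y * h // size                   # nearest source row
        out.append([mask[sy][left + x * w // size] for x in range(size)])
    return out
```

In practice, gait pipelines often preserve the aspect ratio and center the subject horizontally instead of stretching the crop; this sketch only illustrates the crop-and-resize idea.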

In this competition, you are asked to predict the subject ID of each walking video. The training set is in the train/ folder, with the corresponding subject IDs in train.csv. Evaluation follows the gallery-probe protocol commonly used in face recognition, so the test set consists of two parts, a gallery set and a probe set, found in test_gallery/ and test_probe/ respectively. The subjects in the training set and the test set are completely disjoint.

How to identify humans in the test process?

Human body identification

Each subject in the test set has one video in the gallery set, which serves as a template. You are asked to predict the subject ID of each video in the probe set based on the gallery data. A common identification method is to compute the L2 distance between probe and gallery features; of course, you may use any other similarity measure.
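
The L2-based matching described above amounts to nearest-neighbour search over gallery templates. The sketch below assumes you already have a feature extractor (e.g. a gait CNN) that maps each video to an embedding; the function and variable names are illustrative.

```python
import math

def identify(probe_feature, gallery):
    """Assign the probe video the ID of the nearest gallery template
    under L2 distance.

    probe_feature: embedding of one probe video (list of floats).
    gallery: dict mapping subject ID -> template embedding.
    """
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda sid: l2(probe_feature, gallery[sid]))
```

Since every test subject has exactly one gallery video, this is rank-1 identification; swapping `l2` for cosine similarity (and `min` for `max`) is a common alternative.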

File descriptions

  • train.csv – training set labels; maps each video in the training set to its subject ID
  • train/ – training data, organized as ./train/subject_ID/video_ID/image_data
  • test_probe/ – probe data, organized as ./test_probe/video_ID/image_data
  • test_gallery/ – gallery data, organized as ./test_gallery/subject_ID/video_ID/image_data

  • SampleSubmission.zip – a sample submission. Your submission.csv must be placed in a zip archive and keep that file name. It contains two columns, videoID and subjectID; every video in ./test_probe/ requires a predicted subject ID, and the results are filled into this file. Note that subjectID must be in int format.
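
A submission archive in the format described above can be produced with the standard library. This is a sketch under the stated format (videoID and subjectID columns, submission.csv inside a zip); the function name and the predictions layout are assumptions.

```python
import csv
import io
import zipfile

def write_submission(predictions, zip_path="submission.zip"):
    """Write predictions as submission.csv inside a zip archive.

    predictions: dict mapping probe videoID -> integer subjectID.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["videoID", "subjectID"])       # required columns
    for video_id, subject_id in predictions.items():
        writer.writerow([video_id, int(subject_id)])  # subjectID as int
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("submission.csv", buf.getvalue())
```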

Competition sample code

The sample code can be found at this GitHub project. It achieves about 20% accuracy. The model structure follows the paper below; please cite it if it helps your research:

@article{zhang2019comprehensive,
  title={A comprehensive study on gait biometrics using a joint CNN-based method},
  author={Zhang, Yuqi and Huang, Yongzhen and Wang, Liang and Yu, Shiqi},
  journal={Pattern Recognition},
  volume={93},
  pages={228--236},
  year={2019},
  publisher={Elsevier}
}

Committee

Advisory Committee

  • Prof. Tieniu Tan, Institute of Automation, Chinese Academy of Sciences, China
  • Prof. Yasushi Yagi, Osaka University, Japan
  • Prof. Mark Nixon, University of Southampton, UK

Organizers

  • Prof. Shiqi Yu, Southern University of Science and Technology, China
  • Prof. Liang Wang, Institute of Automation, Chinese Academy of Sciences, China
  • Prof. Yongzhen Huang, Institute of Automation, Chinese Academy of Sciences, China; Watrix technology co. ltd, China
  • Prof. Yasushi Makihara, Osaka University, Japan
  • Prof. Nicolás Guil, University of Málaga, Spain
  • Prof. Manuel J. Marín-Jiménez, University of Córdoba, Spain
  • Dr. Edel B. García Reyes, Shenzhen Institute of Artificial Intelligence and Robotics for Society, China
  • Prof. Feng Zheng, Southern University of Science and Technology, China
  • Prof. Md. Atiqur Rahman Ahad, University of Dhaka, Bangladesh; Osaka University, Japan

FAQ

Q: Can I use data outside of the training set to train my model?
A: Yes, you can. But you must describe what data you used and how you used it in your method description.

Q: How many members can my team have?
A: We do not limit the number of members in a team.

Q: Who cannot participate in the competition?
A: Members of the organizers' research groups cannot participate, nor can employees and interns of the sponsor company.

Q: Why are there empty folders in the dataset?
A: Since the dataset was generated randomly, it contains some empty folders. You can refer to the sample code to skip them.
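
Skipping the empty folders can be done with a simple directory walk. This is a sketch rather than the referenced sample code; the function name and the assumption that each video folder is a leaf directory of image frames are mine.

```python
import os

def list_nonempty_videos(root):
    """Yield video folders under root that actually contain frame files,
    skipping the empty folders mentioned above."""
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirnames and filenames:   # leaf folder with images
            yield dirpath
```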

Q: How to contact us?
A: For any request or information, please send an email to: Prof. Shiqi Yu, yusq@sustech.edu.cn

Leaderboard

The First Prize

Team ID: BeibeiLin

Method description: pdf

Team members:

GitHub Link: github.com/bb12346/GaitGL

The Second Prize

Team ID: brl

Method description: pdf

Team members:

The Third Prize

Team ID: panfengzhang

Method description: pdf

Team members:

The Fourth Prize

Team ID: ctsu-ca

Method description: pdf

Team members:

GitHub Link: github.com/panjicaiz/HID2020_GaitSet

Appendix: Top 10 best teams

* One of the members of Team jilongwang is from CRIPAC, Institute of Automation, Chinese Academy of Sciences, the provider of the CASIA-E dataset. Due to this conflict of interest, that team's results were included for evaluation only and were excluded from the competition ranking.

Rank  Team ID        Accuracy
*     jilongwang     66.7%
1     BeibeiLin      63.0%
2     brl            54.1%
3     panfengzhang   53.4%
4     ctsu-ca        51.5%
5     ywang26        50.3%
6     Wbz            49.3%
7     HeHaodi        49.2%
8     kouen93        47.9%
9     recognizer     43.6%
10    color          33.1%

The complete competition results can be found at https://competitions.codalab.org/competitions/26085#results


HID 2020 Program

Date: December 3, 2020

Time: 09:30AM-12:20PM, Japan Standard Time, UTC+9, (08:30AM-11:20AM, China Standard Time, UTC+8)

This workshop will be held online. The address of the meeting room:

https://meet.google.com/vjb-xzfh-gsc
9:30-9:45    Welcome video from organizers. Video [YouTube][Bilibili]
9:45-9:50    Welcome remarks and introduction to the competition. Speaker: Prof. Shiqi Yu. Video [YouTube][Bilibili]
9:50-10:10   Invited talk from the sponsor, Watrix Co. Ltd. Speaker: Prof. Yongzhen Huang. Video (15m) + Live Q&A (5m) [Video]
10:10-10:50  Keynote: Identifying people by their gait: a summary of progress. Speaker: Prof. Mark Nixon. Video (30m) + Live Q&A (10m) [Slide][YouTube][Bilibili]

Online meeting with Prof. Mark Nixon

10:50-11:00  Awards ceremony. Live
11:00-11:20  4th Prize talk: Optimization of GaitSet for Gait Recognition. Team members: Jicai Pan, Hao Sun, Yi Wu, Shi Yin, Shangfei Wang (University of Science and Technology of China, China). Speaker: Mr. Hao Sun. Video (15m) + Live Q&A (5m) [Slide][YouTube][Bilibili]
11:20-11:40  3rd Prize talk: Multi-grid Spatial and Temporal Feature Fusion for Human Identification at a Distance. Team members: Panfeng Zhang, Zhiqiang Song, Xianglei Xing (Harbin Engineering University, China). Speaker: Mr. Zhiqiang Song. Video (15m) + Live Q&A (5m) [Slide][YouTube][Bilibili]
11:40-12:00  2nd Prize talk: Temporal Proposal Module for Human Identification at a Distance. Team members: Qijun Zhao, Tao Ding, Yuchao Yang, Shuiwang Li (Sichuan University, China). Speaker: Mr. Tao Ding. Video (15m) + Live Q&A (5m) [Slide][YouTube][Bilibili]
12:00-12:20  1st Prize talk: Learning Effective Representations from Global and Local Features for Cross-View Gait Recognition. Team members: Beibei Lin (1), Shunli Zhang (1), Xin Yu (2), Chuihan Kong (1), Chenwei Wan (1) (1: Beijing Jiaotong University, China; 2: University of Technology Sydney, Australia). Speakers: Mr. Chenwei Wan and Mr. Beibei Lin. Video (15m) + Live Q&A (5m) [Slide][YouTube][Bilibili]

Top 4 Winner
Dr. Chunshui Cao (left) and Hao Sun (right)

Top 2 Winner
Dr. Chunshui Cao (left) and Tao Ding (right)
Top 3 Winner
Panfeng Zhang (left), Dr. Chunshui Cao (middle) and Zhiqiang Song (right)
Top 1 Winner
Dr. Chunshui Cao (left) and Beibei Lin (right)
Group photo of all participants.

Acknowledgments

We would like to thank the Institute of Automation, Chinese Academy of Sciences for providing the dataset CASIA-E for the competition.