You must register and agree to the data licence. If you already have access to the HumanEva datasets, you can use your existing login and password to download the baseline evaluation code.
You must register and agree to the data licence.
To encourage a common venue for reporting results on the data, we allow full access to the data to researchers who plan to participate in the NIPS 2006 Workshop and/or in the subsequent special issue of the International Journal of Computer Vision (IJCV) devoted to the subject. All uses beyond these venues are currently disallowed. Once the special issue of the IJCV is in press, we will allow full and unrestricted use of the data by the research community.
"I" stands for the first incarnation of this dataset. Our hope is that this dataset will evolve in the future to be more diverse.
No. You are free to use your own data if you like. Learning from the supplied data, however, will likely provide viable priors on the motion and appearance of the subjects in the dataset.
We realize that there is a lot of test data, and current state-of-the-art algorithms are still too slow for us to require everyone to run on all of it. While we encourage groups to run on as much data as possible, we are currently compiling a priority list for the test data. This will ensure that all participants are able to run on at least the same subset of the test data.
| | HumanEva-I | HumanEva-II |
|---|---|---|
| Number of video cameras | 7 | 4 |
| Types of video cameras | 3 color + 4 grayscale | 4 color |
| Number of motion capture cameras | 6 | 8 |
| Types of data | Training, Validation, Testing | Testing |
The uncompressed data we collected totaled ~500 GB. Making that much data available over the web, given current bandwidth limitations, seemed unreasonable. The XviD codec provided us with reasonable compression without any visible artifacts, and we ensured that the options we used for compression resulted in the highest-quality video possible. The XviD codec is also freely available and has libraries for C/C++.
Calibration is done in millimetres (mm), with the origin on the floor roughly at the center of the capture space. The 3D pose error is computed in millimetres as well. The 2D pose error is computed in pixels for convenience.
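The error computation above amounts to a mean per-joint Euclidean distance between an estimated pose and the ground truth. A minimal sketch in pure Python, assuming each pose is a list of per-joint coordinate tuples (in mm for 3D, in pixels for 2D); the function name and data layout here are illustrative and not part of the official evaluation code:

```python
import math

def mean_joint_error(pose_a, pose_b):
    """Mean Euclidean distance between corresponding joints.

    Works for 3D poses (tuples of x, y, z in mm) and for 2D poses
    (tuples of x, y in pixels); the unit of the result matches the
    unit of the input coordinates.
    """
    if len(pose_a) != len(pose_b):
        raise ValueError("poses must have the same number of joints")
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

# Example: a 3-joint ground-truth pose and a prediction offset by 10 mm in x
gt   = [(0.0, 0.0, 1000.0), (100.0, 0.0, 1000.0), (0.0, 200.0, 900.0)]
pred = [(10.0, 0.0, 1000.0), (110.0, 0.0, 1000.0), (10.0, 200.0, 900.0)]
print(mean_joint_error(gt, pred))  # → 10.0
```

Because the metric only depends on the coordinate units, the same routine serves for the 3D (mm) and 2D (pixel) cases.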