VSB100: A Unified Video Segmentation Benchmark

Video segmentation research is currently limited by the lack of a benchmark dataset that covers the large variety of subproblems appearing in video segmentation and that is large enough to avoid overfitting. Consequently, there is little analysis of video segmentation that generalizes across subtasks, and it is not yet clear whether and how video segmentation should leverage information from still frames, as previously studied in image segmentation, alongside video-specific cues such as temporal volume, motion and occlusion. In this work we provide such an analysis based on annotations of a large video dataset, in which each video is manually segmented by multiple annotators. Moreover, we introduce a new volume-based metric that accounts for the important aspect of temporal consistency, handles segmentation hierarchies, and reflects the tradeoff between over-segmentation and segmentation accuracy.
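
As a rough illustration of the volume-based idea (this is not the exact VPR metric defined in the paper), the following Matlab sketch scores a candidate segmentation by the best spatio-temporal intersection-over-union achieved for each ground-truth volume, weighted by volume size; the function and variable names are illustrative only.

% Illustrative sketch, NOT the benchmark metric: maximal spatio-temporal
% overlap between ground-truth volumes and segmentation volumes.
% gt, seg: [height x width x frames] label volumes over a video.
function score = volume_overlap_sketch(gt, seg)
    gtLabels  = unique(gt(:));
    segLabels = unique(seg(:));
    score = 0;
    for i = 1:numel(gtLabels)
        gtMask = (gt == gtLabels(i));
        best = 0;
        for j = 1:numel(segLabels)
            segMask = (seg == segLabels(j));
            inter = nnz(gtMask & segMask);
            uni   = nnz(gtMask | segMask);
            best  = max(best, inter / uni);      % best IoU for this GT volume
        end
        score = score + best * nnz(gtMask) / numel(gt);  % size-weighted
    end
end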


Paper, Supplementary material

(the paper provided here differs slightly from the published copy, only in the BPR values, due to the boundary thinning correction noted in the log below)


Benchmark code

Matlab source code (ver. 1.3)


Train annotations

*General benchmark, dense annotations, full resolution

*General benchmark, dense annotations, half resolution

General benchmark, full resolution

General benchmark, half resolution

Motion segmentation benchmark, full resolution

Motion segmentation benchmark, half resolution

Non-rigid motion segmentation benchmark, full resolution

Non-rigid motion segmentation benchmark, half resolution

Camera motion segmentation benchmark, full resolution

Camera motion segmentation benchmark, half resolution

*Dense annotations concern the temporal dimension: they provide a finer sampling (up to every frame) of the frames around the central one.


Test annotations

General benchmark, full resolution

General benchmark, half resolution

Motion segmentation benchmark, full resolution

Motion segmentation benchmark, half resolution

Non-rigid motion segmentation benchmark, full resolution

Non-rigid motion segmentation benchmark, half resolution

Camera motion segmentation benchmark, full resolution

Camera motion segmentation benchmark, half resolution


Video sequences (Berkeley video segmentation dataset; Sundberg et al., CVPR'11)

Link to files at MPI

Train

Test


Video sequences (Berkeley video segmentation dataset; Sundberg et al., CVPR'11)

Link to files at Berkeley

Part 1

Part 2



Activity on VSB100 and bug reports

We appreciate your interest in our work. To keep the benchmark a lively comparison arena, we maintain below a brief log of activities, bug reports and critiques. Please email us with any comments.

06.08.14 parallelized benchmark code (code ver. 1.3): the estimation of the BPR and VPR benchmark metrics, as well as of the length statistics, is now parallelized. The for loops in the functions "Benchmarksegmevalparallel" and "Benchmarkevalstatsparallel" use Matlab's "parfor" command. Once multiple Matlab workers are initiated (cf. the Matlab documentation for parfor), every video is benchmarked in parallel.
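
A minimal usage sketch, assuming Matlab R2013b or later with the Parallel Computing Toolbox (the exact call to the benchmark scripts is not shown here):

% Initiate Matlab workers before running the evaluation; the benchmark
% functions then distribute the videos across workers via their parfor loops.
if isempty(gcp('nocreate'))   % R2013b and later
    parpool(4);               % e.g. 4 workers; adjust to your machine
end
% On older releases: matlabpool open 4
% Then run the ver. 1.3 benchmark as usual; each video is processed by a
% different worker inside Benchmarksegmevalparallel and Benchmarkevalstatsparallel.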

01.08.14 temporally denser training annotations: we have collected additional annotations, sampled more densely around the central frame, in relation to our GCPR 2014 work "Learning Must-Link Constraints for Video Segmentation based on Spectral Clustering" [pdf]. (Please consider also citing this work if you use them.) The new annotations concern only the general task.

28.02.14 video naming and subfolders (code ver. 1.2): the benchmark annotations and the source code of version 1.0 used a simplified naming scheme for the BVDS videos. To reduce confusion, we have adopted the naming of the original BVDS. Note: the new code is compatible with the previous naming and video structure, so you can keep using previously downloaded labels with the new code. We are available to provide support.

15.02.14 we have corrected a naming issue (frame number) for the training sequence "galapagos", which applied only to the motion and non-rigid motion labels.

13.02.14 bug report by Pablo Arbelaez: the ultrametric contour maps (UCM) computed from the annotations (used only in the BPR) were not thinned. This resulted in a slower boundary metric computation and slightly different BPR measures. We have corrected the issue both in the paper and in the distributed code.
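
For readers reproducing the fix, a thinning step of this kind can be written in Matlab with morphological thinning; this is a hedged sketch and not necessarily the exact call used in the distributed code:

% Illustrative sketch of thinning a UCM before the BPR computation.
% ucm: ultrametric contour map with boundary strengths in [0,1].
thresh     = 0.5;                               % example hierarchy threshold
boundaries = ucm > thresh;                      % binary boundary map
thinned    = bwmorph(boundaries, 'thin', Inf);  % one-pixel-wide boundaries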


Project page last updated on August 6, 2014