The visual object tracking VOT2013 challenge results

Matej Kristan, Roman Pflugfelder, Aleš Leonardis, Jiri Matas, Fatih Porikli, Luka Čehovin, Georg Nebehay, Gustavo Fernandez, Tomáš Vojíř, Adam Gatt, Ahmad Khajenezhad, Ahmed Salahledin, Ali Soltani-Farani, Ali Zarezade, Alfredo Petrosino, Anthony Milton, Behzad Bozorgtabar, Bo Li, Chee Seng Chan, Cherkeng Heng, Dale Ward, David Kearney, Dorothy Monekosso, Hakki Can Karaimer, Hamid R. Rabiee, Jianke Zhu, Jin Gao, Jingjing Xiao, Junge Zhang, Junliang Xing, Kaiqi Huang, Karel Lebeda, Lijun Cao, Mario Edoardo Maresca, Mei Kuan Lim, Mohamed ELHelw, Michael Felsberg, Paolo Remagnino, Richard Bowden, Roland Goecke, Rustam Stolkin, Samantha Yue Ying Lim, Sara Maher, Sebastien Poullot, Sebastien Wong, Shin'Ichi Satoh, Weihua Chen, Weiming Hu, Xiaoqin Zhang, Yang Li, Zhiheng Niu

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    206 Citations (Scopus)

    Abstract

    Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow developments in the field. One reason is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts at tracker benchmarking, the dataset is labeled per-frame with visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).

    Original language: English
    Title of host publication: Proceedings - 2013 IEEE International Conference on Computer Vision Workshops, ICCVW 2013
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 98-111
    Number of pages: 14
    ISBN (Print): 9781479930227
    DOIs
    Publication status: Published - 2013
    Publication type: A4 Article in conference proceedings
    Event: 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013 - Sydney, NSW, Australia
    Duration: 1 Dec 2013 – 8 Dec 2013

    Conference

    Conference: 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013
    Country/Territory: Australia
    City: Sydney, NSW
    Period: 1/12/13 – 8/12/13

    Keywords

    • Visual object tracking challenge
    • VOT2013

    ASJC Scopus subject areas

    • Software
    • Computer Vision and Pattern Recognition

