Please use this identifier to cite or link to this item: https://open.uns.ac.rs/handle/123456789/5383
Title: Unsupervised tube extraction using transductive learning and dense trajectories
Authors: Puscas M.
Sangineto E.
Ćulibrk, Dubravko 
Sebe N.
Issue Date: 17-Feb-2015
Journal: Proceedings of the IEEE International Conference on Computer Vision
Abstract: We address the problem of automatic extraction of foreground objects from videos. The goal is to provide a method for unsupervised collection of samples which can be further used for object detection training without any human intervention. We use the well known Selective Search approach to produce an initial still-image based segmentation of the video frames. This initial set of proposals is pruned and temporally extended using optical flow and transductive learning. Specifically, we propose to use Dense Trajectories in order to robustly match and track candidate boxes over different frames. The obtained box tracks are used to collect samples for unsupervised training of track-specific detectors. Finally, the detectors are run on the videos to extract the final tubes. The combination of appearance-based static "objectness" (Selective Search), motion information (Dense Trajectories) and transductive learning (detectors are forced to "overfit" on the unsupervised data used for training) makes the proposed approach extremely robust. We outperform state-of-the-art systems by a large margin on common benchmarks used for tube proposal evaluation.
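The pipeline described in the abstract can be sketched at a high level as follows. This is a minimal illustrative outline only, not the authors' implementation: every function name (`selective_search_proposals`, `match_boxes_by_trajectories`, `extract_tubes`) is a hypothetical placeholder, and the Selective Search, Dense Trajectory matching, and per-track detector stages are reduced to stubs.

```python
# Hypothetical sketch of the tube-extraction pipeline from the abstract.
# All names are illustrative placeholders; the real system uses Selective
# Search proposals, Dense Trajectory matching, and track-specific detectors.

def selective_search_proposals(frame):
    # Stand-in for Selective Search: return candidate boxes (x, y, w, h)
    # for one frame. Here: two fixed boxes derived from the frame size.
    h, w = frame
    return [(0, 0, w // 2, h // 2), (w // 4, h // 4, w // 2, h // 2)]

def match_boxes_by_trajectories(boxes_prev, boxes_cur):
    # Stand-in for Dense-Trajectory matching: link each box in frame t
    # to the box in frame t+1 that shares the most trajectory points.
    # As a placeholder, boxes are simply paired by index.
    return list(zip(boxes_prev, boxes_cur))

def extract_tubes(frames):
    # Chain per-frame proposals into box tracks ("tubes"). In the paper,
    # these tracks would then seed unsupervised, track-specific detector
    # training; that transductive stage is omitted from this sketch.
    proposals = [selective_search_proposals(f) for f in frames]
    tubes = [[box] for box in proposals[0]]
    for t in range(1, len(frames)):
        links = match_boxes_by_trajectories(proposals[t - 1], proposals[t])
        for tube, (_, next_box) in zip(tubes, links):
            tube.append(next_box)
    return tubes

frames = [(240, 320)] * 3  # three toy frames, (height, width)
tubes = extract_tubes(frames)
print(len(tubes), len(tubes[0]))  # two tubes, each spanning three frames
```

The sketch only shows how per-frame proposals become temporal tracks; the pruning by optical flow and the final detector pass over the video are intentionally left out.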
URI: https://open.uns.ac.rs/handle/123456789/5383
ISBN: 978-1-4673-8391-2
ISSN: 1550-5499
DOI: 10.1109/ICCV.2015.193
Appears in Collections:FTN Publikacije/Publications


SCOPUS™ Citations: 20 (checked on May 6, 2023)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.