Please use this identifier to cite or link to this item: https://open.uns.ac.rs/handle/123456789/13416
DC Field | Value | Language
dc.contributor.author | Ćulibrk, Dubravko | en
dc.contributor.author | Sladojević, Srđan | en
dc.contributor.author | Riche, N. | en
dc.contributor.author | Mancas, M. | en
dc.contributor.author | Crnojević, V. | en
dc.date.accessioned | 2020-03-03T14:52:16Z | -
dc.date.available | 2020-03-03T14:52:16Z | -
dc.date.issued | 2012-06-12 | en
dc.identifier.isbn | 9780819491282 | en
dc.identifier.issn | 0277786X | en
dc.identifier.uri | https://open.uns.ac.rs/handle/123456789/13416 | -
dc.description.abstract | Visual attention deployment mechanisms allow the Human Visual System to cope with an overwhelming amount of visual data by dedicating most of its processing power to objects of interest. The ability to automatically detect the areas of a visual scene that humans will attend to is of interest for a large number of applications, from video coding and video quality assessment to scene understanding. Because of this, visual saliency (bottom-up attention) models have generated significant scientific interest in recent years. Most recent work in this area concerns dynamic models of attention that handle moving stimuli (videos) instead of the traditionally used still images. Visual saliency models are usually evaluated against ground-truth eye-tracking data collected from human subjects. However, precious few recently published approaches try to learn saliency from eye-tracking data and, to the best of our knowledge, none try to do so where dynamic saliency is concerned. This paper attempts to fill that gap and describes an approach to data-driven dynamic saliency model learning. A framework is proposed that enables the use of eye-tracking data to train an arbitrary machine learning algorithm, using arbitrary features derived from the scene. We evaluate the methodology using features from a state-of-the-art dynamic saliency model and show how simple machine learning algorithms can be trained to distinguish between visually salient and non-salient parts of the scene. © 2012 SPIE. | en
dc.relation.ispartof | Proceedings of SPIE - The International Society for Optical Engineering | en
dc.title | Data-driven approach to dynamic visual attention modelling | en
dc.type | Conference Paper | en
dc.identifier.doi | 10.1117/12.923559 | en
dc.identifier.scopus | 2-s2.0-84861932244 | en
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/84861932244 | en
dc.relation.volume | 8436 | en
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
crisitem.author.dept | Departman za industrijsko inženjerstvo i menadžment | -
crisitem.author.dept | Departman za industrijsko inženjerstvo i menadžment | -
crisitem.author.parentorg | Fakultet tehničkih nauka | -
crisitem.author.parentorg | Fakultet tehničkih nauka | -
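
The abstract above describes a framework in which eye-tracking data is used to train an arbitrary classifier, over arbitrary scene features, to separate visually salient from non-salient regions. The sketch below is a minimal illustration of that idea, not the authors' implementation: the synthetic feature maps, their dimensions, and the choice of a shallow decision tree are all placeholder assumptions, whereas the paper derives its features from a state-of-the-art dynamic saliency model and its labels from recorded human fixations.

```python
# Minimal sketch of the data-driven saliency-learning framework from the
# abstract. Everything here is illustrative: `feature_maps` stands in for
# per-pixel features derived from a dynamic saliency model, and
# `fixation_map` for a binarized eye-tracking ground-truth map.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: F feature maps over an H x W frame (in practice these
# would come from the video, e.g. motion or contrast conspicuity maps).
H, W, F = 60, 80, 5
feature_maps = rng.random((H, W, F))

# Placeholder ground truth: 1 where human fixations landed, 0 elsewhere.
fixation_map = (rng.random((H, W)) < 0.1).astype(int)

# Each pixel becomes one training sample with F features and a binary label.
X = feature_maps.reshape(-1, F)
y = fixation_map.reshape(-1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Any learner could be plugged in here; a shallow tree keeps the sketch simple.
clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(X_train, y_train)

# Score held-out pixels; AUC is a common saliency-evaluation metric.
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```

Because the framework treats both the feature extractor and the learner as interchangeable components, swapping in real saliency-model features and real fixation maps only changes how `X` and `y` are built; the training and evaluation steps stay the same.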
Appears in Collections: FTN Publikacije/Publications