Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/31322
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jiang, Fengling | en_UK
dc.contributor.author | Kong, Bin | en_UK
dc.contributor.author | Li, Jingpeng | en_UK
dc.contributor.author | Dashtipour, Kia | en_UK
dc.contributor.author | Gogate, Mandar | en_UK
dc.date.accessioned | 2020-06-23T00:03:38Z | -
dc.date.available | 2020-06-23T00:03:38Z | -
dc.date.issued | 2021-01 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/31322 | -
dc.description.abstract | Saliency detection aims to automatically highlight the most important area in an image. Traditional saliency detection methods based on absorbing Markov chains take only boundary nodes into account and often produce incorrect results when salient objects appear at the image boundaries. To address this limitation and enhance detection performance, this paper proposes a novel task-independent saliency detection method based on bidirectional absorbing Markov chains that jointly exploits boundary information together with foreground and background prior cues. More specifically, the input image is first segmented into a number of superpixels, and the nodes on the four boundaries (duplicated as virtual nodes) are selected. Subsequently, the absorption time of a random walk from each transient node to the absorbing states is calculated to obtain the foreground possibility. Simultaneously, the foreground prior (serving as the virtual absorbing nodes) is used to calculate the absorption time and obtain the background possibility. The two results are then fused into a combined saliency map, which is further optimized using a cost function. Finally, the superpixel-level saliency results are refined by a regularized random-walk ranking model at multiple scales. Comparative experimental results on four benchmark datasets show that the proposed method outperforms state-of-the-art methods reported in the literature. The experiments also show that the method is efficient and applicable to bottom-up image saliency detection and other visual processing tasks. [An illustrative sketch of the absorption-time computation follows this metadata table.] | en_UK
dc.language.iso | en | en_UK
dc.publisher | Springer Science and Business Media LLC | en_UK
dc.relation | Jiang F, Kong B, Li J, Dashtipour K & Gogate M (2021) Robust Visual Saliency Optimization Based on Bidirectional Markov Chains. Cognitive Computation, 13 (1), pp. 69-80. https://doi.org/10.1007/s12559-020-09724-6 | en_UK
dc.rights | This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. This is a post-peer-review, pre-copyedit version of an article published in Cognitive Computation. The final authenticated version is available online at: https://doi.org/10.1007/s12559-020-09724-6 | en_UK
dc.rights.uri | https://storre.stir.ac.uk/STORREEndUserLicence.pdf | en_UK
dc.subject | Saliency detection | en_UK
dc.subject | Bidirectional absorbing | en_UK
dc.subject | Markov chain | en_UK
dc.subject | Background and foreground possibility | en_UK
dc.title | Robust Visual Saliency Optimization Based on Bidirectional Markov Chains | en_UK
dc.type | Journal Article | en_UK
dc.rights.embargodate | 2021-05-30 | en_UK
dc.rights.embargoreason | [FnalSubmission.pdf] Publisher requires embargo of 12 months after formal publication. | en_UK
dc.identifier.doi | 10.1007/s12559-020-09724-6 | en_UK
dc.citation.jtitle | Cognitive Computation | en_UK
dc.citation.issn | 1866-9964 | en_UK
dc.citation.issn | 1866-9956 | en_UK
dc.citation.volume | 13 | en_UK
dc.citation.issue | 1 | en_UK
dc.citation.spage | 69 | en_UK
dc.citation.epage | 80 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.citation.peerreviewed | Refereed | en_UK
dc.type.status | AM - Accepted Manuscript | en_UK
dc.contributor.funder | the Fundamental Research Funds for the Central Universities of China | en_UK
dc.contributor.funder | Universities Joint Key Laboratory of Photoelectric Detection Science and Technology in Anhui Province | en_UK
dc.contributor.funder | the Pilot Project of Chinese Academy of Sciences | en_UK
dc.contributor.funder | National Natural Science Foundation of China | en_UK
dc.author.email | jingpeng.li@stir.ac.uk | en_UK
dc.citation.date | 29/05/2020 | en_UK
dc.contributor.affiliation | Chinese Academy of Sciences | en_UK
dc.contributor.affiliation | Chinese Academy of Sciences | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.contributor.affiliation | Edinburgh Napier University | en_UK
dc.identifier.isi | WOS:000536324300001 | en_UK
dc.identifier.scopusid | 2-s2.0-85085893399 | en_UK
dc.identifier.wtid | 1634183 | en_UK
dc.contributor.orcid | 0000-0002-6758-0084 | en_UK
dc.contributor.orcid | 0000-0001-8651-5117 | en_UK
dc.date.accepted | 2020-04-04 | en_UK
dcterms.dateAccepted | 2020-04-04 | en_UK
dc.date.filedepositdate | 2020-06-22 | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.type | Journal Article/Review | en_UK
rioxxterms.version | AM | en_UK
local.rioxx.author | Jiang, Fengling| | en_UK
local.rioxx.author | Kong, Bin| | en_UK
local.rioxx.author | Li, Jingpeng|0000-0002-6758-0084 | en_UK
local.rioxx.author | Dashtipour, Kia|0000-0001-8651-5117 | en_UK
local.rioxx.author | Gogate, Mandar| | en_UK
local.rioxx.project | ACAIM190302|the Fundamental Research Funds for the Central Universities of China| | en_UK
local.rioxx.project | 2019GDTCZD02|Universities Joint Key Laboratory of Photoelectric Detection Science and Technology in Anhui Province| | en_UK
local.rioxx.project | XDA08040109|the Pilot Project of Chinese Academy of Sciences| | en_UK
local.rioxx.project | 913203002|National Natural Science Foundation of China| | en_UK
local.rioxx.freetoreaddate | 2021-05-30 | en_UK
local.rioxx.licence | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||2021-05-29 | en_UK
local.rioxx.licence | https://storre.stir.ac.uk/STORREEndUserLicence.pdf|2021-05-30| | en_UK
local.rioxx.filename | FnalSubmission.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 1866-9964 | en_UK
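
Illustrative note on the absorption-time step: the computation described in the abstract above follows standard absorbing Markov chain theory, where for the transient-to-transient transition block Q the expected number of steps before absorption is y = (I - Q)^{-1} 1. The Python sketch below is a minimal illustration of that identity only, not the authors' implementation; the function name absorption_times, the affinity-based construction of Q, and the toy inputs are assumptions made for this example.

import numpy as np

def absorption_times(W_tt, W_ta):
    """Expected steps before absorption for each transient node.

    Illustrative only (not the paper's code). W_tt holds affinities
    among the t transient (superpixel) nodes, shape (t, t); W_ta holds
    affinities from transient nodes to the absorbing nodes (virtual
    boundary or foreground-prior nodes), shape (t, a).
    """
    # Row-normalise so each transient node's outgoing probabilities sum
    # to 1; Q is then the transient-to-transient transition block.
    d = W_tt.sum(axis=1) + W_ta.sum(axis=1)
    Q = W_tt / d[:, None]
    # Fundamental-matrix identity y = (I - Q)^{-1} @ 1, solved directly
    # as the linear system (I - Q) y = 1.
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

# Toy usage: 4 superpixel nodes, 2 virtual absorbing nodes.
rng = np.random.default_rng(0)
W_tt = rng.random((4, 4))
W_tt = (W_tt + W_tt.T) / 2        # symmetric affinities
np.fill_diagonal(W_tt, 0.0)       # no self-loops
W_ta = rng.random((4, 2))
foreground_possibility = absorption_times(W_tt, W_ta)

In the bidirectional scheme the abstract describes, this would run twice: once with boundary superpixels as the absorbing nodes (a long absorption time suggests foreground) and once with foreground-prior nodes as the absorbing nodes (a long absorption time suggests background). The fusion and cost-function optimization steps are not sketched here.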
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format
FnalSubmission.pdf | Fulltext - Accepted Version | 1.96 MB | Adobe PDF


This item is protected by original copyright.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk with details, and we will remove the Work from public display in STORRE and investigate your claim.