Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/31322
Appears in Collections:Computing Science and Mathematics Journal Articles
Peer Review Status: Refereed
Title: Robust Visual Saliency Optimization Based on Bidirectional Markov Chains
Author(s): Jiang, Fengling
Kong, Bin
Li, Jingpeng
Dashtipour, Kia
Gogate, Mandar
Contact Email: jingpeng.li@stir.ac.uk
Keywords: Saliency detection
Bidirectional absorbing
Markov chain
Background and foreground possibility
Issue Date: Jan-2021
Date Deposited: 22-Jun-2020
Citation: Jiang F, Kong B, Li J, Dashtipour K & Gogate M (2021) Robust Visual Saliency Optimization Based on Bidirectional Markov Chains. Cognitive Computation, 13 (1), pp. 69-80. https://doi.org/10.1007/s12559-020-09724-6
Abstract: Saliency detection aims to automatically highlight the most important area in an image. Traditional saliency detection methods based on the absorbing Markov chain only take boundary nodes into account and often produce incorrect results when salient objects touch the image boundaries. To address this limitation and enhance saliency detection performance, this paper proposes a novel task-independent saliency detection method based on bidirectional absorbing Markov chains that jointly exploits not only boundary information but also foreground-prior and background-prior cues. More specifically, the input image is first segmented into a number of superpixels, and the nodes on the four boundaries (duplicated as virtual absorbing nodes) are selected. Subsequently, the absorption time of each transient node's random walk to the absorbing state is calculated to obtain the foreground possibility. Simultaneously, the foreground-prior nodes (duplicated as virtual absorbing nodes) are used to calculate the absorption time and obtain the background possibility. The two results are then fused into a combined saliency map, which is further optimized with a cost function. Finally, the superpixel-level saliency results are refined by a regularized random walks ranking model at multiple scales. Comparative experiments on four benchmark datasets show that the proposed method outperforms state-of-the-art methods reported in the literature. The experiments also show that the proposed method is efficient and applicable to bottom-up image saliency detection and other visual processing tasks.
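The absorption-time computation at the heart of this approach can be illustrated with a short sketch. The snippet below is not the authors' implementation: the feature choice, the fully connected affinity graph, the sigma parameter, and the function name absorption_time are illustrative assumptions. Given superpixel feature vectors and a set of node indices treated as absorbing (e.g. the duplicated boundary nodes), it derives each transient node's expected absorption time from the fundamental matrix N = (I - Q)^(-1), whose row sums give the expected number of steps before absorption.

```python
# Minimal sketch of absorption time in an absorbing Markov chain over superpixels.
# Assumptions (not from the paper): mean colour features, a fully connected
# affinity graph, and sigma = 0.1; the paper restricts edges to spatial neighbours.
import numpy as np

def absorption_time(features, absorbing_idx, sigma=0.1):
    """Expected number of steps for each transient node to reach an absorbing node.

    features      : (n, d) array of superpixel feature vectors (e.g. mean CIELab colour)
    absorbing_idx : indices of superpixels treated as (duplicated) absorbing nodes
    """
    n = features.shape[0]
    absorbing = np.zeros(n, dtype=bool)
    absorbing[np.asarray(absorbing_idx)] = True
    transient = ~absorbing

    # Pairwise affinities; diagonal set to zero to avoid self-loops.
    diff = features[:, None, :] - features[None, :, :]
    w = np.exp(-np.linalg.norm(diff, axis=2) / sigma)
    np.fill_diagonal(w, 0.0)

    # Row-normalise to obtain transition probabilities.
    p = w / w.sum(axis=1, keepdims=True)

    # Q: transitions among transient nodes only (rows sum to less than 1,
    # so I - Q is invertible).
    q = p[np.ix_(transient, transient)]

    # Fundamental matrix N = (I - Q)^-1; its row sums are the absorption times.
    fundamental = np.linalg.inv(np.eye(q.shape[0]) - q)
    t = fundamental.sum(axis=1)

    # Map back to all nodes; absorbing nodes have absorption time 0.
    times = np.zeros(n)
    times[transient] = t
    return times

# Toy usage: 6 "superpixels" with 3-D features, nodes 0 and 5 absorbing.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.random((6, 3))
    print(absorption_time(feats, absorbing_idx=[0, 5]))
```

In this reading, a larger absorption time with respect to boundary absorbing nodes means a node is harder to absorb into the background and is therefore more likely to be foreground; running the same computation with foreground-prior nodes as the absorbing set yields the background possibility, which is how the abstract describes the bidirectional use of the chain.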
DOI Link: 10.1007/s12559-020-09724-6
Rights: This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. This is a post-peer-review, pre-copyedit version of an article published in Cognitive Computation. The final authenticated version is available online at: https://doi.org/10.1007/s12559-020-09724-6
Licence URL(s): https://storre.stir.ac.uk/STORREEndUserLicence.pdf

Files in This Item:
File: FnalSubmission.pdf
Description: Fulltext - Accepted Version
Size: 1.96 MB
Format: Adobe PDF



This item is protected by original copyright


