Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/27414
Appears in Collections: Computing Science and Mathematics Journal Articles
Peer Review Status: Refereed
Title: Visual Attention-Based Image Watermarking
Author(s): Bhowmik, Deepayan; Oakes, Matthew; Abhayaratne, Charith
Keywords: Visual saliency; wavelet watermarking; robustness; subjective test
Issue Date: 31-Dec-2016
Date Deposited: 20-Jun-2018
Citation: Bhowmik D, Oakes M & Abhayaratne C (2016) Visual Attention-Based Image Watermarking. IEEE Access, 4, pp. 8002-8018. https://doi.org/10.1109/access.2016.2627241
Abstract: Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength watermarking achieves good robustness but often introduces distortions that degrade the visual quality of the host media. If the distortion caused by high-strength watermarking avoids visually attentive regions, it is unlikely to be noticed by a viewer. In this paper, we exploit this concept and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower-strength and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that allows us to design new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in joint saliency detection and computational complexity. In evaluating watermarking performance, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% against JPEG2000 compression and 40% against common filtering attacks are reported over existing algorithms that do not use a visual attention model.
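The abstract describes modulating watermark strength with a wavelet-domain saliency map: weaker embedding where attention is drawn, stronger embedding elsewhere. The Python sketch below illustrates that general idea only; it is not the paper's algorithm. The function name, the strength values alpha_low/alpha_high, the 0.5 saliency threshold, and the choice of the horizontal-detail subband are illustrative assumptions. It assumes a grayscale host image and a same-size saliency map in [0, 1], and uses the PyWavelets library.

```python
# Minimal sketch of saliency-modulated wavelet-domain watermark embedding.
# Illustrative only; parameters and embedding rule are assumptions, not the paper's.
import numpy as np
import pywt

def embed_watermark(image, saliency, watermark_bits,
                    alpha_low=0.02, alpha_high=0.10, wavelet="haar"):
    """Embed a binary watermark: low strength in salient regions, high elsewhere."""
    # One-level 2-D DWT of the host image.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)

    # Bring the saliency map down to the detail-subband resolution.
    sal = saliency[::2, ::2][:cH.shape[0], :cH.shape[1]]

    # Per-coefficient strength: attentive pixels get alpha_low, the rest alpha_high.
    alpha = np.where(sal > 0.5, alpha_low, alpha_high)

    # Map watermark bits to +/-1 and tile them over the horizontal-detail subband.
    w = np.resize(np.where(np.asarray(watermark_bits) > 0, 1.0, -1.0), cH.shape)

    # Strength-scaled embedding on the detail coefficients.
    cH_marked = cH + alpha * np.abs(cH) * w

    # Inverse DWT reconstructs the watermarked image.
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet)
```

In this toy version, robustness-imperceptibility trade-off is controlled entirely by the two strength constants; the paper instead derives the saliency map itself in the wavelet domain and applies it to both blind and non-blind embedding schemes.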
DOI Link: 10.1109/access.2016.2627241
Rights: © 2016 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission.
Files in This Item:

File | Description | Size | Format
---|---|---|---
07740049.pdf | Fulltext - Accepted Version | 2.64 MB | Adobe PDF
This item is protected by original copyright.
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.