Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/27523
Appears in Collections:Computing Science and Mathematics Journal Articles
Peer Review Status: Refereed
Title: Global motion compensated visual attention-based video watermarking
Author(s): Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
Issue Date: 30-Nov-2016
Date Deposited: 6-Jul-2018
Citation: Oakes M, Bhowmik D & Abhayaratne C (2016) Global motion compensated visual attention-based video watermarking. Journal of Electronic Imaging, 25 (6), Art. No.: 061624. https://doi.org/10.1117/1.jei.25.6.061624
Abstract: Imperceptibility and robustness are two key but competing requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength schemes achieve good robustness but often introduce embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency: the spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, yielding a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding high-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in saliency detection while offering lower computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to an equivalent watermarking method that does not use the VAM. The proposed visual attention-based video watermarking attains visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
DOI Link: 10.1117/1.jei.25.6.061624
Rights: © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Licence URL(s): http://creativecommons.org/licenses/by/3.0/
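
The embedding rule described in the abstract can be made concrete: a two-level weighting map is derived from the VAM saliency output, and the watermark strength is reduced wherever visual attention is high. The sketch below is a minimal, hypothetical Python illustration (assuming NumPy and PyWavelets); the Haar wavelet, the HH subband, the 0.5 saliency threshold, and the strengths alpha_low/alpha_high are illustrative assumptions, not the authors' published choices.

```python
# Hypothetical sketch of saliency-adaptive wavelet-domain embedding.
# All thresholds, strengths, and subband choices are illustrative
# assumptions, not the published parameters of Oakes et al. (2016).
import numpy as np
import pywt


def resize_to(saliency, shape):
    """Nearest-neighbour resample of a saliency map to a subband's shape."""
    ys = np.linspace(0, saliency.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, saliency.shape[1] - 1, shape[1]).astype(int)
    return saliency[np.ix_(ys, xs)]


def two_level_weight_map(saliency, threshold=0.5,
                         alpha_low=0.10, alpha_high=0.02):
    """Two-level weighting: visually attentive (salient) regions get the
    weaker embedding strength, non-salient regions the stronger one."""
    return np.where(saliency > threshold, alpha_high, alpha_low)


def embed_frame(frame, bits, saliency):
    """Embed a binary watermark into one greyscale frame by additively
    modulating the level-1 HH wavelet subband (spread-spectrum style)."""
    ll, (lh, hl, hh) = pywt.dwt2(frame.astype(np.float64), "haar")
    alpha = two_level_weight_map(resize_to(saliency, hh.shape))
    # Map bits {0, 1} to a +/-1 sequence tiled over the subband.
    wm = np.resize(np.where(np.asarray(bits, bool), 1.0, -1.0), hh.shape)
    hh_marked = hh + alpha * np.abs(hh) * wm
    return pywt.idwt2((ll, (lh, hl, hh_marked)), "haar")
```

For example, embed_frame(frame, key_bits, vam_map) would watermark a single frame given a saliency map normalised to [0, 1]; a full implementation would iterate over the video frames and reuse the motion-compensated saliency maps. Concentrating the stronger strength in low-saliency regions is what lets the scheme keep robustness high where the distortion is least noticeable.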

Files in This Item:
File: Oakes et al 2016.pdf
Description: Fulltext - Published Version
Size: 4.46 MB
Format: Adobe PDF

This item is protected by original copyright.

A file in this item is licensed under a Creative Commons Licence.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk, providing details, and we will remove the Work from public display in STORRE and investigate your claim.