Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/27523
Full metadata record
DC Field | Value | Language
dc.contributor.author | Oakes, Matthew | en_UK
dc.contributor.author | Bhowmik, Deepayan | en_UK
dc.contributor.author | Abhayaratne, Charith | en_UK
dc.date.accessioned | 2018-07-20T00:02:51Z | -
dc.date.available | 2018-07-20T00:02:51Z | -
dc.date.issued | 2016-11-30 | en_UK
dc.identifier.other | 061624 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/27523 | -
dc.description.abstract | Imperceptibility and robustness are two key but conflicting requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength watermarking achieves good robustness but often suffers from embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency: the spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, yielding a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding higher-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in joint saliency detection and computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared with existing watermarking methodology that does not use a VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking. | en_UK
dc.language.iso | en | en_UK
dc.publisher | SPIE-Intl Soc Optical Eng | en_UK
dc.relation | Oakes M, Bhowmik D & Abhayaratne C (2016) Global motion compensated visual attention-based video watermarking. Journal of Electronic Imaging, 25 (6), Art. No.: 061624. https://doi.org/10.1117/1.jei.25.6.061624 | en_UK
dc.rights | © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. | en_UK
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/ | en_UK
dc.title | Global motion compensated visual attention-based video watermarking | en_UK
dc.type | Journal Article | en_UK
dc.identifier.doi | 10.1117/1.jei.25.6.061624 | en_UK
dc.citation.jtitle | Journal of Electronic Imaging | en_UK
dc.citation.issn | 1560-229X | en_UK
dc.citation.issn | 1017-9909 | en_UK
dc.citation.volume | 25 | en_UK
dc.citation.issue | 6 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.citation.peerreviewed | Refereed | en_UK
dc.type.status | VoR - Version of Record | en_UK
dc.contributor.funder | Engineering and Physical Sciences Research Council | en_UK
dc.citation.date | 20/12/2016 | en_UK
dc.contributor.affiliation | University of Buckingham | en_UK
dc.contributor.affiliation | Sheffield Hallam University | en_UK
dc.contributor.affiliation | University of Sheffield | en_UK
dc.identifier.isi | WOS:000397059200038 | en_UK
dc.identifier.scopusid | 2-s2.0-85007569562 | en_UK
dc.identifier.wtid | 928605 | en_UK
dc.contributor.orcid | 0000-0003-1762-1578 | en_UK
dc.date.accepted | 2016-11-29 | en_UK
dcterms.dateAccepted | 2016-11-29 | en_UK
dc.date.filedepositdate | 2018-07-06 | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.type | Journal Article/Review | en_UK
rioxxterms.version | VoR | en_UK
local.rioxx.author | Oakes, Matthew| | en_UK
local.rioxx.author | Bhowmik, Deepayan|0000-0003-1762-1578 | en_UK
local.rioxx.author | Abhayaratne, Charith| | en_UK
local.rioxx.project | Project ID unknown|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK
local.rioxx.freetoreaddate | 2018-07-06 | en_UK
local.rioxx.licence | http://creativecommons.org/licenses/by/3.0/|2018-07-06| | en_UK
local.rioxx.filename | Oakes et al 2016.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 1560-229X | en_UK
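The abstract describes a two-level, saliency-driven embedding strategy: a VAM saliency map is quantized into a two-level weighting parameter map, and the watermark is embedded with lower strength in visually attentive regions. The sketch below illustrates that idea only; the threshold, strength values, and the additive spread-spectrum embedding step are assumptions for demonstration, not the paper's exact algorithm.

```python
import numpy as np

def two_level_weight_map(saliency, threshold=0.5, alpha_low=0.02, alpha_high=0.1):
    """Quantize a saliency map into a two-level embedding-strength map:
    low strength where viewers are likely to look (high saliency), high
    strength elsewhere. All parameter values here are illustrative."""
    smin, smax = saliency.min(), saliency.max()
    norm = (saliency - smin) / (smax - smin) if smax > smin else np.zeros_like(saliency)
    return np.where(norm >= threshold, alpha_low, alpha_high)

def embed_bit(coeffs, bit, weight_map, key=0):
    """Additive spread-spectrum embedding of one payload bit into a block
    of (e.g. wavelet) coefficients, scaled per-coefficient by the
    two-level weight map."""
    rng = np.random.default_rng(key)                  # keyed +/-1 pattern
    pattern = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return coeffs + weight_map * (1.0 if bit else -1.0) * pattern
```

Because the pattern is ±1, the per-coefficient distortion equals the weight-map value, so attentive regions receive the smaller perturbation while non-attentive regions carry the stronger, more robust mark.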
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format
Oakes et al 2016.pdf | Fulltext - Published Version | 4.46 MB | Adobe PDF


This item is protected by original copyright



A file in this item is licensed under a Creative Commons License.
