Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/27523
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Oakes, Matthew | en_UK |
dc.contributor.author | Bhowmik, Deepayan | en_UK |
dc.contributor.author | Abhayaratne, Charith | en_UK |
dc.date.accessioned | 2018-07-20T00:02:51Z | - |
dc.date.available | 2018-07-20T00:02:51Z | - |
dc.date.issued | 2016-11-30 | en_UK |
dc.identifier.other | 061624 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/27523 | - |
dc.description.abstract | Imperceptibility and robustness are two key but conflicting requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength watermarking achieves good robustness but often suffers from embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM combines spatial and temporal cues for visual saliency: the spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, yielding a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps, and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding high-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in terms of both saliency detection and computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methods that do not use a VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking. (A minimal illustrative sketch of the saliency-weighted embedding idea follows this metadata table.) | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | SPIE-Intl Soc Optical Eng | en_UK |
dc.relation | Oakes M, Bhowmik D & Abhayaratne C (2016) Global motion compensated visual attention-based video watermarking. Journal of Electronic Imaging, 25 (6), Art. No.: 061624. https://doi.org/10.1117/1.jei.25.6.061624 | en_UK |
dc.rights | © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. | en_UK |
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/ | en_UK |
dc.title | Global motion compensated visual attention-based video watermarking | en_UK |
dc.type | Journal Article | en_UK |
dc.identifier.doi | 10.1117/1.jei.25.6.061624 | en_UK |
dc.citation.jtitle | Journal of Electronic Imaging | en_UK |
dc.citation.issn | 1560-229X | en_UK |
dc.citation.issn | 1017-9909 | en_UK |
dc.citation.volume | 25 | en_UK |
dc.citation.issue | 6 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.citation.peerreviewed | Refereed | en_UK |
dc.type.status | VoR - Version of Record | en_UK |
dc.contributor.funder | Engineering and Physical Sciences Research Council | en_UK |
dc.citation.date | 20/12/2016 | en_UK |
dc.contributor.affiliation | University of Buckingham | en_UK |
dc.contributor.affiliation | Sheffield Hallam University | en_UK |
dc.contributor.affiliation | University of Sheffield | en_UK |
dc.identifier.isi | WOS:000397059200038 | en_UK |
dc.identifier.scopusid | 2-s2.0-85007569562 | en_UK |
dc.identifier.wtid | 928605 | en_UK |
dc.contributor.orcid | 0000-0003-1762-1578 | en_UK |
dc.date.accepted | 2016-11-29 | en_UK |
dcterms.dateAccepted | 2016-11-29 | en_UK |
dc.date.filedepositdate | 2018-07-06 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.type | Journal Article/Review | en_UK |
rioxxterms.version | VoR | en_UK |
local.rioxx.author | Oakes, Matthew| | en_UK |
local.rioxx.author | Bhowmik, Deepayan|0000-0003-1762-1578 | en_UK |
local.rioxx.author | Abhayaratne, Charith| | en_UK |
local.rioxx.project | Project ID unknown|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK |
local.rioxx.freetoreaddate | 2018-07-06 | en_UK |
local.rioxx.licence | http://creativecommons.org/licenses/by/3.0/|2018-07-06| | en_UK |
local.rioxx.filename | Oakes et al 2016.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 1560-229X | en_UK |
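
The abstract above describes saliency-weighted embedding: watermark strength is lowered in visually attentive regions and raised elsewhere. The Python sketch below illustrates that idea using NumPy and PyWavelets. It is a minimal sketch under stated assumptions, not the authors' implementation: the saliency map is assumed precomputed (the paper derives it from a motion-compensated wavelet VAM), the 0.5 threshold and the two strengths `alpha_low`/`alpha_high` are illustrative placeholders, and embedding is shown additively in a single detail subband of one frame.

```python
# Illustrative sketch only: saliency-weighted wavelet-domain embedding.
# All names, thresholds, and strengths are assumptions, not the paper's code.
import numpy as np
import pywt


def embed_frame(frame, watermark_bits, saliency, alpha_low=0.02, alpha_high=0.10):
    """Embed watermark bits into one grayscale frame (float array in [0, 1]).

    saliency -- per-pixel attention map in [0, 1]; assumed given here
    (the paper builds it from a motion-compensated wavelet VAM).
    """
    # Two-level weighting map: weak embedding where the eye is drawn
    # (salient regions), stronger embedding in inattentive regions.
    weight = np.where(saliency > 0.5, alpha_low, alpha_high)

    # One-level 2-D DWT; the HL detail subband is used here as an example.
    LL, (LH, HL, HH) = pywt.dwt2(frame, "haar")

    # Downsample the weighting map to subband resolution (even-sized frame assumed).
    w = weight[::2, ::2][: HL.shape[0], : HL.shape[1]]

    # Map bits {0, 1} to {-1, +1} and tile to the subband shape.
    bits = np.resize(np.asarray(watermark_bits, dtype=float) * 2 - 1, HL.shape)

    # Additive embedding, scaled by the per-region weight.
    HL_marked = HL + w * bits

    return pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((256, 256))      # stand-in for a luminance frame
    saliency = rng.random((256, 256))   # stand-in for the VAM output
    bits = rng.integers(0, 2, 1024)     # example watermark payload
    marked = embed_frame(frame, bits, saliency)
    print("max embedding distortion:", np.abs(marked - frame).max())
```

In the paper the weighting is applied across the wavelet decomposition of each frame and is paired with both blind and nonblind extraction; this sketch only shows where the saliency map enters the embedding strength.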
Appears in Collections: | Computing Science and Mathematics Journal Articles |
Files in This Item:
File | Description | Size | Format | Access |
---|---|---|---|---|
Oakes et al 2016.pdf | Fulltext - Published Version | 4.46 MB | Adobe PDF | View/Open |
This item is protected by original copyright
A file in this item is licensed under a Creative Commons License
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.