http://hdl.handle.net/1893/30351
Appears in Collections: | Computing Science and Mathematics Journal Articles |
Peer Review Status: | Refereed |
Title: | Pointwise and pairwise clothing annotation: combining features from social media |
Author(s): | Nogueira, Keiller; Veloso, Adriano Alonso; dos Santos, Jefersson Alex |
Contact Email: | keiller.nogueira@stir.ac.uk |
Keywords: | Media Technology; Computer Networks and Communications; Hardware and Architecture; Software; Image annotation; Clothing annotation; Bag of visual words; Machine learning; Multi-modal; Multi-instance; Multi-label |
Issue Date: | Apr-2016 |
Date Deposited: | 25-Oct-2019 |
Citation: | Nogueira K, Veloso AA & dos Santos JA (2016) Pointwise and pairwise clothing annotation: combining features from social media. *Multimedia Tools and Applications*, 75 (7), pp. 4083-4113. https://doi.org/10.1007/s11042-015-3087-2 |
Abstract: | In this paper, we present effective algorithms to automatically annotate clothes from social media data, such as Facebook and Instagram. Clothing annotation can be informally stated as recognizing, as accurately as possible, the garment items appearing in a query photo. This task brings huge opportunities for recommender and e-commerce systems, such as capturing new fashion trends based on which clothes have been worn most recently. It also poses interesting challenges for existing vision and recognition algorithms, such as distinguishing between similar but distinct types of clothes, or identifying the pattern of a garment even when it appears in different colors and shapes. We formulate the annotation task as a multi-label and multi-modal classification problem: (i) both image and textual content (i.e., tags about the image) are available for learning classifiers, (ii) the classifiers must recognize a set of labels (i.e., a set of garment items), and (iii) the decision on which labels to assign to the query photo comes from a set of instances used to build a function that separates labels that should be assigned to the query photo from those that should not. Using this configuration, we propose two approaches: (i) a pointwise approach, called MMCA, which receives a single image as input, and (ii) a multi-instance classification, called M3CA, also known as the pairwise approach, which uses pairs of images to create the classifiers. We conducted a systematic evaluation of the proposed algorithms using everyday photos collected from two major fashion-related social media sites, namely pose.com and chictopia.com. Our results show that the proposed approaches provide improvements of 20% to 30% in accuracy over popular first-choice multi-label, multi-modal, multi-instance algorithms. |
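Editor's note: the sketch below is not the authors' MMCA or M3CA implementation; it is only a minimal illustration of the multi-modal, multi-label formulation described in the abstract, assuming scikit-learn, synthetic stand-in features in place of real bag-of-visual-words histograms, and hypothetical tag/label data.

```python
# Minimal sketch of a multi-modal, multi-label clothing annotator (pointwise setting).
# Visual features (a stand-in for bag-of-visual-words histograms) are concatenated
# with TF-IDF features built from the photo's social-media tags, and a one-vs-rest
# classifier decides which garment labels to assign to a query photo.
# All data here is synthetic; feature and model choices are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

rng = np.random.default_rng(0)

# Toy "visual" features: pretend each row is a 64-bin bag-of-visual-words histogram.
visual = rng.random((4, 64))

# Toy textual content: user tags attached to each photo.
tags = ["summer dress floral", "jeans tshirt casual",
        "dress heels party", "jacket jeans boots"]

# Multi-label ground truth: the set of garment items present in each photo.
labels = [{"dress"}, {"jeans", "t-shirt"}, {"dress"}, {"jacket", "jeans"}]

# Encode tags with TF-IDF and stack them next to the visual histograms.
vectorizer = TfidfVectorizer()
textual = vectorizer.fit_transform(tags).toarray()
X = np.hstack([visual, textual])

# Binarize the label sets and train one binary classifier per garment label.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Pointwise prediction for a query photo: which garment labels should be assigned?
query = np.hstack([rng.random((1, 64)),
                   vectorizer.transform(["floral dress"]).toarray()])
print(mlb.inverse_transform(clf.predict(query)))
```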
DOI Link: | 10.1007/s11042-015-3087-2 |
Rights: | The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. |
Licence URL(s): | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved |
File | Description | Size | Format | Access
---|---|---|---|---
Nogueira et al-MTA-2016.pdf | Fulltext - Published Version | 3.25 MB | Adobe PDF | Under Permanent Embargo (Request a copy)
Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author using the Request a Copy feature above. However, this facility is dependent on the depositor still being contactable at their original email address.
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.