Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/30351
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Nogueira, Keiller (en_UK)
dc.contributor.author: Veloso, Adriano Alonso (en_UK)
dc.contributor.author: dos Santos, Jefersson Alex (en_UK)
dc.date.accessioned: 2019-10-29T01:03:08Z
dc.date.available: 2019-10-29T01:03:08Z
dc.date.issued: 2016-04 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/30351
dc.description.abstract: In this paper, we present effective algorithms to automatically annotate clothes in social media photos, such as those shared on Facebook and Instagram. Clothing annotation can be informally stated as recognizing, as accurately as possible, the garment items appearing in a query photo. This task brings huge opportunities for recommender and e-commerce systems, such as capturing new fashion trends based on which clothes have been worn most recently. It also poses interesting challenges for existing vision and recognition algorithms, such as distinguishing between similar but different types of clothes, or identifying the pattern of a garment even when it appears in different colors and shapes. We formulate the annotation task as a multi-label and multi-modal classification problem: (i) both image and textual content (i.e., tags about the image) are available for learning classifiers, (ii) the classifiers must recognize a set of labels (i.e., a set of garment items), and (iii) the decision on which labels to assign to the query photo comes from a set of instances used to build a function that separates labels that should be assigned to the query photo from those that should not. Using this configuration, we propose two approaches: (i) a pointwise one, called MMCA, which receives a single image as input, and (ii) a multi-instance classification, called M3CA, also known as the pairwise approach, which uses pairs of images to create the classifiers. We conducted a systematic evaluation of the proposed algorithms using everyday photos collected from two major fashion-related social media sites, namely pose.com and chictopia.com. Our results show that the proposed approaches yield accuracy improvements ranging from 20% to 30% over popular first-choice multi-label, multi-modal, multi-instance algorithms. (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: Springer Science and Business Media LLC (en_UK)
dc.relation: Nogueira K, Veloso AA & dos Santos JA (2016) Pointwise and pairwise clothing annotation: combining features from social media. <i>Multimedia Tools and Applications</i>, 75 (7), pp. 4083-4113. https://doi.org/10.1007/s11042-015-3087-2 (en_UK)
dc.rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. (en_UK)
dc.rights.uri: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved (en_UK)
dc.subject: Media Technology (en_UK)
dc.subject: Computer Networks and Communications (en_UK)
dc.subject: Hardware and Architecture (en_UK)
dc.subject: Software (en_UK)
dc.subject: Image annotation (en_UK)
dc.subject: Clothing annotation (en_UK)
dc.subject: Bag of visual words (en_UK)
dc.subject: Machine learning (en_UK)
dc.subject: Multi-modal (en_UK)
dc.subject: Multi-instance (en_UK)
dc.subject: Multi-label (en_UK)
dc.title: Pointwise and pairwise clothing annotation: combining features from social media (en_UK)
dc.type: Journal Article (en_UK)
dc.rights.embargodate: 2999-12-31 (en_UK)
dc.rights.embargoreason: [Nogueira et al-MTA-2016.pdf] The publisher does not allow this work to be made publicly available in this Repository, therefore there is an embargo on the full text of the work. (en_UK)
dc.identifier.doi: 10.1007/s11042-015-3087-2 (en_UK)
dc.citation.jtitle: Multimedia Tools and Applications (en_UK)
dc.citation.issn: 1573-7721 (en_UK)
dc.citation.issn: 1380-7501 (en_UK)
dc.citation.volume: 75 (en_UK)
dc.citation.issue: 7 (en_UK)
dc.citation.spage: 4083 (en_UK)
dc.citation.epage: 4113 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.citation.peerreviewed: Refereed (en_UK)
dc.type.status: VoR - Version of Record (en_UK)
dc.contributor.funder: Brazilian National Research Council (en_UK)
dc.author.email: keiller.nogueira@stir.ac.uk (en_UK)
dc.citation.date: 08/12/2015 (en_UK)
dc.contributor.affiliation: Federal University of Minas Gerais (en_UK)
dc.contributor.affiliation: Federal University of Minas Gerais (en_UK)
dc.contributor.affiliation: Federal University of Minas Gerais (en_UK)
dc.identifier.isi: WOS:000373172200025 (en_UK)
dc.identifier.scopusid: 2-s2.0-84949486301 (en_UK)
dc.identifier.wtid: 1469466 (en_UK)
dc.contributor.orcid: 0000-0003-3308-6384 (en_UK)
dc.date.accepted: 2015-11-17 (en_UK)
dcterms.dateAccepted: 2015-11-17 (en_UK)
dc.date.filedepositdate: 2019-10-25 (en_UK)
rioxxterms.apc: not required (en_UK)
rioxxterms.type: Journal Article/Review (en_UK)
rioxxterms.version: VoR (en_UK)
local.rioxx.author: Nogueira, Keiller|0000-0003-3308-6384 (en_UK)
local.rioxx.author: Veloso, Adriano Alonso| (en_UK)
local.rioxx.author: dos Santos, Jefersson Alex| (en_UK)
local.rioxx.project: Project ID unknown|Brazilian National Research Council| (en_UK)
local.rioxx.freetoreaddate: 2265-11-09 (en_UK)
local.rioxx.licence: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved|| (en_UK)
local.rioxx.filename: Nogueira et al-MTA-2016.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 1573-7721 (en_UK)
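The abstract above formulates clothing annotation as a multi-label, multi-modal classification problem: visual and textual features are combined, and a classifier assigns a set of garment labels to each photo. The sketch below illustrates only that general formulation with a generic one-vs-rest classifier; it is not the MMCA or M3CA algorithms from the paper, and the feature dimensions, label set, and data are hypothetical placeholders.

```python
# Illustrative multi-label, multi-modal classification sketch (NOT the
# paper's MMCA/M3CA methods). Synthetic "visual" and "textual" feature
# vectors are concatenated, and a one-vs-rest classifier predicts a
# binary indicator per garment label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Hypothetical features: 8-dim visual descriptors plus 4-dim tag vectors.
visual = rng.normal(size=(60, 8))
textual = rng.normal(size=(60, 4))
X = np.hstack([visual, textual])        # multi-modal: combine both modalities

# Multi-label target: each photo may show several garment items at once
# (columns could stand for e.g. shirt, skirt, boots).
Y = rng.integers(0, 2, size=(60, 3))
Y[0] = 0  # ensure every label column contains both classes
Y[1] = 1

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X[:5])
print(pred.shape)  # (5, 3): one binary decision per garment label
```

In a real pipeline the random features would be replaced by image descriptors (e.g. bag of visual words, as listed in the subject keywords) and a vector representation of the photo's tags.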
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format | Access
Nogueira et al-MTA-2016.pdf | Fulltext - Published Version | 3.25 MB | Adobe PDF | Under Permanent Embargo (Request a copy)


This item is protected by original copyright



Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.