
    A Latent Clothing Attribute Approach for Human Pose Estimation

    As a fundamental technique that concerns several vision tasks such as image parsing, action recognition, and clothing retrieval, human pose estimation (HPE) has been extensively investigated in recent years. To achieve accurate and reliable estimation of the human pose, it is well recognized that clothing attributes are useful and should be utilized properly. Most previous approaches, however, require manual annotation of the clothing attributes and are therefore costly. In this paper, we propose and explore a \emph{latent} clothing attribute approach for HPE. Unlike previous approaches, ours models the clothing attributes as latent variables and thus requires no explicit labeling of the clothing attributes. The inference of the latent variables is accomplished within the framework of latent structured support vector machines (LSSVM). We employ an \emph{alternating direction} strategy to train the LSSVM model: in each iteration, one kind of variable (e.g., human pose or clothing attribute) is fixed and the others are optimized. Our extensive experiments on two real-world benchmarks show the state-of-the-art performance of our proposed approach.
    Comment: accepted to ACCV 2014; preceding work: http://arxiv.org/abs/1404.492
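    The alternating-direction training loop described above is easy to illustrate. Below is a minimal sketch, assuming a toy joint feature map and a supplied contrastive pose per example in place of full loss-augmented inference; the data format, feature map, and learning rate are illustrative stand-ins, not the paper's.

```python
import numpy as np

def joint_feature(x, pose, attr):
    # Hypothetical joint feature map phi(x, pose, attr); a real model would
    # use structured pose/attribute features rather than this toy encoding.
    return np.concatenate([x * pose, x * attr])

def infer_latent(w, x, pose, attr_space):
    # Latent-completion step: with w and the pose fixed, pick the best
    # latent clothing attribute by maximizing the score.
    scores = [w @ joint_feature(x, pose, a) for a in attr_space]
    return attr_space[int(np.argmax(scores))]

def train_lssvm(data, attr_space, dim, iters=20, lr=0.1, C=1.0):
    # data: list of (x, pose_gt, pose_neg), where x is a length-dim vector,
    # pose_gt the annotated pose, and pose_neg a contrastive pose.
    w = np.zeros(2 * dim)
    for _ in range(iters):
        for x, pose_gt, pose_neg in data:
            # Fix the attribute variables via latent completion...
            a = infer_latent(w, x, pose_gt, attr_space)
            # ...then take a subgradient step on the structured hinge loss
            # with the attribute held fixed.
            margin = 1.0 + w @ joint_feature(x, pose_neg, a) \
                         - w @ joint_feature(x, pose_gt, a)
            if margin > 0:
                w += lr * C * (joint_feature(x, pose_gt, a)
                               - joint_feature(x, pose_neg, a))
            w -= lr * w / len(data)  # gradient of the L2 regularizer
    return w
```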

    Deep View-Sensitive Pedestrian Attribute Inference in an end-to-end Model

    Pedestrian attribute inference is a demanding problem in visual surveillance that can facilitate person retrieval, search, and indexing. To exploit semantic relations between attributes, recent research treats it as a multi-label image classification task. The visual cues hinting at attributes can be strongly localized, and inference of person attributes such as hair, backpack, or shorts is highly dependent on the acquired view of the pedestrian. In this paper we exploit this dependence in an end-to-end learning framework and show that view-sensitive attribute inference is able to learn better attribute predictions. Our proposed model jointly predicts the coarse pose (view) of the pedestrian and learns specialized view-specific multi-label attribute predictions. We show in an extensive evaluation on three challenging datasets (PETA, RAP and WIDER) that our proposed end-to-end view-aware attribute prediction model provides competitive performance and improves on the published state of the art on these datasets.
    Comment: accepted to BMVC 201
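    A hedged PyTorch sketch of the view-sensitive idea follows: a shared backbone, a coarse view classifier, and one multi-label attribute head per view, mixed by the predicted view probabilities. The backbone, layer sizes, and the three-view assumption are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ViewSensitiveAttributeNet(nn.Module):
    def __init__(self, num_views=3, num_attrs=35, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(        # stand-in for a CNN backbone
            nn.LazyConv2d(64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.LazyLinear(feat_dim), nn.ReLU(),
        )
        self.view_head = nn.Linear(feat_dim, num_views)
        self.attr_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_attrs) for _ in range(num_views)]
        )

    def forward(self, x):
        f = self.backbone(x)
        view_logits = self.view_head(f)             # coarse pose (view)
        view_prob = view_logits.softmax(dim=1)      # (B, V)
        per_view = torch.stack([h(f) for h in self.attr_heads], dim=1)
        # Soft mixture of view-specific attribute logits, weighted by the
        # predicted view probabilities.
        attr_logits = (view_prob.unsqueeze(-1) * per_view).sum(dim=1)
        return view_logits, attr_logits

# Training would combine a cross-entropy loss on view_logits with a
# binary cross-entropy (multi-label) loss on attr_logits.
```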

    Multi-task deep learning models in visual fashion understanding

    Visual fashion understanding (VFU) is a discipline which aims to solve tasks related to clothing recognition, such as garment categorization, garment attribute prediction, or clothes retrieval, using computer vision algorithms trained on fashion-related data. Having surveyed the VFU-related scientific literature, I conclude that, because the same issue of visually understanding garments lies at the heart of all VFU tasks, those tasks are in fact related. I present the hypothesis that building larger multi-task learning models dedicated to predicting multiple VFU tasks at once might lead to better generalization properties of VFU models. I assess the validity of this hypothesis by implementing two deep learning solutions dedicated primarily to category and attribute prediction. The first solution uses the multi-task learning concept of sharing features from an additional branch dedicated to the localization task of predicting landmark positions; the second does not share knowledge from the localization branch. A comparison of the two implementations confirmed the hypothesis, as sharing knowledge between tasks increased category prediction accuracy by 53% and attribute prediction recall by 149%. I conclude that multi-task learning improves the generalization properties of deep learning-based visual fashion understanding models across tasks.
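    The knowledge-sharing idea of the first solution can be sketched as follows: a shared backbone, a landmark-localization branch, and category/attribute heads that consume backbone features concatenated with landmark-branch features. The backbone, all layer sizes, and the eight-landmark assumption are illustrative, not the thesis's exact design.

```python
import torch
import torch.nn as nn

class MultiTaskFashionNet(nn.Module):
    def __init__(self, num_categories=50, num_attrs=1000, num_landmarks=8):
        super().__init__()
        self.backbone = nn.Sequential(        # stand-in for a CNN backbone
            nn.LazyConv2d(64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(),
        )
        self.landmark_branch = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
        self.landmark_out = nn.Linear(256, num_landmarks * 2)  # (x, y) pairs
        self.category_head = nn.Linear(512 + 256, num_categories)
        self.attr_head = nn.Linear(512 + 256, num_attrs)

    def forward(self, x):
        f = self.backbone(x)
        lm_feat = self.landmark_branch(f)        # features shared across tasks
        landmarks = self.landmark_out(lm_feat)   # localization task output
        fused = torch.cat([f, lm_feat], dim=1)   # knowledge sharing
        return landmarks, self.category_head(fused), self.attr_head(fused)
```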

    A Review: Analysis of Image Feature Selection Using Group Structure

    Online selection of features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of the feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as online group feature selection. The problem assumes that features are generated individually but that there are group structures in the feature stream. To the best of our knowledge, this is the first time that the relation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we use a linear regression model to select an optimal subset. This two-stage procedure continues until no more features arrive or some predefined stopping conditions are met. Finally, we apply our method to several tasks, including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods.
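    The two-stage OGFS procedure can be sketched as below. A Fisher-style discriminability score stands in for the paper's spectral criterion, and a Lasso regression implements the inter-group sparse selection; the thresholds and regularization strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fisher_score(col, y):
    # Between-class over within-class variance of a single feature column.
    classes = np.unique(y)
    means = np.array([col[y == c].mean() for c in classes])
    within = sum(col[y == c].var() for c in classes) + 1e-12
    return means.var() / within

def ogfs_step(selected, group, y, intra_thresh=0.1, alpha=0.01):
    # Stage 1: intra-group selection on the newly arrived feature group.
    keep = [j for j in range(group.shape[1])
            if fisher_score(group[:, j], y) > intra_thresh]
    pool = np.hstack([selected, group[:, keep]])
    if pool.shape[1] == 0:
        return pool  # nothing discriminative has arrived yet
    # Stage 2: inter-group selection via a sparse linear (Lasso) model.
    coef = Lasso(alpha=alpha).fit(pool, y).coef_
    return pool[:, np.abs(coef) > 1e-8]
```

    Initializing selected = np.empty((len(y), 0)) and calling ogfs_step once per arriving group mimics the streaming setting.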

    A Federated Approach for Fine-Grained Classification of Fashion Apparel

    As online retail services proliferate and become pervasive in modern life, applications for classifying fashion apparel features from image data are becoming ever more indispensable. Online retailers, from leading companies to start-ups, can leverage such applications to increase profit margins and enhance the consumer experience. Many notable schemes have been proposed to classify fashion items; however, the majority have focused on classifying basic-level categories, such as T-shirts, pants, skirts, shoes, and bags. In contrast to most prior efforts, this paper aims to enable an in-depth classification of fashion item attributes within the same category. Beginning with a single dress, we seek to classify the type of dress hem, the hem length, and the sleeve length. The proposed scheme comprises three major stages: (a) localization of a target item from an input image using semantic segmentation, (b) detection of human key points (e.g., point of shoulder) using a pre-trained CNN and a bounding box, and (c) three phases to classify the attributes using a combination of algorithmic approaches and deep neural networks. The experimental results demonstrate that the proposed scheme is highly effective, with all categories achieving an average precision above 93.02%, and outperforms existing Convolutional Neural Network (CNN)-based schemes.
    Comment: 11 pages, 4 figures, 5 tables, submitted to IEEE ACCESS (under review)
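    The three-stage structure of the scheme can be expressed as a small pipeline skeleton. The stage implementations (segmentation network, keypoint detector, per-attribute classifiers) are injected as callables, since none of the paper's actual models are reproduced here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple
import numpy as np

@dataclass
class FashionAttributePipeline:
    # Stage implementations are supplied by the caller.
    segment: Callable[[np.ndarray], np.ndarray]            # image -> garment mask
    keypoints: Callable[[np.ndarray], Dict[str, Tuple[int, int]]]
    classifiers: Dict[str, Callable[[np.ndarray, Dict[str, Tuple[int, int]]], str]]

    def run(self, image: np.ndarray) -> Dict[str, str]:
        mask = self.segment(image)                # (a) localize the garment
        garment = image * mask[..., None]         # keep only garment pixels
        kpts = self.keypoints(image)              # (b) detect human key points
        # (c) classify each attribute (hem type, hem length, sleeve length)
        # with its own classifier, conditioned on the key points.
        return {name: clf(garment, kpts)
                for name, clf in self.classifiers.items()}
```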

    Research on 3D body shapes and robot control algorithms for a virtual fitting room

    The electronic version of this thesis does not include the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labor and physical effort required during the fitting phase. Nevertheless, most previously proposed computer vision and graphics methods have failed to model the human body accurately and realistically, especially when it comes to 3D modeling of the whole body, which requires large amounts of data and computation. The failure is caused mainly by an inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analyzing both the user's body and the prospective garment, modeling them, extracting measurements and assigning reference points, segmenting the 3D visual data imported from the mannequins, and finally superimposing, adapting, and rendering the resulting garment model on the user's body. During the project, visual data were gathered using a 3D laser scanner and the Kinect optical camera and organized into a usable database. These data were used to test the algorithms devised, which chiefly provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study.
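    As one concrete illustration of the size-advisor component mentioned above, the sketch below matches body measurements extracted from a scan against a garment size chart and recommends the closest size; the measurement names and chart values are invented for illustration, not taken from the thesis.

```python
# Hypothetical measurements (cm) and size chart, for illustration only.
def recommend_size(body, chart):
    # Pick the size whose chart measurements are closest in squared error.
    def fit_error(size_measurements):
        return sum((body[k] - v) ** 2 for k, v in size_measurements.items())
    return min(chart, key=lambda s: fit_error(chart[s]))

chart = {"S": {"chest": 88, "waist": 72},
         "M": {"chest": 96, "waist": 80},
         "L": {"chest": 104, "waist": 88}}
print(recommend_size({"chest": 94, "waist": 78}, chart))  # -> M
```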

    Landmarks for clothing retrieval

    Clothing retrieval is a task that is becoming increasingly popular with the rise of online shopping and the popularity of social media. We propose to solve the clothing retrieval problem using landmarks based on the clothing type, together with features surrounding the landmarks, to obtain a more fine-grained view of the design. We compare this method with other models, most of which use whole images as inputs, and show the superiority of a model that gives importance to the crucial parts of the images. For the blouses subset of the DeepFashion dataset [1], we obtain a 16% increase in accuracy for the top-3 retrieval results, 14% for the top-5, and 11% for the top-10 when using the keypoint extraction methods combined with whole images, compared to whole images alone as inputs. We also observe that the retrieved clothes are more similar in terms of design as well as high-level properties such as sleeve size and folded vs. non-folded sleeves.
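    A minimal sketch of landmark-based retrieval in this spirit: crop a patch around each landmark, embed the patches and the whole image with some feature extractor, concatenate, and rank the gallery by cosine similarity. The embed function, patch size, and crop logic are assumptions, not the paper's method.

```python
import numpy as np

def descriptor(image, landmarks, embed, patch=32):
    # embed: any feature extractor mapping an image crop to a 1-D vector.
    parts = [embed(image)]                     # whole-image feature
    half = patch // 2
    for (x, y) in landmarks:                   # local view around each landmark
        parts.append(embed(image[max(0, y - half):y + half,
                                 max(0, x - half):x + half]))
    v = np.concatenate(parts)
    return v / (np.linalg.norm(v) + 1e-12)     # unit-normalize for cosine

def top_k(query_desc, gallery_descs, k=10):
    # gallery_descs: (N, D) array of unit-normalized descriptors.
    sims = gallery_descs @ query_desc          # cosine similarities
    return np.argsort(-sims)[:k]               # indices of the k best matches
```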

    A region-based image caption generator with refined descriptions

    Describing the content of an image is a challenging task: detailed description requires the detection and recognition of objects, people, relationships, and associated attributes. Currently, the majority of existing research relies on holistic techniques, which may lose details relating to important aspects of a scene. To deal with this challenge, we propose a novel region-based deep learning architecture for image description generation. It employs a regional object detector, recurrent neural network (RNN)-based attribute prediction, and an encoder–decoder language generator embedded with two RNNs to produce refined and detailed descriptions of a given image. Most importantly, the proposed system takes a local, region-based approach to improve upon existing holistic methods, focusing specifically on image regions containing people and objects. Evaluated on the IAPR TC-12 dataset, the proposed system shows impressive performance and outperforms state-of-the-art methods on various evaluation metrics. In particular, the proposed system shows superiority over existing methods when dealing with cross-domain indoor scene images.
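    A hedged PyTorch sketch of the described architecture follows: per-region features from an external detector feed an RNN that predicts attributes, while an encoder-decoder RNN pair generates the description. The use of GRUs, the dimensions, and the teacher-forced decoding are illustrative assumptions, not the paper's exact networks.

```python
import torch
import torch.nn as nn

class RegionCaptioner(nn.Module):
    def __init__(self, region_dim=512, hid=256, vocab=10000, num_attrs=100):
        super().__init__()
        self.attr_rnn = nn.GRU(region_dim, hid, batch_first=True)
        self.attr_out = nn.Linear(hid, num_attrs)    # per-region attributes
        self.encoder = nn.GRU(region_dim, hid, batch_first=True)
        self.embed = nn.Embedding(vocab, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.word_out = nn.Linear(hid, vocab)

    def forward(self, regions, captions_in):
        # regions: (B, R, region_dim) features from an external object detector.
        attr_h, _ = self.attr_rnn(regions)
        attrs = self.attr_out(attr_h)                # (B, R, num_attrs)
        _, enc_h = self.encoder(regions)             # scene summary state
        dec_in = self.embed(captions_in)             # teacher-forced tokens
        dec_h, _ = self.decoder(dec_in, enc_h)
        return attrs, self.word_out(dec_h)           # (B, T, vocab) logits
```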

    Towards Attribute-Aware Cross-Domain Image Retrieval

    Ph.D. thesis (Doctor of Philosophy)