
    High-end fashion manufacturing in the UK - product, process and vision. Recommendations for education, training and accreditation

    The Centre for Fashion Enterprise (CFE) was commissioned by the Department for Culture, Media and Sport (DCMS) to undertake a feasibility study to fully explore the market need for a new high-end production hub. This was in direct response to the need highlighted in the DCMS report, Creative Britain - New Talents for the New Economy, published in 2008. In addition to finding a need for a sampling and innovation facility (outlined in a separate document), the study identified significant problems relating to education and skills training in the sector. This report gives recommendations on how these might be addressed, as well as a recommendation for an accreditation scheme that would aim to raise production quality standards within the sector.

    Large Scale Visual Recommendations From Street Fashion Images

    We describe a fully automated, large-scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data-driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval, and Markov Chain LDA for solving this problem. We analyze the relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. Finally, we outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future vision research.
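
    As a rough illustration of one of the four models above, the sketch below fits a Gaussian Mixture Model to the colours of items that historically co-occur with a query garment and ranks candidate recommendations by their likelihood under that mixture. The mean-RGB features, synthetic data, and component count are illustrative assumptions, not the paper's actual pipeline.

    ```python
    # Hypothetical sketch of the Gaussian-Mixture idea: model the colour
    # distribution of items seen worn with a query garment, then rank
    # candidate items by log-likelihood under that mixture.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Stand-in features: mean RGB colour of garments that co-occurred with
    # the query item (two synthetic colour clusters, e.g. reds and blues).
    cooccurring_colors = rng.normal(loc=[[0.8, 0.1, 0.1], [0.1, 0.1, 0.6]],
                                    scale=0.05, size=(200, 2, 3)).reshape(-1, 3)

    gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
    gmm.fit(cooccurring_colors)

    # Candidate items to recommend, each summarised by its mean colour.
    candidates = rng.uniform(0.0, 1.0, size=(50, 3))
    scores = gmm.score_samples(candidates)      # log-likelihood per candidate
    top5 = np.argsort(scores)[::-1][:5]
    print("top complementary candidates:", top5)
    ```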

    VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback

    Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However, one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold-start issues and to qualitatively analyze the visual dimensions that influence people's opinions.
    Comment: AAAI'16
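
    The core of the predictor VBPR describes decomposes a preference score into bias terms, a latent user-item interaction, and a visual interaction obtained by projecting frozen CNN features through a learned embedding. The numpy sketch below shows that scoring function; the dimensions and random initialisation are placeholders for parameters the paper learns with BPR-style updates on (user, purchased item, unobserved item) triplets.

    ```python
    # Minimal numpy sketch of a VBPR-style predictor (illustrative, untrained).
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items = 100, 500
    K, Kv, F = 10, 10, 4096        # latent dims, visual dims, CNN feature dim

    alpha   = 0.0                                 # global offset
    beta_u  = rng.normal(0, 0.1, n_users)         # user bias
    beta_i  = rng.normal(0, 0.1, n_items)         # item bias
    gamma_u = rng.normal(0, 0.1, (n_users, K))    # latent user factors
    gamma_i = rng.normal(0, 0.1, (n_items, K))    # latent item factors
    theta_u = rng.normal(0, 0.1, (n_users, Kv))   # visual user factors
    E       = rng.normal(0, 0.1, (Kv, F))         # learned visual projection
    f       = rng.normal(0, 1.0, (n_items, F))    # frozen pre-trained CNN features

    def score(u, i):
        """Predicted preference of user u for item i (higher = preferred)."""
        visual = theta_u[u] @ (E @ f[i])
        return alpha + beta_u[u] + beta_i[i] + gamma_u[u] @ gamma_i[i] + visual

    # BPR compares an observed item i against an unobserved item j for user u:
    u, i, j = 3, 42, 7
    print(score(u, i) - score(u, j))  # BPR maximises the sigmoid of this margin
    ```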

    Acute changes in clinical breast measurements following bra removal: implications for surgical practice

    Background: Stable measurement of breast position is crucial for objective pre-operative planning and post-operative evaluation. In clinical practice, breast measures are often taken immediately following bra removal. However, research shows that restrictive clothing (such as a bra) can cause acute anatomical changes, leading to the hypothesis that clinical breast measures may change over time following bra removal. This cross-sectional observational study aimed to provide simple clinical guidelines for the measurement of breast position which account for any acute changes in breast position following bra removal.
    Methods: Thirteen participants of varying breast sizes had markers attached to their thorax and nipples to determine clinical measures of sternal notch to nipple distance, internipple distance, breast projection, and vertical nipple position. The positions of these landmarks were recorded using a motion capture system during 10 min of controlled sitting following bra removal.
    Results: Internipple distance and breast projection remained unchanged over 10 min, while the resultant sternal notch to nipple distance extended by 2.8 mm in 299 s (right) and 3.7 mm in 348 s (left). The greatest change occurred in the vertical nipple position, which migrated an average of 4.1 mm in 365 s (right) and 6.6 mm in 272 s (left); for one participant, vertical migration was up to 20 mm.
    Conclusions: Internipple distance and breast projection can be measured first following bra removal, followed by sternal notch to nipple distance; any measures associated with the vertical nipple position should be made more than 6 min after bra removal. These guidelines have implications for breast surgery, particularly for unilateral reconstruction based on the residual breast position.
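
    For illustration only (this is not the study's published pipeline), the snippet below shows how such clinical measures could be derived from the 3D marker coordinates a motion capture system exports. All coordinate values and the axis convention are invented assumptions.

    ```python
    # Illustrative derivation of the four clinical measures from 3D markers.
    # Assumed axes: x = mediolateral, y = anteroposterior, z = vertical (mm).
    import numpy as np

    sternal_notch = np.array([0.0, 20.0, 1400.0])
    nipple_r      = np.array([-95.0, 110.0, 1260.0])
    nipple_l      = np.array([97.0, 112.0, 1255.0])

    sn_to_nipple_r = np.linalg.norm(nipple_r - sternal_notch)  # 3D distance
    internipple    = np.linalg.norm(nipple_r - nipple_l)
    projection_r   = nipple_r[1] - sternal_notch[1]  # anterior offset from thorax
    vertical_r     = nipple_r[2]                     # the slowest measure to settle

    print(f"SN-to-nipple (right): {sn_to_nipple_r:.1f} mm")
    print(f"internipple distance: {internipple:.1f} mm")
    ```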

    TransNFCM: Translation-Based Neural Fashion Compatibility Modeling

    Identifying mix-and-match relationships between fashion items is a pressing task for fashion e-commerce recommender systems, as it can significantly enhance user experience and satisfaction. However, because inferring the rich yet complicated set of compatibility patterns in a large e-commerce corpus of fashion items is challenging, this task is still underexplored. Inspired by recent advances in multi-relational knowledge representation learning and deep neural networks, this paper proposes a novel Translation-based Neural Fashion Compatibility Modeling (TransNFCM) framework, which jointly optimizes fashion item embeddings and category-specific complementary relations in a unified space in an end-to-end manner. TransNFCM places items in a unified embedding space where a category-specific relation (category-comp-category) is modeled as a vector translation operating on the embeddings of compatible items from the corresponding categories. In this way, we not only capture the specific notion of compatibility conditioned on a specific pair of complementary categories, but also preserve the global notion of compatibility. We also design a deep fashion item encoder which exploits the complementary characteristics of visual and textual features to represent the fashion products. To the best of our knowledge, this is the first work that uses category-specific complementary relations to model the category-aware compatibility between items in a translation-based embedding space. Extensive experiments demonstrate the effectiveness of TransNFCM over state-of-the-art methods on two real-world datasets.
    Comment: Accepted at AAAI 2019.
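
    A hypothetical PyTorch sketch of the core translation idea: items are encoded into one embedding space, each ordered category pair owns a relation vector r, and compatibility is scored as the negative distance ||e_i + r - e_j||. The plain linear encoder over precomputed visual+text features is a stand-in assumption for the paper's deep item encoder.

    ```python
    # Translation-based compatibility scoring, TransNFCM-style (illustrative).
    import torch
    import torch.nn as nn

    class TransCompat(nn.Module):
        def __init__(self, feat_dim=512, emb_dim=64, n_category_pairs=20):
            super().__init__()
            self.encode = nn.Linear(feat_dim, emb_dim)          # item encoder
            self.rel = nn.Embedding(n_category_pairs, emb_dim)  # category-comp-category

        def forward(self, feat_i, feat_j, pair_id):
            e_i, e_j = self.encode(feat_i), self.encode(feat_j)
            r = self.rel(pair_id)
            # Higher score = more compatible under this category pair's relation.
            return -torch.norm(e_i + r - e_j, dim=-1)

    model = TransCompat()
    feat_i = torch.randn(4, 512)              # e.g. a batch of tops
    feat_j = torch.randn(4, 512)              # candidate bottoms
    pair = torch.zeros(4, dtype=torch.long)   # id of the (top, bottom) relation
    print(model(feat_i, feat_j, pair))        # trained with a ranking loss in practice
    ```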