Multi-modal joint embedding for fashion product retrieval
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Finding a product in the fashion world can be a daunting task. Every day, e-commerce sites are updated with thousands of images and their associated metadata (textual information), deepening the problem: it becomes akin to finding a needle in a haystack. In this paper, we leverage both the images and the textual metadata and propose a joint multi-modal embedding that maps both text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to perform retrieval in this latent space, which is both efficient and accurate. We train this embedding using large-scale real-world e-commerce data by both maximizing the similarity between related products and using auxiliary classification networks that encourage the embedding to have semantic meaning. We compare against existing approaches and show significant improvements in retrieval tasks on a large-scale e-commerce dataset. We also provide an analysis of the different types of metadata. Peer reviewed. Postprint (author's final draft).
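The retrieval step described above reduces to nearest-neighbor search in the shared latent space: because text and images map into the same space, a query embedding from either modality can be ranked against the catalog. A minimal pure-Python sketch of that ranking (the catalog, embeddings, and function names are illustrative, not from the paper):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_embedding, catalog, k=2):
    """Rank catalog products by similarity to a query embedding.

    `catalog` maps product id -> embedding. In the paper's setting the
    query embedding may come from either the image or the text encoder,
    since both are trained to map into the same latent space.
    """
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
                    reverse=True)
    return [pid for pid, _ in ranked[:k]]

# Toy catalog of 3-D embeddings (hypothetical values).
catalog = {
    "red-dress":  [0.9, 0.1, 0.0],
    "blue-jeans": [0.1, 0.9, 0.2],
    "red-skirt":  [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]   # e.g. embedding of the text query "red dress"
print(retrieve(query, catalog))  # → ['red-dress', 'red-skirt']
```

In practice the embeddings would be hundreds of dimensions and the search accelerated with an approximate nearest-neighbor index, but the ranking principle is the same.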
Multi-modal fashion product retrieval
Finding a product in the fashion world can be a daunting task. Every day, e-commerce sites are updated with thousands of images and their associated metadata (textual information), deepening the problem. In this paper, we leverage both the images and the textual metadata and propose a joint multi-modal embedding that maps both text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to effectively perform retrieval in this latent space. We compare against existing approaches and show significant improvements in retrieval tasks on a large-scale e-commerce dataset.
Multi-modal embedding for main product detection in fashion
Best Paper Award at the 2017 IEEE International Conference on Computer Vision Workshops. We present an approach to detect the main product in fashion images by exploiting the textual metadata associated with each image. Our approach is based on a Convolutional Neural Network and learns a joint embedding of object proposals and textual metadata to predict the main product in the image. We additionally use several complementary classification and overlap losses to improve training stability and performance. Our tests on a large-scale dataset taken from eight e-commerce sites show that our approach outperforms strong baselines and accurately detects the main product across a wide diversity of challenging fashion images.
BASS: boundary-aware superpixel segmentation
We propose a new superpixel algorithm that exploits the boundary information of an image, since objects in images can generally be described by their boundaries. Our approach first estimates the boundaries and uses them to place superpixel seeds in the areas where boundaries are most dense. We then minimize an energy function to expand the seeds into full superpixels. In addition to standard terms such as color consistency and compactness, we propose using the geodesic distance, which concentrates small superpixels in regions of the image with more information while letting larger superpixels cover more homogeneous regions. By improving both the boundary-based initialization and the coherency of the superpixels through geodesic distances, we maintain the coherency of the image structure with fewer superpixels than other approaches. The resulting algorithm yields smaller Variation of Information metrics on seven different datasets while maintaining Undersegmentation Error values similar to state-of-the-art methods.
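The geodesic distance mentioned above accumulates image change along a path, so two pixels separated by a strong edge are "far" even when spatially adjacent, which is what keeps superpixels from leaking across object boundaries. A small illustrative sketch (Dijkstra on a 2-D intensity grid; this is the general idea, not the paper's exact energy function):

```python
import heapq

def geodesic_distances(intensity, seed):
    """Geodesic distance from a seed pixel over a 2-D intensity grid.

    Moving between 4-connected neighbors costs the absolute intensity
    difference, so paths crossing strong edges accumulate large distance.
    """
    h, w = len(intensity), len(intensity[0])
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + abs(intensity[nr][nc] - intensity[r][c])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# A flat region (0s) next to a bright region (10s): pixels across the
# edge are geodesically far even though they are spatially close.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10]]
d = geodesic_distances(img, (0, 0))
print(d[(0, 1)], d[(0, 2)])  # → 0.0 10.0
```

Within the flat region the distance stays zero, so a seed there can expand cheaply; crossing into the bright region costs the full edge strength, which is why seeds placed near dense boundaries produce small, boundary-respecting superpixels.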
An ideal mass assignment scheme for measuring the Power Spectrum with FFTs
In measuring the power spectrum of the distribution of large numbers of dark matter particles in simulations, or of galaxies in observations, one has to use Fast Fourier Transforms (FFTs) for computational efficiency. However, because this method requires assigning mass onto grid points, the power spectrum $\langle |\delta^f(k)|^2 \rangle$ measured with an FFT is not the true power spectrum but one convolved with a window function in Fourier space. In a recent paper, Jing (2005) proposed an elegant algorithm to deconvolve the sampling effects of the window function and extract the true power spectrum, and tests using N-body simulations show that this algorithm works very well for the three most commonly used mass assignment functions: the Nearest Grid Point (NGP), Cloud In Cell (CIC) and Triangular Shaped Cloud (TSC) methods. In this paper, rather than trying to deconvolve the sampling effects of the window function, we propose selecting a particular function for the mass assignment that minimizes these effects. An ideal window function should fulfill the following criteria: (i) compact, top-hat-like support in Fourier space to minimize the sampling effects; (ii) compact support in real space to allow a fast and computationally feasible mass assignment onto grids. We find that the scale functions of Daubechies wavelet transformations are good candidates for this purpose. Our tests using data from the Millennium Simulation show that the true power spectrum of dark matter can be accurately measured at a level better than 2% up to , without applying any deconvolution. The new scheme is especially valuable for measurements of higher-order statistics, e.g. the bi-spectrum. Comment: 17 pages, 3 figures. Accepted for publication in ApJ; matches the accepted version.
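The mass assignment step the abstract refers to can be illustrated with the simplest of the standard schemes it names, Cloud In Cell: each particle's mass is split linearly between the nearest grid points, and it is exactly this window whose Fourier-space convolution the proposed Daubechies-based scheme avoids. A minimal 1-D sketch (illustrative only; the function and variable names are not from the paper):

```python
def cic_assign(positions, masses, ngrid, boxsize):
    """Cloud-In-Cell mass assignment of particles onto a periodic 1-D grid.

    Each particle's mass is shared linearly between the two grid points
    whose cell centers bracket it. The resulting density field is the true
    field convolved with the triangular CIC window, which is the sampling
    effect the paper's wavelet-based window is designed to minimize.
    """
    grid = [0.0] * ngrid
    cell = boxsize / ngrid
    for x, m in zip(positions, masses):
        s = (x / cell - 0.5) % ngrid   # position in grid units, cell-centered
        i = int(s)                     # left grid point
        frac = s - i                   # fraction toward the right point
        grid[i % ngrid] += m * (1.0 - frac)
        grid[(i + 1) % ngrid] += m * frac
    return grid

# One unit-mass particle at x = 0.75 on a 4-cell grid of box size 4:
# cell centers sit at 0.5, 1.5, 2.5, 3.5, so the mass splits 75/25
# between the first two cells.
print(cic_assign([0.75], [1.0], ngrid=4, boxsize=4.0))
# → [0.75, 0.25, 0.0, 0.0]
```

NGP would instead dump all the mass on the single nearest point (a sharper real-space window, worse Fourier-space leakage), while TSC spreads it over three points; the paper's Daubechies scale functions extend this trade-off toward a window that is nearly top-hat in Fourier space while still compact in real space.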
Controlled Interfacial Reactions and Superior Mechanical Properties of High Energy Ball Milled/Spark Plasma Sintered Ti–6Al–4V–Graphene Composite
The ball milling process has become one of the effective methods for dispersing graphene nanoplates (GNPs) uniformly into a matrix; however, there are often serious issues with the structural integrity of the GNPs and their interfacial reactions with the matrix. Herein, GNPs/Ti-6Al-4V (GNPs/TC4) composites are synthesized using high energy ball milling (HEBM) and spark plasma sintering. The effects of ball milling on the microstructural evolution and interfacial reactions of the GNPs/TC4 composite powders during HEBM are investigated. As ball milling time increases, the particle size of TC4 first increases (to ≈104.15 μm at 5 h) but then decreases to ≈1.5 μm (15 h), much smaller than that of the original TC4 powders (≈86.8 μm). TiC phases form in situ on the surfaces of the TC4 particles when the ball milling time reaches 10 h. The GNPs/TC4 composites exhibit a 36–103% increase in compressive yield strength and a 57–78% increase in hardness over the TC4 alloy, whereas the ductility is reduced from 28% to 7% as ball milling time increases from 2 to 15 h. A good balance between high strength (1.9 GPa) and ductility (17%) of the GNPs/TC4 composites is achieved when the ball milling time is 10 h, attributed to the synergistic effects of grain refinement strengthening, solid solution strengthening, and load transfer strengthening from the GNPs and the in situ formed TiC.
N2O isotopocule measurements using laser spectroscopy: analyzer characterization and intercomparison
For the past two decades, the measurement of nitrous oxide (N2O) isotopocules – isotopically substituted molecules 14N15N16O, 15N14N16O and 14N14N18O of the main isotopic species 14N14N16O – has been a promising technique for understanding N2O production and consumption pathways. The coupling of non-cryogenic and tuneable light sources with different detection schemes, such as direct absorption quantum cascade laser absorption spectroscopy (QCLAS), cavity ring-down spectroscopy (CRDS) and off-axis integrated cavity output spectroscopy (OA-ICOS), has enabled the production of commercially available and field-deployable N2O isotopic analyzers. In contrast to traditional isotope-ratio mass spectrometry (IRMS), these instruments are inherently selective for position-specific 15N substitution and provide real-time data, with minimal or no sample pretreatment, which is highly attractive for process studies.
Here, we compared the performance of N2O isotope laser spectrometers with the three most common detection schemes: OA-ICOS (N2OIA-30e-EP, ABB – Los Gatos Research Inc.), CRDS (G5131-i, Picarro Inc.) and QCLAS (dual QCLAS and preconcentration, trace gas extractor (TREX)-mini QCLAS, Aerodyne Research Inc.). For each instrument, the precision, drift and repeatability of N2O mole fraction [N2O] and isotope data were tested. The analyzers were then characterized for their dependence on [N2O], gas matrix composition (O2, Ar) and spectral interferences caused by H2O, CO2, CH4 and CO to develop analyzer-specific correction functions. Subsequently, a simulated two-end-member mixing experiment was used to compare the accuracy and repeatability of corrected and calibrated isotope measurements that could be acquired using the different laser spectrometers.
Our results show that N2O isotope laser spectrometer performance is governed by an interplay between instrumental precision, drift, matrix effects and spectral interferences. To retrieve compatible and accurate results, it is necessary to include appropriate reference materials following the identical treatment (IT) principle during every measurement. Remaining differences between sample and reference gas compositions have to be corrected by applying analyzer-specific correction algorithms. These matrix and trace gas correction equations vary considerably according to N2O mole fraction, complicating the procedure further. Thus, researchers should strive to minimize differences in composition between sample and reference gases. In closing, we provide a calibration workflow to guide researchers in the operation of N2O isotope laser spectrometers in order to acquire accurate N2O isotope analyses. We anticipate that this workflow will assist in applications where matrix and trace gas compositions vary considerably (e.g., laboratory incubations, N2O liberated from wastewater or groundwater), as well as extend to future analyzer models and instruments focusing on isotopic species of other molecules.
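The two-end-member mixing experiment mentioned above rests on simple isotope mass balance: the delta value of a mixture is the concentration-weighted mean of the end members' delta values. A hedged sketch of that calculation (the numbers and function name are hypothetical, not the study's calibration code):

```python
def mixing_delta(c1, d1, c2, d2, f1):
    """Mole fraction and delta value of a two-end-member gas mixture.

    c1, c2 : N2O mole fractions of the end members (e.g. in ppb)
    d1, d2 : their delta values (per mil)
    f1     : volume fraction of end member 1 in the mixture

    Simple mass-balance model of the two-end-member mixing concept used
    to test analyzer accuracy; real data would first be corrected for
    matrix effects and spectral interferences as described in the text.
    """
    f2 = 1.0 - f1
    c_mix = f1 * c1 + f2 * c2
    d_mix = (f1 * c1 * d1 + f2 * c2 * d2) / c_mix
    return c_mix, d_mix

# Mixing an ambient-like end member (330 ppb, 0 per mil) with an
# isotopically enriched source (1000 ppb, +50 per mil) in equal parts
# (hypothetical values):
c, d = mixing_delta(330.0, 0.0, 1000.0, 50.0, 0.5)
print(round(c, 1), round(d, 2))  # → 665.0 37.59
```

Note the mixture's delta is pulled toward the end member contributing more N2O, not the simple average of the two delta values; an analyzer whose corrected measurements reproduce this curve across mixing ratios passes the accuracy test the study describes.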