
    Ensemble of Different Approaches for a Reliable Person Re-identification System

    An ensemble of approaches for reliable person re-identification is proposed in this paper. The ensemble is built by combining widely used person re-identification systems based on different color spaces with several variants of state-of-the-art approaches introduced here. Different descriptors are tested, and both texture and color features are extracted from the images; the descriptors are then compared using different distance measures (e.g., the Euclidean distance, the angle between feature vectors, and the Jeffrey distance). To improve performance, a method based on skeleton detection, extracted from the depth map, is also applied when a depth map is available. The proposed ensemble is validated on three widely used datasets (CAVIAR4REID, IAS, and VIPeR), keeping the parameter set of each approach constant across all tests to avoid overfitting and to demonstrate that the system can be considered a general-purpose person re-identification system. Experimental results show significant improvements over the baseline approaches. The source code for the approaches tested in this paper will be available at https://www.dei.unipd.it/node/2357 and http://robotics.dei.unipd.it/reid/
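    The distance measures named in the abstract can be sketched in a few lines. The sketch below is a plausible reading, not the paper's implementation: the Jeffrey distance here is the symmetric, smoothed KL-type divergence commonly used to compare colour/texture histograms in retrieval, and the toy histograms are invented for illustration.

```python
import math

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetric KL-type divergence between two normalized histograms
    (one common definition of the 'Jeffrey distance' in image retrieval;
    the exact variant used in the paper is an assumption)."""
    d = 0.0
    for pi, qi in zip(p, q):
        m = 0.5 * (pi + qi) + eps          # smoothed midpoint bin
        d += pi * math.log((pi + eps) / m) + qi * math.log((qi + eps) / m)
    return d

def euclidean(p, q):
    """Plain Euclidean distance between feature vectors."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def angle(p, q):
    """Angle (in radians) between two feature vectors."""
    dot = sum(pi * qi for pi, qi in zip(p, q))
    norm_p = math.sqrt(sum(pi * pi for pi in p))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_q))))

# Toy normalized colour histograms for two person images (illustrative only):
h1 = [0.2, 0.5, 0.3]
h2 = [0.25, 0.45, 0.3]
```

    An ensemble would typically normalise each measure's scores before fusing them, since the three distances live on different scales.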

    Color Spatial Arrangement for Image Retrieval by Visual Similarity


    Colour texture classification from colour filter array images using various colour spaces

    This paper focuses on the classification of colour textures acquired by single-sensor colour cameras. In such cameras, the Colour Filter Array (CFA) makes each photosensor sensitive to only one colour component, and CFA images must be demosaiced to estimate the final colour images. We show that demosaicing is detrimental to the textural information because it affects colour texture descriptors such as Chromatic Co-occurrence Matrices (CCMs). However, it remains desirable to exploit chromatic information for colour texture classification. This information is incompletely defined in CFA images, in which each pixel is associated with a single colour component, so extracting standard colour texture descriptors from CFA images without demosaicing is a challenge. We propose to form a pair of quarter-size colour images directly from CFA images without any estimation, and then to compute the CCMs of these quarter-size images. This allows textures to be compared by means of their CCM-based similarity in texture classification or retrieval schemes, while retaining the ability to use different colour spaces. Experimental results on benchmark colour texture databases show the effectiveness of the proposed approach for texture classification, and a complexity study highlights its computational efficiency.
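    The quarter-size construction can be illustrated for an RGGB Bayer pattern: the CFA contains one red, two green, and one blue sample per 2x2 cell, so the four subsampled planes can be regrouped into two quarter-size colour images with no interpolation. The pairing of the red and blue planes with each of the two green planes is one plausible reading of the paper's construction, not a verified reproduction of it.

```python
def cfa_to_quarter_images(cfa):
    """Split an RGGB Bayer CFA image (list of rows, even dimensions) into
    two quarter-size colour images without demosaicing. Pairing (R, G1, B)
    and (R, G2, B) is an assumption about the paper's exact scheme."""
    R  = [row[0::2] for row in cfa[0::2]]   # red samples: even rows, even cols
    G1 = [row[1::2] for row in cfa[0::2]]   # green samples: even rows, odd cols
    G2 = [row[0::2] for row in cfa[1::2]]   # green samples: odd rows, even cols
    B  = [row[1::2] for row in cfa[1::2]]   # blue samples: odd rows, odd cols
    h, w = len(R), len(R[0])
    img1 = [[(R[y][x], G1[y][x], B[y][x]) for x in range(w)] for y in range(h)]
    img2 = [[(R[y][x], G2[y][x], B[y][x]) for x in range(w)] for y in range(h)]
    return img1, img2

# A 4x4 CFA yields two 2x2 colour images, each pixel a true-sample (R, G, B) triple.
cfa = [[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]]
img1, img2 = cfa_to_quarter_images(cfa)
```

    CCMs computed on `img1` and `img2` then use only measured values, which is the point of avoiding demosaicing.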

    Forward Global Photometric Calibration of the Dark Energy Survey

    Many scientific goals for the Dark Energy Survey (DES) require calibration of optical/NIR broadband b = grizY photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy the systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a "Forward Global Calibration Method (FGCM)" for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broad-band survey imaging itself, together with models of the instrument and atmosphere, to estimate the spatial and temporal dependence of the passbands of individual DES survey exposures. "Standard" passbands are chosen that are typical of the passbands encountered during the survey. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude m_b^std in the standard system. This "chromatic correction" to the standard system is necessary to achieve sub-percent calibrations. The FGCM achieves reproducible and stable photometric calibration of standard magnitudes m_b^std of stellar sources over the multi-year Y3A1 data sample with residual random calibration errors of σ = 5-6 mmag per exposure. The accuracy of the calibration is uniform across the 5000 deg^2 DES footprint to within σ = 7 mmag. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than 5 mmag for main sequence stars with 0.5 < g-i < 3.0.
    Comment: 25 pages, submitted to A
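    The chromatic correction has a simple schematic form: integrate an estimate of the source spectrum through both the per-exposure passband and the standard passband, and take the magnitude difference. The sketch below is a toy version of that idea (trapezoidal integration, zero points omitted), not the DES FGCM pipeline.

```python
import math

def synth_mag(wl, flux, passband):
    """Synthetic magnitude of a spectrum through a passband via a
    trapezoidal integral; the zero point is omitted since only
    magnitude differences matter here."""
    num = 0.0
    for i in range(len(wl) - 1):
        dw = wl[i + 1] - wl[i]
        num += 0.5 * (flux[i] * passband[i] + flux[i + 1] * passband[i + 1]) * dw
    return -2.5 * math.log10(num)

def chromatic_correction(wl, src, obs_pb, std_pb):
    """Offset taking a magnitude measured through the per-exposure
    passband `obs_pb` to the standard passband `std_pb`, for a source of
    spectral shape `src` (a schematic version of the FGCM idea)."""
    return synth_mag(wl, src, std_pb) - synth_mag(wl, src, obs_pb)

wl  = [1.0, 2.0, 3.0]          # toy wavelength grid
src = [1.0, 1.0, 1.0]          # flat toy source spectrum
```

    When the per-exposure and standard passbands coincide, the correction is zero by construction; it grows with the mismatch between the two passbands weighted by the source's spectral shape.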

    Advanced content-based semantic scene analysis and information retrieval: the SCHEMA project

    The aim of the SCHEMA Network of Excellence is to bring together a critical mass of universities, research centres, industrial partners and end users in order to design a reference system for content-based semantic scene analysis, interpretation and understanding. Relevant research areas include: content-based multimedia analysis and automatic annotation of semantic multimedia content, combined textual and multimedia information retrieval, the semantic web, the MPEG-7 and MPEG-21 standards, user interfaces and human factors. In this paper, recent advances in content-based analysis, indexing and retrieval of digital media within the SCHEMA Network are presented. These advances will be integrated into the SCHEMA module-based, expandable reference system.

    Human-Centered Content-Based Image Retrieval

    Retrieval of images that lack (suitable) annotations cannot be achieved through traditional Information Retrieval (IR) techniques. Access to such collections can instead be achieved by applying computer vision techniques to the IR problem, an approach known as Content-Based Image Retrieval (CBIR). In contrast with most purely technological approaches, the thesis Human-Centered Content-Based Image Retrieval approaches the problem from a human/user-centered perspective. Psychophysical experiments were conducted in which people were asked to categorize colors. The data gathered from these experiments were fed to a Fast Exact Euclidean Distance (FEED) transform (Schouten & Van den Broek, 2004), which enabled the segmentation of color space based on human perception (Van den Broek et al., 2008). This unique color space segmentation was exploited for texture analysis and image segmentation, and subsequently for full-featured CBIR. In addition, a unique CBIR benchmark was developed (Van den Broek et al., 2004, 2005). This benchmark was used to explore what and how several parameters (e.g., color and distance measures) of the CBIR process influence retrieval results. In contrast with other research, users' judgements were used as the metric. The online IR and CBIR system Multimedia for Art Retrieval (M4ART) (URL: http://www.m4art.org) has been (partly) founded on the techniques discussed in this thesis.
    References:
    - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2004). The utilization of human color categorization for content-based image retrieval. Proceedings of SPIE (Human Vision and Electronic Imaging), 5292, 351-362. [see also Chapter 7]
    - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2005). Content-Based Image Retrieval Benchmarking: Utilizing Color Categories and Color Distributions. Journal of Imaging Science and Technology, 49(3), 293-301. [see also Chapter 8]
    - Broek, E.L. van den, Schouten, Th.E., and Kisters, P.M.F. (2008). Modeling Human Color Categorization. Pattern Recognition Letters, 29(8), 1136-1144. [see also Chapter 5]
    - Schouten, Th.E. and Broek, E.L. van den (2004). Fast Exact Euclidean Distance (FEED) transformation. In J. Kittler, M. Petrou, and M. Nixon (Eds.), Proceedings of the 17th IEEE International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 594-597. August 23-26, Cambridge, United Kingdom. [see also Appendix C]
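    Segmenting color space by human categorization can be sketched as nearest-centroid assignment: each pixel is mapped to the category whose prototype color is closest. The category names and centroid values below are illustrative placeholders, not the experimentally derived segmentation from the thesis.

```python
def categorize_pixel(rgb, categories):
    """Assign an RGB pixel to the nearest colour-category centroid by
    squared Euclidean distance. A real human-perception-based
    segmentation would use centroids fitted to psychophysical data."""
    best, best_d = None, float("inf")
    for name, (r, g, b) in categories.items():
        d = (rgb[0] - r) ** 2 + (rgb[1] - g) ** 2 + (rgb[2] - b) ** 2
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical category prototypes (NOT the thesis's measured values):
CATEGORIES = {"red":    (200,  30,  40),
              "green":  ( 40, 160,  60),
              "blue":   ( 40,  60, 190),
              "yellow": (230, 220,  50)}
```

    Quantizing every pixel this way turns an image into a histogram over a handful of perceptual categories, which is what makes category-based color descriptors compact.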

    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple constant metrics, whereas others learn models to obtain optimised metrics. Some build models based on local colour or texture information, and others model the gait of people. In general, the main objective of all these approaches is to achieve higher accuracy rates and lower computational costs. This study summarises several developments in recent literature and discusses the various available methods used in person re-identification; specifically, their advantages and disadvantages are compared.
    Comment: Published 201
