6,587 research outputs found

    3D Tracking Using Multi-view Based Particle Filters

    Get PDF
    Visual surveillance and monitoring of indoor environments using multiple cameras has become a field of great activity in computer vision. Usual 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras. As 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, occlusions), 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. To overcome this problem, this paper proposes a Bayesian framework for combining 2D low-level cues from multiple cameras directly into the 3D world through 3D Particle Filters. This method makes it possible to estimate the probability of a certain volume being occupied by a moving object, and thus to segment and track multiple people across the monitored area. The proposed method is developed on the basis of simple, binary 2D moving-region segmentation on each camera, considered as different state observations. In addition, the method proves well suited for integrating additional 2D low-level cues to increase system robustness to occlusions: in this line, a naïve color-based (HSI) appearance model has been integrated, resulting in clear performance improvements when dealing with complex scenarios.
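The core of such a framework -- weighting 3D particles by the binary foreground masks observed in each camera -- can be sketched as follows. The projection matrices, mask sizes, and multinomial resampling scheme here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P; return pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def weight_particles(particles, cameras, masks):
    """Weight each 3D particle by the product of binary foreground
    mask values at its projection in every camera view."""
    w = np.ones(len(particles))
    for P, mask in zip(cameras, masks):
        for i, X in enumerate(particles):
            u, v = project(P, X)
            ui, vi = int(round(u)), int(round(v))
            h, wd = mask.shape
            # Particles projecting outside the image get zero support
            inside = 0 <= vi < h and 0 <= ui < wd
            w[i] *= mask[vi, ui] if inside else 0.0
    s = w.sum()
    return w / s if s > 0 else np.full(len(particles), 1.0 / len(particles))

def resample(particles, weights, rng):
    """Multinomial resampling step of the particle filter."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

A particle supported by the foreground masks of all views keeps its weight; a particle contradicted by any view is discarded at resampling.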

    Digital Color Imaging

    Full text link
    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
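As a concrete instance of the vector-space treatment of device models, the standard sRGB decoding and linear RGB-to-XYZ transform (IEC 61966-2-1, D65 white point) can be written in a few lines:

```python
import numpy as np

# Standard sRGB (IEC 61966-2-1) linear-RGB to CIE XYZ matrix, D65 white point.
M_SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_linear(c):
    """Undo the sRGB transfer function (gamma), elementwise."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_to_xyz(rgb):
    """Map an sRGB triplet in [0, 1] to CIE XYZ tristimulus values."""
    return M_SRGB_TO_XYZ @ srgb_to_linear(rgb)
```

The sRGB white (1, 1, 1) maps to the D65 white point, with luminance Y = 1 by construction of the matrix rows.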

    Color Constancy Convolutional Autoencoder

    Full text link
    In this paper, we study the importance of pre-training for the generalization capability in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite-loss function. This enables us to solve the data scarcity problem and achieve results competitive with the state of the art, while requiring far fewer parameters, on the ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which has both field and non-field scenes acquired by three different camera models. Comment: 6 pages, 1 figure, 3 tables
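The paper's exact composite loss is not given in the abstract; a plausible sketch combines a reconstruction term with the recovery angular error that is the standard evaluation metric in color constancy. The weighting `alpha` and the overall form below are assumptions for illustration only:

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error (degrees) between estimated and
    ground-truth illuminant vectors -- the standard color constancy metric."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def composite_loss(recon, target, est_illum, gt_illum, alpha=0.5):
    """Hypothetical composite loss: an MSE reconstruction term plus an
    illuminant angular-error term, blended by the weight alpha."""
    mse = np.mean((np.asarray(recon) - np.asarray(target)) ** 2)
    return (1.0 - alpha) * mse + alpha * angular_error(est_illum, gt_illum)
```

Because the angular error is invariant to illuminant magnitude, only the chromatic direction of the estimate is penalized, which matches how color constancy methods are evaluated.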

    Camera characterization for improving color archaeological documentation

    Full text link
    [EN] Determining the correct color is essential for proper cultural heritage documentation and cataloging. However, the methodology used in most cases limits the results, since it is based either on perceptual procedures or on the application of color profiles in digital processing software. The objective of this study is to establish a rigorous procedure, from the colorimetric point of view, for the characterization of cameras following different polynomial models. Once the camera is characterized, users obtain output images in the sRGB space that are independent of the camera sensor. In this article we report on the pyColorimetry software that was developed and tested taking into account the recommendations of the Commission Internationale de l'Eclairage (CIE). This software allows users to control the entire digital image processing and colorimetric data workflow, including the rigorous processing of raw data. We applied the methodology to a picture of Levantine rock art motifs in Remigia Cave (Spain), which is considered part of a UNESCO World Heritage Site. Three polynomial models were tested for the transformation between color spaces. The outcomes obtained were satisfactory and promising, especially with RAW files. The best results were obtained with a second-order polynomial model, achieving residuals below three CIELAB units. We highlight several factors that must be taken into account, such as the geometry of the shot and the light conditions, which are determining factors for the correct characterization of a digital camera. The authors gratefully acknowledge the support from the Spanish Ministerio de Economia y Competitividad through project HAR2014-59873-R. The authors would also like to acknowledge the comments from colleagues at the Photogrammetry & Laser Scanning Research Group (GIFLE) and the fruitful discussions provided by archaeologist Dr. Esther Lopez-Montalvo. Molada Tebar, A.; Lerma García, J. L.; Marqués Mateu, Á. (2017). Camera characterization for improving color archaeological documentation. Color Research and Application, 43(1):47-57. https://doi.org/10.1002/col.22152
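The second-order polynomial characterization described above amounts to a linear least-squares fit over quadratic RGB terms. A minimal sketch follows; the exact term set and target space are illustrative assumptions (the article fits toward CIE-recommended spaces such as XYZ/CIELAB):

```python
import numpy as np

def poly2_features(rgb):
    """Second-order polynomial expansion of camera RGB values (N, 3) -> (N, 10)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * r, g * g, b * b,
                     r * g, r * b, g * b, np.ones_like(r)], axis=1)

def fit_characterization(rgb, xyz):
    """Least-squares fit of a second-order polynomial model mapping
    camera RGB to target tristimulus values (one coefficient column per channel)."""
    A, _, _, _ = np.linalg.lstsq(poly2_features(rgb), xyz, rcond=None)
    return A

def apply_characterization(A, rgb):
    """Apply a fitted characterization to new RGB samples."""
    return poly2_features(rgb) @ A
```

In practice the fit is computed from chart patches with known colorimetric values, and residuals (e.g. in CIELAB units) are reported on held-out patches.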

    Evaluation of changes in image appearance with changes in displayed image size

    Get PDF
    This research focused on the quantification of changes in image appearance when images are displayed at different sizes on LCD devices. The final results were provided as calibrated Just Noticeable Differences (JNDs) on relevant perceptual scales, allowing the prediction of sharpness and contrast appearance with changes in the displayed image size. A series of psychophysical experiments were conducted to enable appearance predictions. Firstly, a rank-order experiment was carried out to identify the image attributes most affected by changes in displayed image size. Two digital cameras, exhibiting very different reproduction qualities, were employed to capture the same scenes, for the investigation of the effect of the original image quality on image appearance changes. A wide range of scenes with different scene properties was used as a test set for the investigation of image appearance changes with scene type. The outcomes indicated that sharpness and contrast were the most important attributes for the majority of scene types and original image qualities. Appearance-matching experiments were further conducted to quantify changes in perceived sharpness and contrast with respect to changes in the displayed image size. For the creation of sharpness-matching stimuli, a set of frequency-domain filters was designed to provide equal intervals in image quality, taking into account the system's Spatial Frequency Response (SFR) and the observation distance. For the creation of contrast-matching stimuli, a series of spatial-domain S-shaped filters was designed to provide equal intervals in image contrast via gamma adjustments. Five displayed image sizes were investigated. Observers were always asked to match the appearance of the smaller version of each stimulus to its larger reference. Lastly, rating experiments were conducted to validate the derived JNDs in perceptual quality for both sharpness and contrast stimuli. Data obtained from these experiments were finally converted into JND scales for each individual image attribute. Linear functions were fitted to the final data, which allowed the prediction of the appearance of images viewed at larger sizes than those investigated in this research.
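A gamma-based S-shaped contrast manipulation can be sketched as a symmetric curve around mid-grey; the filter family actually used in the experiments is more elaborate, so the form below is only an illustrative assumption:

```python
import numpy as np

def s_curve(img, gamma):
    """S-shaped contrast adjustment on a [0, 1] image, symmetric about
    mid-grey: gamma < 1 pushes values away from 0.5 (more contrast),
    gamma > 1 pulls them toward 0.5 (less contrast). Endpoints are preserved."""
    img = np.asarray(img, float)
    d = 2.0 * img - 1.0                       # recentre to [-1, 1]
    return 0.5 + 0.5 * np.sign(d) * np.abs(d) ** gamma
```

Choosing a sequence of gamma values then yields a family of stimuli spaced in contrast, to which JND scales can later be fitted.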

    Rank-based camera spectral sensitivity estimation

    Get PDF
    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab—a difficult and lengthy procedure—or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve of the sensor. Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to that of the prior art. However, the rank-based method delivers a step change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a “raw mode.” Experiments validate our method.
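The half-space construction can be illustrated directly: every ranked pair of responses contributes one linear constraint on the sensor vector, and the feasible region is the intersection of the resulting half-spaces. The toy three-channel "spectra" below are assumptions for illustration; real sensors live in a much higher-dimensional wavelength space:

```python
import numpy as np

def rank_constraints(spectra, responses):
    """For every pair of stimuli whose camera responses are ranked
    r_i > r_j, the sensor x must satisfy (s_i - s_j) . x > 0.
    Returns the normals of the feasible half-space constraints."""
    order = np.argsort(-np.asarray(responses))   # indices, descending response
    normals = []
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            i, j = order[a], order[b]            # response_i > response_j
            normals.append(spectra[i] - spectra[j])
    return np.array(normals)

def in_feasible_region(normals, sensor):
    """A candidate sensor is feasible when it satisfies every half-space."""
    return bool(np.all(normals @ sensor > 0))
```

The true sensor always satisfies every rank constraint derived from its own responses, while sensors producing a different rank order are excluded.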

    On the practical nature of artificial qualia

    Get PDF
    Proceeding of: 2010 Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2010), Leicester, UK, 29 March - 1 April, 2010. Can machines ever have qualia? Can we build robots with inner worlds of subjective experience? Will qualia experienced by robots be comparable to subjective human experience? Is the young field of Machine Consciousness (MC) ready to answer these questions? In this paper, rather than trying to answer these questions directly, we argue that a formal definition, or at least a functional characterization, of artificial qualia is required in order to establish valid engineering principles for synthetic phenomenology (SP). Understanding what might be the differences, if any, between natural and artificial qualia is one of the first questions to be answered. Furthermore, if an interim and less ambitious definition of artificial qualia can be outlined, the corresponding model can be implemented and used to shed some light on the very nature of consciousness. In this work we explore current trends in MC and SP from the perspective of artificial qualia, attempting to identify key features that could contribute to a practical characterization of this concept. We focus specifically on potential implementations of artificial qualia as a means to provide a new interdisciplinary tool for research on natural and artificial cognition. This work was supported in part by the Spanish Ministry of Education under CICYT grant TRA2007-67374-C02-02.

    Color-based 3D particle filtering for robust tracking in heterogeneous environments

    Full text link
    Most multi-camera 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras and/or the observed appearance of objects. However, 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, occlusions, etc.) and, therefore, 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. In this paper, we propose a Bayesian framework for combining 2D low-level cues from multiple cameras directly into the 3D world through 3D Particle Filters. This novel method (direct 3D operation) allows the estimation of the probability of a certain volume being occupied by a moving object, using 2D motion detection and color features as state observations of the Particle Filter framework. For this purpose, an efficient color descriptor has been implemented, which automatically adapts itself to image noise, proving able to deal with changes in illumination and shape variations. The ability of the proposed framework to correctly track multiple 3D objects over time is tested on a real indoor scenario, showing satisfactory results.
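A minimal version of such a color appearance observation -- comparing a candidate region's hue histogram against a reference model with the Bhattacharyya coefficient, a common likelihood choice in color-based particle filters -- might look like this. The bin count and the use of hue alone are illustrative assumptions, not the paper's actual descriptor:

```python
import numpy as np

def hue_histogram(hues, bins=16):
    """Normalised hue histogram over [0, 1), used as a simple appearance model."""
    h, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    h = h.astype(float)
    return h / h.sum() if h.sum() > 0 else h

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms: 1 for identical
    distributions, 0 for disjoint support."""
    return float(np.sum(np.sqrt(p * q)))
```

The coefficient can be turned into a particle weight, e.g. `exp(-lambda * (1 - bhattacharyya(p, q)))`, so that particles over regions matching the target's color model receive higher likelihood.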

    Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums

    Get PDF
    Fine arts refer to a broad spectrum of art formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction aims to create surrogates of the original artwork that faithfully deliver the aesthetics and feel of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards, or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and the Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings in the form of soft copies viewed on displays by answering four questions: (1) what is the impact of the viewing condition and the original on image quality evaluation? (2) can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) can lightweight spectral imaging be used for fine art reproduction? and (4) what is the performance of spectral reproductions compared with reproductions by current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions, and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction. Visual editing and retouching can be both time-consuming and subjective in nature (depending on experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current workflows based on RGB acquisition systems and by (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with those created by spectral imaging and those based on an exemplar-based method.
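In its simplest linear form, the "lightweight spectral imaging" idea -- recovering reflectance spectra from camera responses with a matrix learned on a training chart -- reduces to a least-squares fit. The dimensions and synthetic training data below are illustrative assumptions; practical systems add regularization (e.g. Wiener estimation) and more channels:

```python
import numpy as np

def fit_recovery_matrix(train_refl, train_rgb):
    """Least-squares matrix mapping camera responses back to spectral
    reflectance, learned from a training chart (e.g. a color checker).
    train_rgb: (N, channels); train_refl: (N, wavelengths)."""
    M, _, _, _ = np.linalg.lstsq(train_rgb, train_refl, rcond=None)
    return M

def recover_reflectance(M, rgb):
    """Estimate reflectance spectra for new camera responses."""
    return rgb @ M
```

Recovery is exact only when the reflectances lie in a subspace of dimension no larger than the number of camera channels; real paint spectra only approximately satisfy this, which is why the thesis compares spectral workflows against RGB ones perceptually.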
