
    CED: Color Event Camera Dataset

    Event cameras are novel, bio-inspired visual sensors whose pixels output asynchronous, independent, timestamped spikes at local intensity changes, called 'events'. Event cameras offer advantages over conventional frame-based cameras in terms of latency, high dynamic range (HDR) and temporal resolution. Until recently, event cameras have been limited to outputting events in the intensity channel; however, recent advances have resulted in the development of color event cameras, such as the Color-DAVIS346. In this work, we present and release the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events. CED features a wide variety of indoor and outdoor scenes, which we hope will help drive forward event-based vision research. We also present an extension of the event camera simulator ESIM that enables simulation of color events. Finally, we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to visualise the event stream, and for use in downstream vision applications. Comment: Conference on Computer Vision and Pattern Recognition Workshop
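
    As a rough illustration of the data model described above (the field names and Bayer-style channel layout below are assumptions for demonstration, not the Color-DAVIS346 driver API), a color event can be represented as a tuple of pixel coordinates, timestamp, polarity, and color channel, and naively visualized by summing signed polarities per pixel:

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class ColorEvent:
            """One asynchronous event: a signed local intensity change at a pixel."""
            x: int          # pixel column
            y: int          # pixel row
            t: float        # timestamp in seconds (microsecond resolution in practice)
            polarity: int   # +1 for a brightness increase, -1 for a decrease
            channel: str    # 'R', 'G', or 'B' under a Bayer-style color filter array

        def accumulate(events, height, width):
            """Naive visualization: sum signed polarities per pixel over a time window."""
            img = np.zeros((height, width), dtype=np.float32)
            for ev in events:
                img[ev.y, ev.x] += ev.polarity
            return img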

    Region-based Skin Color Detection

    Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, mainstream techniques operate on individual pixels. This paper presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection method on the popular Compaq dataset (Jones and Rehg, 2002). A clustering technique based on color and spatial distance is used to extract regions from the images, also known as superpixels. In the first step, our technique uses the state-of-the-art non-parametric pixel-based skin color classifier (Jones and Rehg, 2002), which we call the basic skin color classifier. The pixel-based skin color evidence is then aggregated to classify the superpixels. Finally, a Conditional Random Field (CRF) is applied to further improve the results. As the CRF operates over superpixels, the computational overhead is minimal. Our technique achieves a 91.17% true positive rate with a 13.12% false positive rate on the Compaq dataset, tested over approximately 14,000 web images.
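
    A minimal sketch of the aggregation step described above, assuming a per-pixel skin-probability map and a superpixel label map (e.g., from SLIC) are already available; the CRF refinement stage is omitted and the threshold is illustrative:

        import numpy as np

        def superpixel_skin_scores(pixel_prob, superpixels):
            """Aggregate per-pixel skin probabilities into one score per superpixel.

            pixel_prob  : (H, W) float array from a pixel-based skin classifier.
            superpixels : (H, W) int array of superpixel labels (e.g., from SLIC).
            """
            return {int(label): float(pixel_prob[superpixels == label].mean())
                    for label in np.unique(superpixels)}

        def classify_superpixels(scores, threshold=0.5):
            """Label each superpixel as skin (True) or non-skin (False)."""
            return {label: p >= threshold for label, p in scores.items()}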

    Assessment of VINO filters for correcting red–green Color Vision Deficiency

    In our ongoing research on the effectiveness of different passive tools for aiding Color Vision Deficiency (CVD) subjects, we have analyzed the VINO O2 Amp Oxy-Iso glasses using two strategies: 1) 52 observers were studied using four color tests (recognition, arrangement, discrimination, and color-naming); 2) the spectral transmittance of the lenses was used to model the color appearance of natural scenes for different simulated CVD subjects. We have also compared VINO and EnChroma glasses. The spectral transmission of the VINO glasses significantly changed color appearance. This change would allow some CVD subjects, especially deutan subjects, to pass recognition tests but not arrangement tests. To sum up, our results support the hypothesis that glasses with filters are unable to effectively resolve the problems related to color vision deficiency. Funded by the Spanish State Agency of Research (AEI); the Ministry for Economy, Industry and Competitiveness (MINECO) (grant numbers FIS2017-89258-P and DPI2015-64571-R); and European Union FEDER (European Regional Development Funds).
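
    A hedged sketch of the modeling strategy mentioned above: the light reaching the eye is the illuminant times the surface reflectance times the lens transmittance, sampled on a common wavelength grid, and cone excitations follow as inner products with the cone fundamentals. All input arrays here are assumptions for illustration, not measured VINO data:

        import numpy as np

        def cone_excitations(illuminant, reflectance, transmittance, cone_fundamentals):
            """Simulate L, M, S cone responses to a surface seen through a filter.

            All spectral arrays share one wavelength grid, e.g. 400-700 nm in
            10 nm steps. Inputs are illustrative, not measured VINO data.

            illuminant        : (N,) spectral power distribution
            reflectance       : (N,) surface spectral reflectance in [0, 1]
            transmittance     : (N,) filter spectral transmittance in [0, 1]
            cone_fundamentals : (3, N) L, M, S spectral sensitivity curves
            """
            stimulus = illuminant * reflectance * transmittance  # light at the cornea
            return cone_fundamentals @ stimulus                  # (3,) L, M, S responses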

    Combined object recognition approaches for mobile robotics

    There are numerous solutions to simple object recognition problems when the machine is operating under strict environmental conditions (such as controlled lighting). Object recognition in real-world environments, however, poses greater difficulty. Ideally, mobile robots will function in real-world environments without the aid of fiduciary identifiers. More robust methods are therefore needed to perform object recognition reliably. A combined approach of multiple techniques improves recognition results. Active vision and peripheral-foveal vision—systems that are designed to improve the information gathered for the purposes of object recognition—are examined. In addition to active vision and peripheral-foveal vision, five object recognition methods that either make use of some form of active vision or could leverage active vision and/or peripheral-foveal vision systems are also investigated: affine-invariant image patches, perceptual organization, 3D morphable models (3DMMs), active viewpoint, and adaptive color segmentation. The current state of the art in these areas of vision research and observations on areas of future research are presented. Examples of state-of-the-art methods employed in other vision applications that have not been used for object recognition are also mentioned. Lastly, the future direction of the research field is hypothesized.

    Mechanisms of vision in the fruit fly

    Vision is essential to maximize the efficiency of daily tasks such as feeding, avoiding predators or finding mating partners. Drosophila melanogaster is an advantageous model, since it offers tools that allow genetic and neuronal manipulation with high spatial and temporal resolution, which can be combined with behavioral, anatomical and physiological assays. Recent advances have expanded our knowledge of the neural circuitry underlying such important behaviors as color vision (the role of reciprocal inhibition in enhancing the color signal at the level of the ommatidia), motion vision (motion-detection neurones receive both excitatory and inhibitory input), and sensory processing (the role of the central complex in spatial navigation, and in orchestrating information from other senses and the inner state). Research on synergies between pathways is shaping the field.

    Peripheral vision displays: The future

    Several areas of research relating to peripheral vision displays used by aircraft pilots are outlined: fiber optics, display color, and holography. Various capacities and specifications of gas and solid-state lasers are enumerated; these lasers are potential sources of green light for peripheral vision displays. The relative radiance required for rod and cone vision at different wavelengths is presented graphically. Calculated and measured retinal sensitivities (foveal and peripheral) are given for the wavelengths produced by various lasers.

    Hierarchical visual perception and two-dimensional compressive sensing for effective content-based color image retrieval

    Content-based image retrieval (CBIR) has been an active research theme in the computer vision community for over two decades. While the field is relatively mature, significant research is still required to develop solutions for practical applications. One reason that practical solutions have not yet been realized could be a limited understanding of the cognitive aspects of the human vision system. Inspired by three cognitive properties of human vision, namely hierarchical structuring, color perception and embedded compressive sensing, a new CBIR approach is proposed. In the proposed approach, the Hue, Saturation and Value (HSV) color model and the Similar Gray Level Co-occurrence Matrix (SGLCM) texture descriptors are used to generate elementary features. These features then form a hierarchical representation of the data, to which a two-dimensional compressive sensing (2D CS) feature mining algorithm is applied. Finally, a weighted feature matching method is used to perform image retrieval. We present a comprehensive set of results of applying our proposed Hierarchical Visual Perception Enabled 2D CS approach using publicly available datasets and demonstrate the efficacy of our techniques when compared with other recently published, state-of-the-art approaches.
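
    To make the elementary color feature and the matching stage concrete, the sketch below computes a normalized HSV histogram per image and ranks a gallery by L1 distance; the SGLCM texture descriptor, the hierarchical representation, and the 2D CS stage are omitted, and the bin counts are illustrative:

        import numpy as np
        import cv2  # OpenCV, used here only for the BGR-to-HSV conversion

        def hsv_histogram(image_bgr, bins=(8, 4, 4)):
            """Normalized HSV histogram as a simple elementary color feature."""
            hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
            return (hist / hist.sum()).flatten()

        def rank_gallery(query_feat, gallery_feats):
            """Return gallery indices sorted by ascending L1 distance to the query."""
            dists = [np.abs(query_feat - g).sum() for g in gallery_feats]
            return np.argsort(dists)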

    Boundary Contour System and Feature Contour System

    When humans gaze upon a scene, our brains rapidly combine several different types of locally ambiguous visual information to generate a globally consistent and unambiguous representation of Form-And-Color-And-DEpth, or FACADE. This state of affairs raises the question: What new computational principles and mechanisms are needed to understand how multiple sources of visual information cooperate automatically to generate a percept of 3-dimensional form? This chapter reviews some modeling work aimed at developing such a general-purpose vision architecture. This architecture clarifies how scenic data about boundaries, textures, shading, depth, multiple spatial scales, and motion can be cooperatively synthesized in real time into a coherent representation of 3-dimensional form. It embodies a new vision theory that attempts to clarify the functional organization of the visual brain from the lateral geniculate nucleus (LGN) to the extrastriate cortical regions V4 and MT. Moreover, the same processes which are useful for explaining how the visual cortex processes retinal signals are equally valuable for processing noisy multidimensional data from artificial sensors, such as synthetic aperture radar, laser radar, multispectral infrared, magnetic resonance, and high-altitude photographs. These processes generate 3-D boundary and surface representations of a scene. Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    ABANICCO: A New Color Space for Multi-Label Pixel Classification and Color Analysis

    Classifying pixels according to color, and segmenting the respective areas, are necessary steps in any computer vision task that involves color images. The gaps between human color perception, linguistic color terminology, and digital representation are the main challenges for developing methods that properly classify pixels based on color. To address these challenges, we propose a novel method combining geometric analysis, color theory, fuzzy color theory, and multi-label systems for the automatic classification of pixels into 12 conventional color categories, and the subsequent accurate description of each of the detected colors. This method presents a robust, unsupervised, and unbiased strategy for color naming, based on statistics and color theory. The proposed model, "ABANICCO" (AB ANgular Illustrative Classification of COlor), was evaluated through different experiments: its color detection, classification, and naming performance were assessed against the standardized ISCC-NBS color system, and its usefulness for image segmentation was tested against state-of-the-art methods. This empirical evaluation provided evidence of ABANICCO's accuracy in color analysis, showing how our proposed model offers a standardized, reliable, and understandable alternative for color naming that is recognizable by both humans and machines. Hence, ABANICCO can serve as a foundation for successfully addressing a myriad of challenges in various areas of computer vision, such as region characterization, histopathology analysis, fire detection, product quality prediction, object description, and hyperspectral imaging. This research was funded by the Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación, under grant PID2019-109820RB, MCIN/AEI/10.13039/501100011033, co-financed by the European Regional Development Fund (ERDF) "A way of making Europe", to A.M.-B. and L.N.-S.
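
    The model's name points at its mechanism: angular sectors in the chromatic plane. As a loose illustration of angular color classification in CIELAB (the 30-degree sector edges, category names, and achromatic fallback below are assumptions for demonstration, not ABANICCO's published boundaries):

        import math

        # Illustrative 12-way split of the a*b* hue circle; these sector edges and
        # names are assumptions, not ABANICCO's published category boundaries.
        CATEGORIES = ["red", "orange", "yellow", "yellow-green", "green", "teal",
                      "cyan", "blue", "violet", "purple", "magenta", "pink"]

        def classify_lab_pixel(L, a, b, chroma_threshold=10.0):
            """Assign a color name from the hue angle in the a*b* plane.

            Low-chroma pixels fall back to an achromatic label based on lightness.
            """
            chroma = math.hypot(a, b)
            if chroma < chroma_threshold:
                return "black" if L < 25 else "white" if L > 75 else "gray"
            hue = math.degrees(math.atan2(b, a)) % 360.0
            return CATEGORIES[int(hue // 30.0)]  # 12 sectors of 30 degrees each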

    Spectral Filter Selection for Increasing Chromatic Diversity in CVD Subjects

    This paper analyzes, through computational simulations, which spectral filters increase the number of discernible colors (NODC) of subjects with normal color vision, as well as of red–green anomalous trichromats and dichromats. The filters are selected from a set of filters whose spectral transmittances we have modeled. With the selected filters, we have carried out simulations using spectral reflectances captured either by a hyperspectral camera or by a spectrometer. We have also studied the effects of these filters on color coordinates. Finally, we have simulated the results of two widely used color blindness tests: Ishihara and Farnsworth–Munsell 100 Hue (FM100). In these analyses, the selected filters are compared with commercial filters from the EnChroma and VINO companies. The results show that the increase in NODC with the selected filters is not relevant. The simulation results show that none of the chosen filters help color vision deficiency (CVD) subjects to pass the set of color blindness tests studied. These results, obtained using standard colorimetry, support the hypothesis that the use of color filters does not give CVD subjects a perception similar to that of a normal observer. We are grateful to Angela Tate for revising the English text. We are also grateful to the reviewers for their insightful suggestions. This research was supported by the Spanish State Agency for Research (AEI) and the Ministry for Economy, Industry and Competitiveness (MINECO) by means of grant number FIS2017-89258-P with European Union FEDER (European Regional Development Funds) support, and by the Spanish Ministry of Science, Innovation, and Universities, with support from the European Regional Development Funds under grant number RTI2018-094738-B-I00.
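
    One common way to estimate the number of discernible colors is to map every spectral sample to CIELAB and count the occupied cells of a grid whose spacing approximates a just-noticeable difference. The sketch below assumes Lab coordinates have already been computed and uses an illustrative cell size; the paper's exact counting criterion may differ:

        import numpy as np

        def count_discernible_colors(lab_samples, cell_size=1.0):
            """Estimate NODC by counting occupied cells of a quantized CIELAB grid.

            lab_samples : (N, 3) array of L*, a*, b* values, e.g. computed for every
                          reflectance in a hyperspectral scene viewed through a filter.
            cell_size   : grid spacing in CIELAB units; ~1 unit approximates a JND.
                          This criterion is an assumption, not the paper's method.
            """
            cells = np.floor(np.asarray(lab_samples) / cell_size).astype(int)
            return len({tuple(c) for c in cells})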