
    Modeling Camera Effects to Improve Visual Learning from Synthetic Data

    Recent work has focused on generating synthetic imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling variation in the sensor domain. Sensor effects can degrade real images, limiting the generalizability of networks trained on synthetic data and tested in real environments. This paper proposes an efficient, automatic, physically-based augmentation pipeline to vary sensor effects (chromatic aberration, blur, exposure, noise, and color cast) in synthetic imagery. In particular, this paper illustrates that augmenting synthetic training datasets with the proposed pipeline reduces the domain gap between synthetic and real domains for the task of object detection in urban driving scenes.
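    To make the augmentation idea concrete, the following minimal Python sketch applies randomized sensor effects of the kinds listed above. It is an illustrative assumption, not the paper's pipeline: the function name, effect ranges, and use of NumPy/SciPy are choices made here for readability.

    import numpy as np
    from scipy.ndimage import gaussian_filter, shift

    def augment_sensor_effects(img, rng=None):
        """Apply randomized sensor effects to a float32 RGB image in [0, 1]."""
        rng = np.random.default_rng() if rng is None else rng
        out = img.astype(np.float32).copy()

        # Chromatic aberration: shift red and blue slightly relative to green,
        # mimicking lateral color fringing.
        for channel, sign in ((0, 1), (2, -1)):
            dx = sign * rng.uniform(0.0, 1.5)
            out[..., channel] = shift(out[..., channel], (0.0, dx),
                                      order=1, mode="nearest")

        # Blur: Gaussian blur with a small random sigma (spatial axes only).
        sigma = rng.uniform(0.1, 1.2)
        out = gaussian_filter(out, sigma=(sigma, sigma, 0.0))

        # Exposure: a random global gain on linear intensity.
        out = out * rng.uniform(0.7, 1.3)

        # Noise: additive Gaussian noise approximating sensor read noise.
        out = out + rng.normal(0.0, rng.uniform(0.001, 0.02), size=out.shape)

        # Color cast: independent per-channel gains (white-balance error).
        out = out * rng.uniform(0.9, 1.1, size=(1, 1, 3))

        return np.clip(out, 0.0, 1.0)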

    Factors affecting color correction of retroreflective markings

    A nighttime field study was conducted to assess the effects of retroreflective material area, distribution, and color on judgments of conspicuity. Participants, seated in a stationary vehicle, took part in a pairwise comparison of the stimuli. The independent variables included retroreflective power, area and distribution of the retroreflective material, color of the retroreflective material, participant age, and participant gender. The results indicate that color (white, fluorescent yellow-green, and fluorescent red-orange) was a significant factor in the judgment of conspicuity, as might be predicted from the Helmholtz-Kohlrausch effect. In addition, color interacted with the distribution of material at the high level of retroreflective power. The area of the retroreflective material was also significant. The present study, in agreement with a number of previous studies, indicates that color influences the conspicuity of retroreflective stimuli, but that the results are not always in agreement with the color correction factors prescribed in ASTM E 1501. The discrepancy between the empirically derived color correction factors and those prescribed in ASTM E 1501 appears to be attributable to an interaction of stimulus size (subtended angle) and color, which previous studies have not extensively examined. To a lesser degree, the retroreflective power of a material also appears to influence conspicuity. While the ASTM correction factors may be appropriate for intermediate subtended solid angles, particularly for nonsaturated colors, smaller correction factors appear appropriate for markings subtending small angles (approaching point sources), and larger factors for larger subtended angles of saturated stimuli.
    The University of Michigan Industry Affiliation Program for Human Factors in Transportation Safety
    http://deepblue.lib.umich.edu/bitstream/2027.42/91263/1/102869.pd

    Learning Matchable Image Transformations for Long-term Metric Visual Localization

    Long-term metric self-localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations. While experience-based mapping has proven to be an effective technique for bridging the "appearance gap," the number of experiences required for reliable metric localization over days or months can be very large, and methods for reducing the necessary number of experiences are needed for this approach to scale. Taking inspiration from color constancy theory, we learn a nonlinear RGB-to-grayscale mapping that explicitly maximizes the number of inlier feature matches for images captured under different lighting and weather conditions, and use it as a pre-processing step in a conventional single-experience localization pipeline to improve its robustness to appearance change. We train this mapping by approximating the target non-differentiable localization pipeline with a deep neural network, and find that incorporating a learned low-dimensional context feature can further improve cross-appearance feature matching. Using synthetic and real-world datasets, we demonstrate substantial improvements in localization performance across day-night cycles, enabling continuous metric localization over a 30-hour period using a single mapping experience, and allowing experience-based localization to scale to long deployments with dramatically reduced data requirements.
    Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the IEEE International Conference on Robotics and Automation (ICRA'20), Paris, France, May 31-June 4, 2020
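    As a rough illustration of the kind of learned nonlinear RGB-to-grayscale mapping described above, the PyTorch sketch below maps RGB plus a low-dimensional context feature to a single grayscale channel. The layer sizes, context dimension, and training comment are assumptions made here for readability, not the authors' architecture or loss.

    import torch
    import torch.nn as nn

    class RGBToGray(nn.Module):
        """Per-pixel nonlinear RGB (+ context) -> grayscale mapping (illustrative)."""

        def __init__(self, context_dim: int = 8):
            super().__init__()
            # 1x1 convolutions apply the same small nonlinear mapping at every pixel.
            self.net = nn.Sequential(
                nn.Conv2d(3 + context_dim, 16, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=1),
                nn.Sigmoid(),  # grayscale output in [0, 1]
            )

        def forward(self, rgb: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
            # rgb: (B, 3, H, W); context: (B, context_dim), broadcast to every pixel.
            b, _, h, w = rgb.shape
            ctx = context.view(b, -1, 1, 1).expand(-1, -1, h, w)
            return self.net(torch.cat([rgb, ctx], dim=1))

    # Example forward pass; training would maximize a differentiable surrogate of
    # inlier feature matches, e.g. via a network that approximates the
    # non-differentiable localization pipeline, as the abstract describes.
    model = RGBToGray()
    gray = model(torch.rand(1, 3, 64, 64), torch.zeros(1, 8))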