
    Driving in the Rain: A Survey toward Visibility Estimation through Windshields

    Rain can significantly impair a driver’s sight and degrade driving performance in wet conditions. Evaluating driver visibility in harsh weather, such as rain, has attracted considerable research attention since the advent of autonomous vehicles and the emergence of intelligent transportation systems. In recent years, advances in computer vision and machine learning have led to a significant number of new approaches to this challenge. However, the literature is fragmented and must be reorganised and analysed for the field to progress. There is still no comprehensive survey article that summarises driver-visibility methodologies, covering both classic and recent data-driven/model-driven approaches to the windshield in rainy conditions, and that compares their generalisation performance fairly. Most ADAS and AD systems are based on object detection, so rain visibility plays a key role in the efficiency of ADAS/AD functions used in semi- or fully autonomous driving. This study fills the gap by reviewing current state-of-the-art solutions in rain visibility estimation used to reconstruct the driver’s view for object-detection-based autonomous driving. These solutions are classified as rain visibility estimation systems that work on (1) the perception components, (2) the control and other hardware components, and (3) the visualisation and other software components of the ADAS/AD function. Limitations and unsolved challenges are also highlighted for further research.

    Exploring the limits of variational passive microwave retrievals

    Summer 2017. Includes bibliographical references. Passive microwave observations from satellite platforms constitute one of the most important data records of the global observing system. Operational since the late 1970s, passive microwave data underpin climate records of precipitation, sea ice extent, water vapor, and more, and contribute significantly to numerical weather prediction via data assimilation. Detailed understanding of the observation errors in these data is key to maximizing their utility for research and operational applications alike. However, the treatment of observation errors in this data record has been lacking and somewhat divergent between the retrieval and data assimilation communities. In this study, some limits of passive microwave imager data are considered in light of a more holistic treatment of observation errors. A variational retrieval, named the CSU 1DVAR, was developed for microwave imagers and applied to the GMI and AMSR2 sensors for ocean scenes. Via an innovative method to determine forward model error, this retrieval accounts for error covariances across all channels used in the iteration. This improves validation in more complex scenes such as high-wind-speed and persistently cloudy regimes. In addition, it validates on par with a benchmark dataset without any tuning to in-situ observations. The algorithm yields full posterior error diagnostics, and its physical forward model is applicable to other sensors, pending intercalibration. This retrieval is used to explore the viability of retrieving parameters at the limits of the available information content from a typical microwave imager. Retrieval of warm rain, marginal sea ice, and falling snow is explored with the variational retrieval. Warm rain retrieval shows some promise, with greater sensitivity than operational GPM algorithms due to leveraging CloudSat data and accounting for drop size distribution variability. Marginal sea ice is also detected with greater sensitivity than a standard operational retrieval. These studies ultimately show that while a variational algorithm maximizes the effective signal-to-noise ratio of these observations, hard limitations exist due to the finite information content afforded by a typical microwave imager.
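    The cost-function machinery this abstract describes is standard variational (optimal estimation) retrieval; a minimal sketch of the idea, assuming an invented linear toy forward model, Jacobian, and covariances (this is not the CSU 1DVAR itself), might look like:

```python
import numpy as np

def one_dvar(y, x_a, forward, jacobian, S_a, S_e, n_iter=5):
    """Gauss-Newton minimisation of the 1D-Var cost function
    J(x) = (x - x_a)^T S_a^-1 (x - x_a) + (y - F(x))^T S_e^-1 (y - F(x)),
    where x_a is the prior state, S_a the prior covariance, and S_e the
    observation-plus-forward-model error covariance across channels."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    x = x_a.copy()
    for _ in range(n_iter):
        K = jacobian(x)  # forward-model Jacobian at the current state
        # Posterior error covariance: the "full posterior error
        # diagnostics" a variational retrieval yields for free.
        S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)
        # Standard Gauss-Newton step toward the cost-function minimum.
        x = x_a + S_hat @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
    return x, S_hat

# Toy scene: two channels observing a two-element state vector.
K0 = np.array([[1.0, 0.5],
               [0.2, 1.0]])       # invented Jacobian, not a real sensor
x_true = np.array([2.0, -1.0])
y = K0 @ x_true                   # noise-free synthetic observations
x_hat, S_hat = one_dvar(y, np.zeros(2),
                        forward=lambda x: K0 @ x,
                        jacobian=lambda x: K0,
                        S_a=np.eye(2) * 4.0,   # loose prior
                        S_e=np.eye(2) * 0.1)   # accurate observations
```

With a linear forward model the iteration converges in one step; the point of the sketch is that the posterior covariance `S_hat` shrinks relative to the prior only where the channels carry information, which is the finite-information-content limit the abstract highlights.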

    The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review

    Perception is a vital part of driving. Every year, loss of visibility due to snow, fog, and rain causes serious accidents worldwide. It is therefore important to understand the impact of weather conditions on perception performance while driving on highways and in urban traffic. The goal of this paper is to provide a survey of the sensing technologies used to detect the surrounding environment and obstacles during driving maneuvers in different weather conditions. Firstly, some important historical milestones are presented. Secondly, state-of-the-art automated driving applications (adaptive cruise control, pedestrian collision avoidance, etc.) are introduced, with a focus on all-weather operation. Thirdly, the sensor technologies most commonly employed by automated driving applications (radar, lidar, ultrasonic, camera, and far-infrared) are studied. Furthermore, the difference between current and expected performance is characterised using spider charts. As a result, a fusion perspective is proposed that can fill these gaps and increase the robustness of the perception system.

    Advanced Sensors and Applications Study (ASAS)

    The present EOD requirements for sensors in the space shuttle era are reported, with emphasis on those applications deemed important enough to warrant separate sections. The application areas developed are: (1) agriculture; (2) atmospheric corrections; (3) cartography; (4) coastal studies; (5) forestry; (6) geology; (7) hydrology; (8) land use; (9) oceanography; and (10) soil moisture. For each application area, the following aspects are covered: (1) specific goals and techniques; (2) individual sensor requirements, including types, bands, resolution, etc.; (3) definition of mission requirements, orbit types, coverage, etc.; and (4) discussion of anticipated problem areas and solutions. The remote sensors required for these application areas include: (1) camera systems; (2) multispectral scanners; (3) microwave scatterometers; (4) synthetic aperture radars; (5) microwave radiometers; and (6) vidicons. The emphasis in the remote sensor area is on evaluating the implications of present technology for future systems.

    Airborne Forward-Looking Interferometer for the Detection of Terminal-Area Hazards

    The Forward-Looking Interferometer (FLI) program was a multi-year cooperative research effort to investigate the use of imaging radiometers with high spectral resolution, using both modeling/simulation and field experiments, along with sophisticated data-analysis techniques originally developed for data from space-based radiometers and hyperspectral imagers. This investigation advanced the state of knowledge in the area: the FLI program developed a greatly improved understanding of the radiometric signal strength of aviation hazards in a wide range of scenarios, as well as a much better understanding of the real-world functionality requirements for hazard-detection instruments. The project conducted field experiments on three hazards (turbulence, runway conditions, and wake vortices) and analytical studies on several others, including volcanic ash, reduced-visibility conditions, and in-flight icing conditions.

    Pedestrian Detection by Computer Vision.

    This document describes work aimed at determining whether the detection, by computer vision, of pedestrians waiting at signal-controlled road crossings could be made sufficiently reliable and affordable, using currently available technology, to be suitable for widespread use in traffic control systems. The work starts by examining the need for pedestrian detection in traffic control systems and then goes on to look at the specific problems of applying a vision system to the detection task. The most important distinctive features of the pedestrian detection task addressed in this work are:
    • The operating conditions are an outdoor environment with no constraints on factors such as variation in illumination, presence of shadows, and the effects of adverse weather.
    • Pedestrians may be moving or static and are not limited to certain orientations or to movement in a single direction.
    • The number of pedestrians to be monitored is not restricted, so the vision system must cope with monitoring multiple targets concurrently.
    • The background scene is complex and so contains image features that tend to distract a vision system from the successful detection of pedestrians.
    • Pedestrian attire is unconstrained, so detection must occur even when details of pedestrian shape are hidden by items such as coats and hats.
    • The camera’s position is such that assumptions commonly used by vision systems to avoid the effects of occlusion, perspective, and viewpoint variation are not valid.
    • The implementation cost of the system, in moderate volumes, must be realistic for widespread installation.
    A review of relevant prior art in computer vision with respect to the above demands is presented. Thereafter, techniques developed by the author to overcome these difficulties are developed and evaluated over an extensive test set of image sequences representative of the range of conditions found in the real world. The work has resulted in a vision system shown to attain a useful level of performance under a wide range of environmental and transportation conditions. This was achieved, in real time, using low-cost processing and sensor components, so demonstrating the viability of developing the results of this work into a practical detector.

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Full text link
    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading, in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified but the underlying mechanisms have been unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for reconciling the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically-defined, and to detect the location and size of shapes, despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, medial temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments, but are biased in the presence of IMOs. 
The model presented here explains the bias through recurrent connectivity in MSTd and avoids the use of differential motion detectors, which, although used in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water-quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to support spatially based conservation actions. Moreover, they enable observation of environmental parameters at broader spatial and finer temporal scales than field observation alone allows. Recent VHR satellite technologies and image-processing algorithms thus present the opportunity to develop quantitative techniques with the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environmental monitoring, etc. This book collects new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances in all aspects of VHR satellite remote sensing.