
    Advances in Stereo Vision

    Stereopsis is a vision process whose geometrical foundation has been known for a long time, ever since Wheatstone's experiments in the 19th century. Nevertheless, its inner workings in biological organisms, as well as its emulation by computer systems, have proven elusive, and stereo vision remains a very active and challenging area of research. In this volume we have attempted to present a limited but relevant sample of the work being carried out in stereo vision, covering significant aspects from both the applied and the theoretical standpoints.

    A Sensor for Urban Driving Assistance Systems Based on Dense Stereovision

    Advanced driving assistance systems (ADAS) form a complex multidisciplinary research field aimed at improving traffic efficiency and safety. A realistic analysis of the requirements and possibilities of the traffic environment leads to the establishment of several goals for traffic assistance, to be implemented in the near future (ADASE, INVENT).

    Obstacle Detection Based on Fusion Between Stereovision and 2D Laser Scanner

    Obstacle detection is an essential task for mobile robots. This subject has been investigated for many years and many obstacle detection systems have been proposed so far. Yet designing an accurate, robust, and reliable system remains a challenging task, especially in outdoor environments. The purpose of this chapter is therefore to present new techniques and tools to design an accurate, robust and reliable obstacle detection system for outdoor environments based on a minimal number of sensors. Experiments and assessments of already developed systems show that a single sensor is not enough to meet the requirements: at least two complementary sensors are needed. In this chapter, a stereovision sensor and a 2D laser scanner are considered.
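
    The sensor-fusion idea the chapter describes can be illustrated, in broad strokes, by a conservative AND-fusion of grid cells: a toy sketch (not the chapter's actual pipeline) in which obstacle cells proposed by stereovision are confirmed only when the 2D laser scan also hits them. The grid size, resolution, and function names below are assumptions introduced for illustration.

```python
# Toy sketch (not the chapter's pipeline): confirm stereo-derived obstacle
# cells with a 2D laser scan using a conservative AND-fusion of grid cells.
import numpy as np

CELL = 0.2          # grid resolution in metres (assumed)
GRID = 100          # 100 x 100 cells, vehicle at the grid centre (assumed)

def laser_to_grid(ranges, angles):
    """Convert a 2D laser scan (range, bearing) into occupancy-cell hits."""
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    cols = np.clip((xs / CELL + GRID // 2).astype(int), 0, GRID - 1)
    rows = np.clip((ys / CELL + GRID // 2).astype(int), 0, GRID - 1)
    hits = np.zeros((GRID, GRID), dtype=bool)
    hits[rows, cols] = True
    return hits

def fuse(stereo_obstacles, laser_hits):
    """Keep only cells where stereo and laser agree (conservative fusion)."""
    return stereo_obstacles & laser_hits

# Example: a synthetic scan of a wall 5 m ahead, plus one stereo false positive.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 5.0)
laser = laser_to_grid(ranges, angles)
stereo = laser.copy()
stereo[10, 10] = True                          # spurious stereo detection
fused = fuse(stereo, laser)
print(fused.sum(), "confirmed obstacle cells")  # the false positive is removed
```

    Requiring agreement between both sensors trades some sensitivity for robustness; a real system would typically weight the sensors by their per-cell confidence rather than use a strict AND.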

    Depth Perception, Cueing, and Control

    Humans rely on a variety of visual cues to inform them of the depth or range of a particular object or feature. Some cues are provided by physiological mechanisms, others by pictorial cues that are interpreted psychologically, and still others by the relative motions of objects or features induced by observer (or vehicle) motions. These cues provide different levels of information (ordinal, relative, absolute) and saliency depending upon depth, task, and interaction with other cues. Display technologies used for head-down and head-up displays, as well as out-the-window displays, have differing capabilities for providing depth cueing information to the observer/operator. In addition to the technologies themselves, display content and its source (camera sensor versus computer rendering) provide varying degrees of cue information. Additionally, most displays create some degree of cue conflict. In this paper, visual depth cues and their interactions are discussed, as well as display technology, content, and related artifacts. Lastly, the role of depth cueing in performing closed-loop control tasks is discussed.
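
    As a small, purely illustrative aside (not from the paper): binocular disparity is one of the physiological cues that can provide absolute depth, because under the standard rectified pinhole model depth follows directly from disparity once the baseline and focal length are known. The numerical values below are assumed.

```python
# Minimal illustration (assumed values, not from the paper): binocular
# disparity as an absolute depth cue under the rectified pinhole relation
#   Z = f * B / d
# with f the focal length (pixels), B the baseline (m), d the disparity (pixels).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed example: 700-pixel focal length, 65 mm interocular baseline.
print(depth_from_disparity(700.0, 0.065, 10.0))  # ~4.55 m
print(depth_from_disparity(700.0, 0.065, 40.0))  # ~1.14 m: nearer objects yield larger disparity
```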

    Global-referenced navigation grids for off-road vehicles and environments

    The presence of automation and information technology in agricultural environments seems no longer questionable; smart spraying, variable rate fertilizing, or automatic guidance are becoming usual management tools in modern farms. Yet, such techniques are still in their nascence and offer a lively hotbed for innovation. In particular, significant research efforts are being directed toward vehicle navigation and awareness in off-road environments. However, the majority of solutions being developed are based either on occupancy grids referenced with odometry and dead-reckoning, or on GPS waypoint following, but never on both. Yet, navigation in off-road environments benefits greatly from both approaches: perception data effectively condensed in regular grids, and global references for every cell of the grid. This research proposes a framework to build globally referenced navigation grids by combining three-dimensional stereo vision with satellite-based global positioning. The construction process entails the in-field recording of perceptual information plus the geodetic coordinates of the vehicle at every image acquisition position, in addition to other basic data such as velocity, heading, or GPS quality indices. The creation of local grids occurs in real time, right after the stereo images have been captured by the vehicle in the field, but the final assembly of universal grids takes place after the acquisition phase has finished. Vehicle-fixed individual grids are then superposed onto the global grid, transferring original perception data to universal cells expressed in Local Tangent Plane coordinates. Global referencing allows the discontinuous appendage of data, enabling the completion and updating of navigation grids over time across multiple mapping sessions. This methodology was validated in a commercial vineyard, where several universal grids of the crops were generated. Vine rows were correctly reconstructed, although some difficulties appeared around the headland turns as a consequence of unreliable heading estimations. Navigation information conveyed through globally referenced regular grids turned out to be a powerful tool for upcoming practical implementations within agricultural robotics.

    The author would like to thank Juan Jose Pena Suarez and Montano Perez Teruel for their assistance in the preparation of the prototype vehicle, Veronica Saiz Rubio for her help during most of the field experiments, Ratul Banerjee for his contribution to the development of software, and Luis Gil-Orozco Esteve for granting permission to perform multiple tests in the vineyards of his winery Finca Ardal. Gratitude is also extended to the Spanish Ministry of Science and Innovation for funding this research through project AGL2009-11731.

    Rovira Más, F. (2011). Global-referenced navigation grids for off-road vehicles and environments. Robotics and Autonomous Systems, 60(2), 278-287. https://doi.org/10.1016/j.robot.2011.11.007
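
    The coordinate step at the heart of the framework described above, expressing a vehicle-fixed perception cell in Local Tangent Plane (east-north) coordinates so it can be appended to a globally referenced grid, can be sketched as follows. This is a hedged illustration, not the paper's implementation: the flat-earth geodetic conversion, the grid resolution, and all names are assumptions.

```python
# Hedged sketch (assumed, not the paper's code): place a vehicle-fixed grid
# cell into a globally referenced grid expressed in Local Tangent Plane
# coordinates, using a flat-earth approximation adequate over a single field.
import math

R_EARTH = 6378137.0          # WGS-84 equatorial radius (m)
CELL = 0.25                  # global grid resolution in metres (assumed)

def geodetic_to_enu(lat, lon, lat0, lon0):
    """Approximate east/north offsets (m) of (lat, lon) from the origin (lat0, lon0)."""
    east = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * R_EARTH
    return east, north

def vehicle_cell_to_global(x_fwd, y_left, veh_east, veh_north, heading_rad):
    """Rotate a vehicle-frame point (forward, left) by the heading (clockwise
    from north) and translate it by the vehicle's east/north position."""
    east = veh_east + x_fwd * math.sin(heading_rad) - y_left * math.cos(heading_rad)
    north = veh_north + x_fwd * math.cos(heading_rad) + y_left * math.sin(heading_rad)
    return int(east // CELL), int(north // CELL)   # global grid indices

# Example: vehicle fix a few tens of metres into the field, heading due east,
# perceived obstacle 4 m ahead of the cameras (all values assumed).
e, n = geodetic_to_enu(39.48020, -0.34000, 39.48000, -0.34030)
print(vehicle_cell_to_global(4.0, 0.0, e, n, math.radians(90.0)))
```

    Because every cell index is global, data recorded in separate sessions can be written into the same grid, which is what allows the discontinuous appendage of data mentioned in the abstract.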

    Development of a stereo imaging system for three-dimensional shape measurement of crystals

    Despite the availability of various Process Analytical Technologies (PAT) for measuring other particle properties, their inherent limitations have restricted the measurement of crystal shape. This has impacted, in turn, the development and implementation of optimisation, monitoring and control of crystal shape and size distributions within particle formulation and processing systems. In recent years, imaging systems have proved to be a very promising PAT technique for the measurement of crystal growth, but they remain essentially limited to providing two-dimensional information. The idea of using two synchronized cameras to obtain 3D crystal shape was mentioned previously (Chem Eng Sci 63(5) 1171-1184, 2008), but no quantitative results were reported. In this paper, a methodology which can directly image the full three-dimensional shape of crystals has been developed. It is based on the mathematical principle that if two-dimensional images of an object are obtained from two different angles, the full three-dimensional crystal shape can be reconstructed. A proof-of-concept study has been carried out to demonstrate the potential of using the system for the three-dimensional measurement of crystals.
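
    The mathematical principle stated above, that two views taken from different angles suffice to recover a point's 3D position, can be illustrated with a standard linear (DLT) triangulation. This is a generic sketch rather than the paper's method; the camera matrices and the point are toy values.

```python
# Illustrative sketch of the two-view principle (not the paper's system):
# recover a 3D point from its projections in two calibrated views by
# linear (DLT) triangulation.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from 3x4 projection matrices P1, P2
    and its pixel coordinates x1, x2 (each a pair (u, v))."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy calibration: identical intrinsics, second camera shifted 0.1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.02, -0.01, 0.5])                 # e.g. a crystal vertex
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                    # ~ [0.02, -0.01, 0.5]
```

    In practice the two cameras must be calibrated and the same crystal feature identified in both images; triangulating many matched edge or corner points then yields the full 3D shape.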

    Neuromorphic Event-Based Generalized Time-Based Stereovision

    3D reconstruction from multiple viewpoints is an important problem in machine vision that allows recovering tridimensional structures from multiple two-dimensional views of a given scene. Reconstructions from multiple views are conventionally achieved through a process of pixel luminance-based matching between the different views. Unlike conventional machine vision methods that solve matching ambiguities by operating only on spatial constraints and luminance, this paper introduces a fully time-based solution to stereovision using the high temporal resolution of neuromorphic asynchronous event-based cameras. These cameras output dynamic visual information in the form of “change events” that encode the time, the location and the sign of the luminance changes. A more advanced event-based camera, the Asynchronous Time-based Image Sensor (ATIS), in addition to change events, encodes absolute luminance as time differences. The stereovision problem can then be formulated solely in the time domain, as a problem of detecting coincidences between events. This work improves existing event-based stereovision techniques by adding luminance information, which increases matching reliability. It also introduces a formulation that does not require building local frames from the luminances (though it is still possible), which can be costly to implement. Finally, this work introduces a methodology for time-based stereovision in binocular and trinocular configurations, using a time-based event matching criterion that combines, for the first time, space, time, luminance, and motion.
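
    A toy sketch of the core time-domain idea follows: events from the two cameras are matched by temporal coincidence along rectified epipolar lines. This is not the paper's full criterion, which additionally exploits ATIS luminance and motion information, and the event format, thresholds, and names below are assumed.

```python
# Toy sketch of temporal-coincidence matching only (assumed parameters);
# the paper's criterion additionally combines luminance and motion cues.
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # timestamp (s)
    x: int     # column
    y: int     # row
    p: int     # polarity (+1 / -1)

def match_events(left, right, dt=0.0005, max_disp=60):
    """Pair left/right events that agree in time, polarity, and rectified row."""
    matches = []
    for el in left:
        candidates = [er for er in right
                      if abs(er.t - el.t) < dt         # temporal coincidence
                      and er.p == el.p                 # same polarity
                      and er.y == el.y                 # rectified epipolar line
                      and 0 <= el.x - er.x <= max_disp]
        if candidates:
            best = min(candidates, key=lambda er: abs(er.t - el.t))
            matches.append((el, best, el.x - best.x))  # (left, right, disparity)
    return matches

left = [Event(0.1000, 120, 40, 1)]
right = [Event(0.1002, 96, 40, 1), Event(0.1001, 200, 40, 1)]
print(match_events(left, right))   # matches the event at x=96, disparity 24
```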

    Applications of Computer Vision Technologies of Automated Crack Detection and Quantification for the Inspection of Civil Infrastructure Systems

    Many components of existing civil infrastructure systems, such as road pavement, bridges, and buildings, suffer from rapid aging, which requires enormous resources from federal and state agencies to inspect and maintain them. Cracks are among the most important material and structural defects, which must be inspected not only for good maintenance of civil infrastructure with a high level of safety and serviceability, but also for the opportunity to provide early warning against failure. Conventional human visual inspection is still the primary inspection method; however, it is well established that human visual inspection is subjective and often inaccurate. In order to improve current manual visual inspection for crack detection and evaluation of civil infrastructure, this study explores the application of computer vision techniques as a non-destructive evaluation and testing (NDE&T) method for automated crack detection and quantification for different civil infrastructures. In this study, computer vision-based algorithms were developed and evaluated to deal with the different field inspection situations that inspectors could face in crack detection and quantification. The depth, i.e. the distance between camera and object, is a necessary extrinsic parameter that has to be measured to quantify crack size, since other parameters, such as focal length, resolution, and camera sensor size, are intrinsic and usually known from camera manufacturers. Thus, computer vision techniques were evaluated in crack inspection applications with constant and variable depths. For the fixed-depth applications, computer vision techniques were applied to two field studies: 1) automated crack detection and quantification for road pavement using the Laser Road Imaging System (LRIS), and 2) automated crack detection on bridge cable surfaces using a cable inspection robot. For the variable-depth applications, two field studies were conducted: 3) automated crack recognition and width measurement of concrete bridge cracks using a high-magnification telescopic lens, and 4) automated crack quantification and depth estimation using wearable glasses with stereovision cameras. From these realistic field applications of computer vision techniques, a novel self-adaptive image-processing algorithm was developed using a series of morphological transformations to connect fragmented crack pixels in digital images. The crack-defragmentation algorithm was evaluated with road pavement images. The results showed that the accuracy of automated crack detection, using an artificial neural network classifier, was significantly improved by reducing both false positives and false negatives. Using up to six crack features, including area, length, orientation, texture, intensity, and wheel-path location, crack detection accuracy was evaluated to find the optimal sets of crack features. Lab and field test results of the different inspection applications show that the proposed computer vision-based crack detection and quantification algorithms can detect and quantify cracks from different structures' surfaces and depths. Guidelines for applying computer vision techniques are also suggested for each crack inspection application.
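
    Two of the steps described above can be sketched generically: connecting fragmented crack pixels with morphological transformations, and converting a crack width in pixels to physical units once the depth and focal length are known. This is an assumed illustration, not the study's self-adaptive algorithm; the line-kernel closings and all numerical values are stand-ins.

```python
# Hedged sketch (not the study's self-adaptive algorithm):
# (1) connect fragmented crack pixels with closings along several line
#     orientations; (2) convert a pixel width to millimetres via the
#     pinhole relation width_mm = width_px * depth_mm / focal_px.
import cv2
import numpy as np

def defragment_cracks(binary_map, length=7):
    """Bridge small gaps by closing with thin line kernels in several
    orientations and keeping the pixelwise maximum (assumed stand-in for
    the adaptive series of morphological transformations)."""
    kernels = [
        np.ones((1, length), np.uint8),             # horizontal line
        np.ones((length, 1), np.uint8),             # vertical line
        np.eye(length, dtype=np.uint8),             # 45-degree diagonal
        np.fliplr(np.eye(length, dtype=np.uint8)),  # 135-degree diagonal
    ]
    closings = [cv2.morphologyEx(binary_map, cv2.MORPH_CLOSE, k) for k in kernels]
    return np.maximum.reduce(closings)

def crack_width_mm(width_px, depth_mm, focal_px):
    """Physical crack width from pixel width, camera-to-surface depth, and focal length."""
    return width_px * depth_mm / focal_px

# Example: a 1-pixel-wide crack broken into two collinear fragments.
img = np.zeros((50, 50), dtype=np.uint8)
img[25, 5:20] = 255
img[25, 24:45] = 255          # 4-pixel gap between the fragments
closed = defragment_cracks(img)
print(int(np.count_nonzero(closed[25, 20:24])))      # 4: the gap is now bridged
print(round(crack_width_mm(3, 1500.0, 2500.0), 2))   # e.g. 1.8 mm at 1.5 m depth (assumed values)
```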