
    A high speed Tri-Vision system for automotive applications

    Purpose: Cameras are an excellent way of non-invasively monitoring the interior and exterior of vehicles. In particular, high-speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem which arises when multivision camera systems are used to capture the high-speed motion common in such applications. Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high dynamic range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring). Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, rising to 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48,000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over standard copper Camera-Link® cables up to 5 metres long. The synchronisation error between the left and right stereo images is less than 100 ps, and this has been verified both electrically and optically. Synchronisation is automatically established at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range. Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements for the driver monitoring application are adequately met and, in some respects, exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection, collision avoidance and road condition monitoring.

    Partially funded by the EU FP6 through the IST-507231 SENSATION project. Peer reviewed.
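    The two reported operating points can be sanity-checked with a back-of-the-envelope pixel-throughput calculation (this arithmetic is mine, not the paper's, and merely illustrates how the ROI mode trades resolution for frame rate):

```python
# Pixel throughput for the two reported operating points of the tri-vision system.
full_res_px = 1280 * 480          # full stereovision resolution
full_rate_hz = 59.8               # reported full-resolution frame rate
roi_px = 10_000                   # "10 k pixel" region of interest
roi_rate_hz = 750.0               # reported ROI frame rate

full_throughput = full_res_px * full_rate_hz   # ~36.7 Mpix/s
roi_throughput = roi_px * roi_rate_hz          # 7.5 Mpix/s

print(f"full: {full_throughput / 1e6:.1f} Mpix/s, ROI: {roi_throughput / 1e6:.1f} Mpix/s")
```

    The ROI mode moves far fewer pixels per second than the full-resolution mode, which suggests the frame rate there is limited by row readout rather than by link bandwidth.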

    An Experimental Study on Pitch Compensation in Pedestrian-Protection Systems for Collision Avoidance and Mitigation

    This paper describes an improved stereovision system for the anticipated detection of car-to-pedestrian accidents. It improves on previous versions of the pedestrian-detection system by compensating for the camera's pitch angle, which yields a more accurate location of the ground plane and more accurate depth measurements. The system has been mounted on two different prototype cars, and several real collision-avoidance and collision-mitigation experiments have been carried out on private circuits using actors and dummies, which represents one of the main contributions of this paper. Collision avoidance is carried out by means of deceleration strategies whenever the accident is avoidable. Likewise, collision mitigation is accomplished by triggering an active hood system.
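    The effect of pitch compensation can be sketched as follows: a point triangulated by the stereo rig is rotated about the camera's lateral axis by the estimated pitch angle before its height above the ground plane is evaluated. The function and parameter names below are illustrative assumptions, not taken from the paper:

```python
import math

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def compensate_pitch(point_cam, pitch_rad):
    """Rotate a camera-frame point (x, y, z) about the lateral (x) axis
    so that the ground plane becomes horizontal again."""
    x, y, z = point_cam
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    return (x, c * y - s * z, s * y + c * z)

# A point 10 m ahead and 1.2 m below the camera, with a 2-degree pitch error:
z = triangulate_depth(disparity_px=64.0, focal_px=800.0, baseline_m=0.8)  # 10.0 m
x, y, z2 = compensate_pitch((0.0, 1.2, z), math.radians(2.0))
```

    Without the rotation, the apparent height of the point (and hence the fitted ground plane) is biased by roughly Z·sin(pitch), which is why even small pitch errors matter at pedestrian-detection ranges.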

    An evaluation of the pedestrian classification in a multi-domain multi-modality setup

    The objective of this article is to study the problem of pedestrian classification across different light-spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup (intensity, depth and flow) in both the far-infrared and visible domains. The experiments show that the feature families, namely intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
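    As a concrete illustration of one of the feature families named above, a single histogram-of-oriented-gradients cell can be computed as follows (a minimal sketch with an assumed 9-bin unsigned-orientation layout, not the descriptor configuration used in the article):

```python
import math

def hog_cell(image, n_bins=9):
    """Unsigned-orientation gradient histogram for one cell.
    `image` is a 2-D list of grey-level floats."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # L1-normalised

# A vertical edge concentrates all gradient energy in the 0-degree bin:
cell = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
h = hog_cell(cell)
```

    The same histogram routine applies unchanged to an intensity, depth or flow-magnitude channel, which is what makes the multi-modality comparison in the article possible with one feature family.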

    Fusion Based Safety Application for Pedestrian Detection with Danger Estimation

    Proceedings of: 14th International Conference on Information Fusion (FUSION 2011), Chicago, Illinois, USA, 5-8 July 2011.

    Road safety applications require the most reliable data. In recent years, data fusion has become one of the main technologies for Advanced Driver Assistance Systems (ADAS), overcoming the limitations of using the available sensors in isolation and fulfilling demanding safety requirements. In this paper a real application of data fusion for road safety, pedestrian detection, is presented. Two sets of vehicle-mounted sensors, a laser scanner and a stereovision system, are used to detect pedestrians in urban environments. Both systems are mounted on the automobile research platform IVVI 2.0 to test the algorithms in real situations. The different safety considerations involved in developing this fusion application are described. Context information, such as velocity and GPS data, is also used to provide a danger estimation for the detected pedestrians.

    This work was supported by the Spanish Government through the Cicyt projects FEDORA (grant TRA2010-20225-C03-01) and VIDAS-Driver (grant TRA2010-21371-C03-02).
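    One simple way to realise such a fusion, shown purely as an illustrative sketch (the proximity gate and the time-to-collision danger proxy below are my assumptions, not the paper's actual algorithm), is to confirm pedestrians where laser and stereo detections agree, then rank them by time-to-collision:

```python
import math

def fuse_detections(laser_pts, stereo_pts, gate_m=1.0):
    """Associate each laser detection with the nearest stereo detection
    within `gate_m` metres; return the averaged (confirmed) positions."""
    fused = []
    for lx, ly in laser_pts:
        best = min(stereo_pts, default=None,
                   key=lambda p: math.hypot(p[0] - lx, p[1] - ly))
        if best and math.hypot(best[0] - lx, best[1] - ly) <= gate_m:
            fused.append(((lx + best[0]) / 2, (ly + best[1]) / 2))
    return fused

def time_to_collision(range_m, ego_speed_mps):
    """Naive constant-speed time-to-collision, used as a danger proxy."""
    return float("inf") if ego_speed_mps <= 0 else range_m / ego_speed_mps

# One pedestrian seen by both sensors; a second laser return with no stereo match:
peds = fuse_detections([(0.2, 12.0), (5.0, 30.0)], [(0.0, 12.3)])
ttc = [time_to_collision(math.hypot(x, y), ego_speed_mps=10.0) for x, y in peds]
```

    Requiring agreement between two independent sensors suppresses single-sensor false alarms, while the ego velocity (from CAN or GPS, as in the paper's context information) converts range into urgency.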

    Intelligent imaging systems for automotive applications

    In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with night vision systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: (1) careful arrangement of the sensor array; (2) dynamic self-calibration; (3) networking and processing; and (4) fusion with other imaging sensors, at both the image level and the feature level, which provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.

    Applications of Computer Vision Technologies of Automated Crack Detection and Quantification for the Inspection of Civil Infrastructure Systems

    Many components of existing civil infrastructure systems, such as road pavement, bridges, and buildings, suffer from rapid aging, and inspecting and maintaining them consumes enormous national resources from federal and state agencies. Cracks are among the most important material and structural defects; they must be inspected not only to keep civil infrastructure at a high level of safety and serviceability, but also to provide early warning against failure. Conventional human visual inspection is still considered the primary inspection method. However, it is well established that human visual inspection is subjective and often inaccurate. In order to improve current manual visual inspection for crack detection and evaluation of civil infrastructure, this study explores the application of computer vision techniques as a non-destructive evaluation and testing (NDE&T) method for automated crack detection and quantification in different civil infrastructures. In this study, computer vision-based algorithms were developed and evaluated to deal with the different field-inspection situations that inspectors face in crack detection and quantification. The depth, i.e. the distance between the camera and the object, is a necessary extrinsic parameter that has to be measured to quantify crack size, since the other parameters, such as focal length, resolution, and camera sensor size, are intrinsic and usually known from the camera manufacturer. Thus, computer vision techniques were evaluated on crack inspection applications with both constant and variable depths. For the fixed-depth applications, computer vision techniques were applied to two field studies: (1) automated crack detection and quantification for road pavement using the Laser Road Imaging System (LRIS); and (2) automated crack detection on bridge cable surfaces using a cable inspection robot.
    For the variable-depth applications, two further field studies were conducted: (3) automated crack recognition and width measurement of concrete bridges' cracks using a high-magnification telescopic lens; and (4) automated crack quantification and depth estimation using wearable glasses with stereovision cameras. From these realistic field applications of computer vision techniques, a novel self-adaptive image-processing algorithm was developed that uses a series of morphological transformations to connect fragmented crack pixels in digital images. The crack-defragmentation algorithm was evaluated on road pavement images. The results showed that the accuracy of automated crack detection with an artificial neural network classifier was significantly improved by reducing both false positives and false negatives. Using up to six crack features (area, length, orientation, texture, intensity, and wheel-path location), crack detection accuracy was evaluated to find the optimal sets of crack features. Lab and field test results from the different inspection applications show that the proposed computer vision-based crack detection and quantification algorithms can detect and quantify cracks on different structures' surfaces and at different depths. Guidelines for applying computer vision techniques are also suggested for each crack inspection application.
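    The idea of connecting fragmented crack pixels with morphological transformations can be sketched as a binary closing, i.e. a dilation followed by an erosion; the square structuring element and its radius below are arbitrary choices for illustration, not the sequence of transformations used in the study:

```python
def dilate(img, r=1):
    """Binary dilation of a 2-D 0/1 grid with a (2r+1)x(2r+1) square element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(w, x + r + 1))):
                out[y][x] = 1
    return out

def erode(img, r=1):
    """Binary erosion with the same square element (clipped at borders)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(img[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(w, x + r + 1))):
                out[y][x] = 1
    return out

def close_cracks(img, r=1):
    """Morphological closing: bridges gaps up to roughly 2r pixels wide."""
    return erode(dilate(img, r), r)

# A crack fragmented by a 1-pixel gap becomes a single connected segment:
closed = close_cracks([[1, 1, 1, 0, 1, 1, 1]])
```

    In practice the radius controls a trade-off: larger elements bridge wider gaps between crack fragments but also risk fusing nearby non-crack texture, which is presumably why the study's algorithm is self-adaptive.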

    DSmT Decision-Making Algorithms for Finding Grasping Configurations of Robot Dexterous Hands

    In this paper, we present a decision technique for robotic dexterous hand configurations. This algorithm can be used to decide how to configure a robotic hand so that it can grasp objects in different scenarios. Receiving as input several sensor signals that provide information on the object's shape, the DSmT decision-making algorithm passes this information through several steps before deciding which hand configuration should be used for a given object and task.
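    DSmT (Dezert-Smarandache theory) generalises classical belief-function combination. As a rough flavour of how evidence from several sensors can be fused into one decision, the sketch below uses plain Dempster's rule over singleton hypotheses, which is a deliberate simplification and not the paper's DSmT combination rule; the sensor names and mass values are invented:

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments over singleton hypotheses
    using Dempster's rule (conflicting mass is renormalised away)."""
    hyps = set(m1) | set(m2)
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    conflict = 1.0 - sum(joint.values())
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully disagree")
    return {h: v / (1.0 - conflict) for h, v in joint.items()}

# Two hypothetical sensors both favouring a "power" grasp over a "precision" grasp:
shape_sensor = {"power": 0.6, "precision": 0.4}
tactile_sensor = {"power": 0.7, "precision": 0.3}
fused = dempster_combine(shape_sensor, tactile_sensor)
best = max(fused, key=fused.get)
```

    Combining the two opinions sharpens the preference for the jointly supported configuration; DSmT extends this scheme to cope with highly conflicting and paradoxical sources, which is its advantage in the grasping setting.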

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but a completely robust system for plants is still lacking. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13 mm error for plant size, leaf size and internode distance.
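    The reported leaf-detection figures follow the standard precision and recall definitions; the detection counts below are invented for illustration (only the 0.97/0.89 targets come from the abstract):

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts roughly consistent with the reported figures
# (precision ~0.89, recall 0.97); the paper's actual leaf counts are not given here.
p, r = precision_recall(tp=97, fp=12, fn=3)
```

    A recall of 0.97 with precision 0.89 means the system misses very few real leaves while occasionally reporting a spurious one, a reasonable balance for non-destructive phenotyping where missed leaves bias plant-level statistics.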