    Object Detection from a Vehicle Using Deep Learning Network and Future Integration with Multi-Sensor Fusion Algorithm

    Accuracy in detecting a moving object is critical to autonomous driving and advanced driver assistance systems (ADAS). By including object classification from multiple sensor detections, the model of the object or environment can be identified more accurately. The critical parameters for improving accuracy are the size and the speed of the moving object. All sensor data are used to define a composite object representation so that class information is carried in the core object's description. This composite data can then be used by a deep learning network for complete perception fusion to solve the detection and tracking of moving objects problem. Camera image data from subsequent frames along the time axis, in conjunction with the speed and size of the object, will further contribute to developing better recognition algorithms. In this paper, we present preliminary results using only camera images for detecting various objects with a deep learning network, as a first step toward multi-sensor fusion algorithm development. Simulation experiments based on camera images show encouraging results: the proposed deep-learning-based detection algorithm was able to detect various objects with a certain degree of confidence. A laboratory experimental setup is being commissioned in which three different types of sensors, a digital camera with 8-megapixel resolution, a LIDAR with a 40 m range, and ultrasonic distance transducers, will be used for multi-sensor fusion to identify objects in real time.
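    The detection stage described here can be illustrated with an off-the-shelf pretrained detector. Below is a minimal sketch assuming torchvision's Faster R-CNN as a stand-in (the paper does not name its network) and a hypothetical camera frame "frame.jpg".

    ```python
    # Minimal single-frame detection sketch; the model choice is an assumption.
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn,
        FasterRCNN_ResNet50_FPN_Weights,
    )

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    preprocess = weights.transforms()

    img = read_image("frame.jpg")  # hypothetical camera frame
    with torch.no_grad():
        pred = model([preprocess(img)])[0]

    # Keep detections above a confidence threshold, mirroring the
    # "certain degree of confidence" reported in the abstract.
    categories = weights.meta["categories"]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score > 0.5:
            print(categories[int(label)], float(score), box.tolist())
    ```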

    A method of evaluating ADAS camera performance in rain : case studies with hydrophilic and hydrophobic lenses

    Advanced driver assistance systems (ADAS) are increasingly equipped in modern vehicles to provide safety warnings and autonomous functions. Cameras are a key ADAS component, collecting critical environmental information as input. Like human vision, cameras suffer performance degradation in adverse weather conditions. Precipitation, such as raindrops on camera lenses, causes blurring and obstruction of the camera's vision, which subsequently affects ADAS performance. The relationships between camera image quality, object detection accuracy, and the surface wettability of camera lenses are investigated for different driving-in-rain conditions. The goal is to link camera performance with ADAS performance from a practical perspective. Moreover, the use of hydrophilic and hydrophobic camera lenses is explored to provide insights into material selection when designing camera lenses for ADAS. The rain characteristics perceived by a moving vehicle at different driving speeds are simulated using a patent-pending rain simulation system implemented in a wind tunnel. It is found that droplet characteristics, such as size, shape, and motion, can affect camera image quality and, subsequently, object detection accuracy. The results suggest that hydrophobic camera lenses perform better than hydrophilic lenses in most cases, while object detection capability is restored more effectively on the hydrophilic lens once a water film layer forms. Paper presented at the international congress held jointly by the Canadian Society for Mechanical Engineering (CSME) and the Computational Fluid Dynamics Society of Canada (CFD Canada) at the Université de Sherbrooke (Québec), May 28 to 31, 2023.
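    The reported link between lens wetting, image quality, and detection accuracy can be approximated with a crude proxy: Gaussian blur standing in for a wetted lens, scored by the variance of the Laplacian. Both choices are illustrative assumptions, not the paper's measurement method.

    ```python
    # Blur-vs-sharpness sketch; kernel sizes and the metric are assumptions.
    import cv2

    def sharpness(gray):
        # Variance of the Laplacian: a common focus/blur proxy.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    for k in (1, 5, 11, 21):  # larger kernel ~ heavier lens wetting
        blurred = cv2.GaussianBlur(frame, (k, k), 0)
        print(f"kernel={k:2d}  sharpness={sharpness(blurred):.1f}")
    ```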

    Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking

    The accurate detection and classification of moving objects is a critical aspect of Advanced Driver Assistance Systems (ADAS). We believe that by including object classification from multiple sensor detections as a key component of the object's representation and of the perception process, we can improve the perceived model of the environment. First, we define a composite object representation to include class information in the core object's description. Second, we propose a complete perception fusion architecture based on the Evidential framework to solve the Detection and Tracking of Moving Objects (DATMO) problem by integrating the composite representation and uncertainty management. Finally, we integrate our fusion approach into a real-time application inside a vehicle demonstrator from the interactIVe IP European project, which includes three main sensors: radar, lidar, and camera. We test our fusion approach using real data from different driving scenarios, focusing on four objects of interest: pedestrian, bike, car, and truck.
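    The Evidential fusion step can be illustrated with Dempster's rule of combination over the four classes of interest. The sketch below simplifies the frame of discernment to singleton classes plus a single ignorance mass ("unknown"), and the camera and lidar masses are made-up examples; the paper's full architecture also manages compound hypotheses.

    ```python
    # Simplified Dempster-Shafer combination; the masses are illustrative.
    from itertools import product

    CLASSES = ("pedestrian", "bike", "car", "truck", "unknown")

    def dempster_combine(m1, m2):
        """Combine two mass functions over singletons plus ignorance."""
        combined = {c: 0.0 for c in CLASSES}
        conflict = 0.0
        for a, b in product(CLASSES, repeat=2):
            mass = m1[a] * m2[b]
            if a == b:
                combined[a] += mass   # agreement
            elif a == "unknown":
                combined[b] += mass   # ignorance defers to the specific class
            elif b == "unknown":
                combined[a] += mass
            else:
                conflict += mass      # incompatible singletons
        norm = 1.0 - conflict         # Dempster normalization
        return {c: m / norm for c, m in combined.items()}

    camera = {"pedestrian": 0.6, "bike": 0.1, "car": 0.1, "truck": 0.0, "unknown": 0.2}
    lidar = {"pedestrian": 0.5, "bike": 0.0, "car": 0.2, "truck": 0.1, "unknown": 0.2}
    print(dempster_combine(camera, lidar))
    ```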

    Multi-camera analysis of soccer sequences

    The automatic detection of meaningful phases in a soccer game depends on the accurate localization of the players and the ball at each moment. However, the automatic analysis of soccer sequences is a challenging task due to the presence of multiple fast-moving objects. For this purpose, we present a multi-camera analysis system that yields the positions of the ball and the players on a common ground plane. Detection in each camera is based on a code-book algorithm, and different features are used to classify the detected blobs. The detection results of each camera are transformed via homography to a virtual top view of the playing field. Within this virtual top view we merge the trajectory information of the different cameras, allowing us to refine the found positions. In this paper, we evaluate the system on a public SOCCER dataset and end with a discussion of possible improvements of the dataset.
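    The homography step, mapping per-camera detections onto the common ground plane, might look like the following OpenCV sketch. The point correspondences and the detected foot point are placeholders; in the described system they would come from calibration against the field markings.

    ```python
    # Ground-plane projection sketch; all coordinates are placeholders.
    import cv2
    import numpy as np

    # Four image points and their known pitch positions (metres), assumed.
    img_pts = np.float32([[120, 600], [1800, 590], [300, 200], [1650, 210]])
    pitch_pts = np.float32([[0, 0], [68, 0], [10, 50], [58, 50]])

    H, _ = cv2.findHomography(img_pts, pitch_pts)

    # Project a detected blob's foot point into the virtual top view.
    foot = np.float32([[[960, 540]]])  # hypothetical detection
    top_view = cv2.perspectiveTransform(foot, H)
    print(top_view.ravel())            # common ground-plane coordinates
    ```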