54 research outputs found
Traffic scene awareness for intelligent vehicles using ConvNets and stereo vision
In this paper, we propose an efficient approach to perform recognition and 3D localization of dynamic objects in images from a stereo camera, with the goal of gaining insight into traffic scenes in urban and road environments. We rely on a deep learning framework able to simultaneously identify a broad range of entities, such as vehicles, pedestrians, or cyclists, at a frame rate compatible with the strict requirements of onboard automotive applications. Stereo information is later introduced to enrich the knowledge about the objects with geometrical information. The results demonstrate the capabilities of the perception system in a wide variety of situations, thus providing valuable information for a higher-level understanding of the traffic situation.
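The geometrical enrichment the abstract describes typically rests on the standard stereo relation Z = f · B / d. A minimal sketch, with purely illustrative focal-length and baseline values (the paper does not give its camera parameters):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth in metres from stereo disparity via Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers, not from the paper: 800 px focal length,
# 0.5 m baseline, 40 px disparity -> 800 * 0.5 / 40 = 10 m.
z = depth_from_disparity(disparity_px=40.0, focal_px=800.0, baseline_m=0.5)
print(z)  # 10.0
```

Applying this per detected bounding box is what attaches a 3D position to each recognized object.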
Intent prediction of vulnerable road users for trusted autonomous vehicles
This study investigated how future autonomous vehicles could gain the trust of the vulnerable road users (such as pedestrians and cyclists) with whom they would be interacting in urban traffic environments. It focused on understanding the behaviour of such road users on a deeper level by predicting their future intentions based solely on vehicle-based sensors and AI techniques. The findings showed that the personal/body-language attributes of vulnerable road users, in addition to their past motion trajectories and the physical attributes of the environment, led to more accurate predictions of their intended actions.
Editorial: special issue on autonomous driving and driver assistance systems
No abstract available
Unifying terrain awareness for the visually impaired through real-time semantic segmentation
Navigational assistance aims to help visually impaired people move through the environment safely and independently. This task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of visually impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for the avoidance of short-range obstacles and fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
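The unification idea is that one per-pixel label map can serve every assistive need at once, simply by grouping semantic classes into the navigation-relevant categories the abstract names. A minimal sketch; the class names and grouping below are hypothetical, not the paper's actual label set:

```python
# Hypothetical mapping from semantic classes to assistive categories
# (illustrative only; the paper's label set may differ).
ASSISTIVE_GROUPS = {
    "traversable": {"road", "sidewalk"},
    "terrain_hazard": {"stairs", "water"},
    "dynamic_obstacle": {"pedestrian", "vehicle"},
}

def group_of(label: str) -> str:
    """Map a single semantic class label to its assistive category."""
    for group, labels in ASSISTIVE_GROUPS.items():
        if label in labels:
            return group
    return "other"

# One segmentation output covers all perception needs jointly:
labels = ["road", "stairs", "pedestrian", "sky"]
print([group_of(l) for l in labels])
# ['traversable', 'terrain_hazard', 'dynamic_obstacle', 'other']
```

Running one segmentation network and post-processing its labels this way is what replaces the separate per-task detectors whose joint latency the abstract criticizes.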
Data-fused urban mobility applications for smart cities
Though vehicles are becoming more advanced with added safety technology, we must still rely on our own instincts and senses to make decisions. This thesis presents two applications that can be used by drivers, passengers, or pedestrians to allow a wider range of visibility during commutes. The first application uses the concept of see-through technology to assist the driver with a real-time augmented view of a traffic scene that may in reality be blocked by the vehicle in front. The second application is a mobile application that gathers the user's location information from two sources: absolute location from a Global Positioning System (GPS) enabled device, and a combination of computer vision, object detection, and mono-vision depth calculation; each instance of an identified object is then placed on the mapping application. Mapping items such as stores, accidents, and traffic conditions is currently very common, but this application takes into account the locations of individual users to give a holistic view of people instead of places.
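Mono-vision depth calculation with a single camera usually relies on the pinhole model and an assumed real-world size for the detected object class: Z = f · H / h, where H is the object's known height and h its height in pixels. A minimal sketch under those assumptions (the thesis does not specify its exact method or parameters):

```python
def mono_depth_from_height(object_height_m: float,
                           bbox_height_px: float,
                           focal_px: float) -> float:
    """Approximate range to an object of assumed real-world height
    using the pinhole model: Z = f * H / h."""
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_px * object_height_m / bbox_height_px

# Illustrative numbers, not from the thesis: a pedestrian assumed
# 1.7 m tall, spanning 170 px with an 800 px focal length,
# is estimated at 800 * 1.7 / 170 = 8 m.
z = mono_depth_from_height(object_height_m=1.7,
                           bbox_height_px=170.0,
                           focal_px=800.0)
print(z)  # 8.0
```

Combining this range estimate with the detection's bearing and the device's GPS fix is what would let each identified object be placed at an absolute position on the map.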