57 research outputs found
Scene Detection Classification and Tracking for Self-Driven Vehicle
Self-driving vehicles (SDVs) could resolve a number of traffic-related issues, including crashes, congestion, and pollution. Several challenges still need to be overcome, particularly precise environmental perception and object detection and classification, before autonomous vehicles (AVs) can navigate safely in crowded urban settings. This article offers a comprehensive examination of deep learning techniques applied to scene perception and object detection in self-driving cars. The theoretical foundations of self-driving cars are examined in depth using a deep learning methodology; current applications of deep learning in this area are explored and their efficacy critically evaluated. The paper begins with an introduction to the ideas of computer vision, deep learning, and self-driving automobiles, and gives a brief review of artificial general intelligence, highlighting its applicability to the subject at hand. It then categorises current, robust deep learning libraries and considers their critical contribution to the development of deep learning techniques, and describes the datasets used to label scenes for self-driving vehicles. A sizeable part of the work discusses strategies that explicitly handle the image-perception issues faced in real-time driving scenarios, including methods for object detection, recognition, and scene comprehension. Finally, self-driving automobile implementations and tests are critically assessed.
Realization of Performance Advancements for WPI's UGV - Prometheus
The objective of this project is to design and implement performance improvements for WPI's intelligent ground vehicle, Prometheus, leading to a more competitive entry at the Intelligent Ground Vehicle Competition. Performance enhancements implemented by the project team include a new upper chassis design, a reconfigurable camera mount, extended Kalman filter-based localization with a GPS receiver and a compass module, a lane detection algorithm, and a modular software framework. As a result, Prometheus has improved autonomy, accessibility, robustness, reliability, and usability.
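The extended Kalman filter-based localization mentioned above can be sketched as follows. This is a minimal planar EKF fusing a unicycle motion model with a GPS position fix; all noise values and measurements are invented for illustration and are not Prometheus's actual parameters.

```python
import numpy as np

# Minimal planar EKF sketch: state [x, y, heading], predicted with a
# unicycle motion model and corrected by a GPS (x, y) measurement.
# Noise covariances Q and R below are illustrative assumptions.

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate state and covariance through the motion model."""
    theta = x[2]
    x_new = x + np.array([v * np.cos(theta) * dt,
                          v * np.sin(theta) * dt,
                          omega * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update with linear measurement model z = Hx."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# One predict/update cycle: drive straight at 1 m/s, then fuse a GPS fix.
x = np.zeros(3)
P = np.eye(3) * 0.1
Q = np.eye(3) * 0.01
x, P = ekf_predict(x, P, v=1.0, omega=0.0, dt=1.0, Q=Q)
H_gps = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([1.1, 0.05]), H=H_gps, R=np.eye(2) * 0.5)
```

A compass heading measurement would be fused the same way with H = [[0, 0, 1]]; GPS alone leaves heading only weakly observable, which is why both sensors are combined.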
Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle
Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing life comfort, and providing cost savings. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS, and cameras. Camera sensors in particular have received great attention because they are cheap, easy to employ, and provide rich data. Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex, the presence of objects such as trees, bicycles, and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. The perception process therefore needs, by nature, to be able to deal with uncertainty in the knowledge of the world around the car.
While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform safe displacement based on the decision-making process of autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring prior knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel machine-learning method is proposed to extract semantic context from a pair of stereo images, which is merged into an evidential grid that models the uncertainties of an unknown urban environment, applying Dempster-Shafer theory. For decision-making in path planning, the virtual-tentacle approach is applied to generate possible paths starting from the ego-referenced car, and based on it two new strategies are proposed: first, a new strategy to select the correct path, to better avoid obstacles and to follow the local task in the context of hybrid navigation; and second, a new closed-loop control based on visual odometry and the virtual tentacle, modeled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning, and control modules is implemented and experimentally validated in real situations using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate in Solid Mechanics and Mechanical Design (Doutor em Engenharia Mecânica).
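The evidential-grid fusion described above rests on Dempster's rule of combination. A minimal sketch for a single grid cell follows, with masses over {F} (free), {O} (occupied), and the unknown set Omega = {F, O}; the numeric masses and the two-source setup are illustrative, not taken from the thesis.

```python
# Dempster-Shafer combination for one evidential-grid cell.
# Frame of discernment: F = free, O = occupied, FO = {F, O} (unknown).

def dempster_combine(m1, m2):
    """Combine two mass functions m = {'F': .., 'O': .., 'FO': ..}."""
    # Conflict: one source says free while the other says occupied.
    conflict = m1['F'] * m2['O'] + m1['O'] * m2['F']
    k = 1.0 - conflict  # normalisation factor (assumes k > 0)
    m = {
        # Intersections yielding {F}: F&F, F&FO, FO&F.
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / k,
        # Intersections yielding {O}: O&O, O&FO, FO&O.
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / k,
    }
    m['FO'] = 1.0 - m['F'] - m['O']  # remaining mass stays on the unknown set
    return m

# Two hypothetical evidence sources that weakly agree the cell is occupied.
semantic = {'F': 0.1, 'O': 0.6, 'FO': 0.3}
stereo = {'F': 0.2, 'O': 0.5, 'FO': 0.3}
fused = dempster_combine(semantic, stereo)
```

Note how agreement between the two sources concentrates mass on 'O' beyond what either source asserted alone, while the explicit 'FO' mass keeps genuine ignorance distinct from a 50/50 probability.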
Provably Safe Motion Planning for Autonomous Vehicles through Real-Time Verification
This thesis introduces fail-safe motion planning as the first approach to guarantee the legal safety of autonomous vehicles in arbitrary traffic situations. The proposed safety layer verifies whether intended trajectories comply with legal safety and provides fail-safe trajectories when intended trajectories result in safety-critical situations. The presented results indicate that the use of fail-safe motion planning can drastically reduce the number of traffic accidents.
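The safety-layer idea above can be illustrated with a toy check: before committing to an intended trajectory, every sampled state is verified against a worst-case set of positions an obstacle could reach; if any state is unsafe, a precomputed fail-safe (braking) trajectory is executed instead. The circular reachable set and all numbers are simplifying assumptions, not the thesis's set-based reachability analysis.

```python
# Hedged sketch of fail-safe motion planning: verify, then fall back.

def is_safe(trajectory, obstacle_xy, obstacle_speed, dt, margin):
    """Reject the trajectory if the obstacle could reach any of its points in time."""
    ox, oy = obstacle_xy
    for k, (x, y) in enumerate(trajectory):
        # Worst-case radius the obstacle can cover by time step k, plus a margin.
        reach = obstacle_speed * k * dt + margin
        if ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 <= reach:
            return False
    return True

def plan(intended, fail_safe, obstacle_xy, obstacle_speed, dt=0.1, margin=1.0):
    """Return the intended trajectory only if it verifies as safe."""
    if is_safe(intended, obstacle_xy, obstacle_speed, dt, margin):
        return intended
    return fail_safe

intended = [(k * 1.0, 0.0) for k in range(20)]           # drive straight ahead
fail_safe = [(min(k, 5) * 0.5, 0.0) for k in range(20)]  # brake to a stop
chosen = plan(intended, fail_safe, obstacle_xy=(10.0, 0.0), obstacle_speed=2.0)
```

In this example the intended trajectory passes through the obstacle's reachable set, so the planner falls back to the braking trajectory, which remains outside it for the whole horizon.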
Learning Birds-Eye View Representations for Autonomous Driving
Over the past few years, progress towards the ambitious goal of widespread fully-autonomous vehicles on our roads has accelerated dramatically. This progress has been spurred largely by the success of highly accurate LiDAR sensors, as well as the use of detailed high-resolution maps, which together allow a vehicle to navigate its surroundings effectively. Often, however, one or both of these resources may be unavailable, whether due to cost, sensor failure, or the need to operate in an unmapped environment. The aim of this thesis is therefore to demonstrate that it is possible to build detailed three-dimensional representations of traffic scenes using only 2D monocular camera images as input. Such an approach faces many challenges: most notably that 2D images do not provide explicit 3D structure. We overcome this limitation by applying a combination of deep learning and geometry to transform image-based features into an orthographic birds-eye view representation of the scene, allowing algorithms to reason in a metric, 3D space. This approach is applied to solving two challenging perception tasks central to autonomous driving.
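The geometric half of the image-to-birds-eye-view mapping can be sketched by back-projecting a pixel through an assumed pinhole camera onto a flat ground plane. The intrinsics, camera height, and axis convention (y down, z forward) below are made-up illustrative values, not the calibration of the rigs used in the thesis.

```python
import numpy as np

# Back-project a pixel onto the ground plane (inverse perspective mapping).
# Camera convention assumed here: x right, y down, z forward; the ground
# plane lies cam_height metres below the camera centre.

def pixel_to_ground(u, v, K, cam_height):
    """Intersect the viewing ray of pixel (u, v) with the ground plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera coordinates
    if ray[1] <= 0:                                  # ray must point downward,
        return None                                  # else the pixel is at or above the horizon
    t = cam_height / ray[1]                          # scale the ray to hit the ground
    p = t * ray
    return p[0], p[2]  # (lateral offset, forward distance) in metres on the ground

# Illustrative intrinsics: 700 px focal length, principal point (640, 360).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
# A pixel 100 rows below the principal point maps to a finite forward distance.
result = pixel_to_ground(640.0, 460.0, K, cam_height=1.5)
```

This purely geometric mapping is what lets image features be resampled into a metric birds-eye grid; the learned part of the approach then deals with objects that violate the flat-ground assumption.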
The first part of this thesis addresses the problem of monocular 3D object detection, which involves determining the size and location of all objects in the scene. Our solution was based on a novel convolutional network architecture that processed features in both the image and birds-eye view perspective. Results on the KITTI dataset showed that this network outperformed existing works at the time, and although more recent works have improved on these results, we conducted extensive analysis to find that our solution performed well in many difficult edge-case scenarios such as objects close to or distant from the camera.
In the second part of the thesis, we consider the related problem of semantic map prediction. This consists of estimating a birds-eye view map of the world visible from a given camera, encoding both static elements of the scene such as pavement and road layout, and dynamic objects such as vehicles and pedestrians. This was accomplished using a second network that built on the experience from the previous work and achieved convincing performance on two real-world driving datasets. By formulating the maps as an occupancy grid map (a widely used representation from robotics), we were able to demonstrate how predictions could be accumulated across multiple frames, and that doing so further improved the robustness of the maps produced by our system. Funded by Toyota Motors Europe.
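Accumulating per-frame occupancy predictions is conventionally done by summing log-odds cell by cell, the standard occupancy-grid update. The sketch below shows that mechanism on synthetic probabilities; it is an illustration of the general technique, not the thesis's exact fusion scheme.

```python
import numpy as np

# Fuse per-frame occupancy-probability maps by summing log-odds per cell,
# treating frames as independent evidence (the classic occupancy-grid update).

def accumulate(prob_maps, eps=1e-6):
    """Fuse a list of H x W occupancy-probability maps into one map."""
    log_odds = np.zeros_like(prob_maps[0])
    for p in prob_maps:
        p = np.clip(p, eps, 1.0 - eps)          # keep log-odds finite
        log_odds += np.log(p / (1.0 - p))       # independent-evidence update
    return 1.0 / (1.0 + np.exp(-log_odds))      # back to probability

# Three noisy single-cell frames that mostly agree the cell is occupied.
frames = [np.array([[0.7]]), np.array([[0.6]]), np.array([[0.8]])]
fused = accumulate(frames)
```

Agreement across frames pushes the fused probability well above any single frame's estimate, which is the robustness gain the abstract refers to.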
Mind the Gap: Developments in Autonomous Driving Research and the Sustainability Challenge
Scientific knowledge on autonomous-driving technology is expanding at a faster-than-ever pace. As a result, the likelihood of incurring information overload is particularly notable for researchers, who can struggle to overcome the gap between information processing requirements and information processing capacity. We address this issue by adopting a multi-granulation approach to latent knowledge discovery and synthesis in large-scale research domains. The proposed methodology combines citation-based community detection methods and topic modeling techniques to give a concise but comprehensive overview of how the autonomous vehicle (AV) research field is conceptually structured. Thirteen core thematic areas are extracted and presented by mining the large data-rich environments resulting from 50 years of AV research. The analysis demonstrates that this research field is strongly oriented towards examining the technological developments needed to enable the widespread rollout of AVs, whereas it largely overlooks the wide-ranging sustainability implications of this sociotechnical transition. On account of these findings, we call for a broader engagement of AV researchers with the sustainability concept and we invite them to increase their commitment to conducting systematic investigations into the sustainability of AV deployment. Sustainability research is urgently required to produce an evidence-based understanding of what new sociotechnical arrangements are needed to ensure that the systemic technological change introduced by AV-based transport systems can fulfill societal functions while meeting the urgent need for more sustainable transport solutions.
Implementing and Investigating Refractoriness in LGMD Neural Networks
Collision can be threatening for animals, including human beings; reliable and accurate collision perception is therefore vital in many respects. Taking inspiration from nature, computational models of the lobula giant movement detectors (LGMDs) identified in the flying locust's visual pathways have demonstrated a positive impact on this problem. However, collision-perception methods based on visual cues are still challenged by several factors in the physical world, including ultra-fast linear approach velocities and noisy signals. Current visual-cue-based LGMD neural networks can be ineffective or generate false positives, especially when objects approach at high velocity and when the video signal is polluted by noise. How an ultra-fast object approaching on a collision course can be detected therefore remains to be improved. Neural refractoriness, also known as the refractory period (RP), is a common mechanism in animals' nervous systems that has been studied for decades and is considered to play a significant role in stabilising a neuron, yet it has not been investigated in the aforementioned LGMD neural networks for accurate and reliable collision perception. In this thesis, a novel method that phenomenologically simulates neural refractoriness is proposed, and its functionality and efficacy are investigated when combined with the classic LGMD1 and LGMD2 neural networks for collision perception. Systematic experimental results demonstrate that mimicking refractoriness not only enhances the LGMD1 model's reliability and stability when facing ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or salt-and-pepper noise. Evidence of the LGMD2 neural network's reliability and its capability to adapt to the cluttered physical world is also provided.
This research shows that modelling refractoriness can be effective and beneficial in collision-perception neural networks, and is promising for addressing the aforementioned challenges in collision perception.
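The phenomenological idea of a refractory period can be sketched on a simple thresholded unit: after the unit fires, its output is suppressed for a fixed number of time steps, so a sustained or noisy input cannot trigger it on every step. This toy sketch shows the mechanism only, not the thesis's LGMD1/LGMD2 model equations.

```python
# Refractory period on a thresholded unit: after each spike, the unit is
# silent for `rp` time steps, stabilising its response to sustained input.

def spikes_with_refractoriness(inputs, threshold, rp):
    """Return a 0/1 spike train for a scalar input sequence."""
    out, silent_until = [], -1
    for t, x in enumerate(inputs):
        if t > silent_until and x >= threshold:
            out.append(1)
            silent_until = t + rp   # enter the refractory period
        else:
            out.append(0)           # sub-threshold input or refractory
    return out

# A sustained supra-threshold input now fires once per (rp + 1) steps
# instead of on every step.
train = spikes_with_refractoriness([1.0] * 6, threshold=0.5, rp=2)
```

With rp = 0 the unit fires on every supra-threshold step; increasing rp caps the firing rate, which is the stabilising effect the thesis exploits against noise-driven false positives.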
Toward Human-Like Automated Driving: Learning Spacing Profiles From Human Driving Data
For automated driving vehicles to be accepted by their users and to integrate safely with traffic involving human drivers, they need to act and behave like human drivers. This involves not only understanding how the human driver or occupant of the automated vehicle expects the vehicle to operate, but also how other road users perceive the automated vehicle's intentions. This research aimed at learning how drivers space themselves while driving around other vehicles. It is shown that an optimized lane-change maneuver creates a solution much different from what a human would do; there is therefore a need to learn complex driving preferences by studying human drivers.
This research fills the gap in learning human driving styles by providing an example of a learned behavior (vehicle spacing) and the framework needed for encapsulating the learned data. A complete framework, from problem formulation to data gathering and learning from human driving data, was developed as part of this research. On-road data were gathered while a human driver drove a vehicle; the driver was asked to make lane changes around stationary vehicles in his path under various road-curvature conditions and speeds. The gathered data, together with Learning from Demonstration techniques, were used to formulate the spacing profile as a lane-change maneuver. A concise feature set that strongly represents a driver's spacing profile was identified from the captured data, and a model was developed. The learned model represented the driver's spacing profile around stationary vehicles within acceptable statistical tolerance. This work provides a methodology applicable to many other scenarios from which human-like driving styles and related parameters can be learned and applied to automated vehicles.
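One simple way to encapsulate a spacing profile learned from demonstrations is to represent lateral offset as a function of longitudinal distance to the obstacle and fit a low-order polynomial by least squares. The feature choice, polynomial form, and synthetic data below are illustrative assumptions, not the study's actual feature set or model.

```python
import numpy as np

# Fit a spacing profile d(s): lateral offset d versus longitudinal
# distance s to a stationary vehicle, from demonstration samples.

def fit_spacing_profile(s, d, degree=6):
    """Least-squares polynomial fit of lateral offset d as a function of s."""
    return np.polyfit(s, d, degree)

def lateral_offset(coeffs, s):
    """Evaluate the learned profile at longitudinal distance s."""
    return np.polyval(coeffs, s)

# Synthetic demonstration: the driver eases out to about 1.2 m of lateral
# clearance abreast of the obstacle (s = 0) and merges back afterwards.
s = np.linspace(-15.0, 15.0, 61)
d = 1.2 * np.exp(-(s / 10.0) ** 2)   # bell-shaped avoidance path
coeffs = fit_spacing_profile(s, d)
peak = lateral_offset(coeffs, 0.0)   # learned clearance next to the obstacle
```

Once fitted, the coefficient vector is a compact, queryable encapsulation of the demonstrated behavior, which is the role the learned model plays in the framework described above.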