
    Improving Steering Ability of an Autopilot in a Fully Autonomous Car

    The world we live in is developing at a rapid pace, and the technology we use is developing along with it. We have come a long way from calling a car modern because it had a touch-screen infotainment system to calling it modern because it drives itself. The progress has been so rapid that it demands analysis, and this project attempts to improve a small part of that journey by focusing on the steering ability of an autonomous car. To explain what an autonomous car is and what goes on inside a car that drives itself, the thesis is divided into a part explaining the theory and a part explaining the logic and results of the experiments performed on the data. It shows how an autonomous car's CNN model keeps improving as more and more data is fed into it. It also shows that a CNN model is a generalized model that can be used not only for steering a car but also to control and predict the car's speed or braking ability.
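The pipeline the abstract describes, a CNN mapping camera frames to a steering angle, can be sketched at toy scale in plain numpy. Everything here (the single convolution layer, the kernel size, the linear head) is an illustrative assumption for exposition, not the thesis's actual model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution over a single-channel image, the basic
    building block of a steering CNN."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_steering(frame, kernel, weight, bias):
    """One conv layer + ReLU + global average pooling + a linear
    regression head mapping a camera frame to one steering angle."""
    feat = np.maximum(conv2d(frame, kernel), 0.0)  # ReLU activation
    pooled = feat.mean()                           # global average pooling
    return float(weight * pooled + bias)           # regression head

rng = np.random.default_rng(0)
frame = rng.random((66, 200))            # one grayscale dash-cam frame
kernel = rng.standard_normal((5, 5))
angle = predict_steering(frame, kernel, weight=0.1, bias=0.0)
```

The same regression head, retrained on different labels, is what makes the model "generalized": predicting speed or braking instead of steering only changes the target, not the architecture.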

    AILiveSim : An Extensible Virtual Environment for Training Autonomous Vehicles

    Virtualization technologies have become commonplace in software development as well as in engineering more generally. Virtualization offers benefits beyond simulation and testing, as a virtual environment can often be configured more liberally than the corresponding physical environment. This, in turn, introduces new possibilities for education and training, both for humans and for artificial intelligence (AI). To this end, we are developing the AILiveSim simulation platform. The platform is built on top of the Unreal Engine game development system, and it is dedicated to training and testing autonomous systems, their sensors, and their algorithms in a simulated environment. In this paper, we describe the elements that we have built on top of the engine to realize a Virtual Environment (VE) useful for the design, implementation, application, and analysis of autonomous systems. We present the architecture that we have put in place to transform our simulation platform from automotive-specific to domain-agnostic, supporting two new application domains: autonomous ships and autonomous mining machines. We describe the important specificities of each domain with regard to simulation. In addition, we report the challenges encountered when simulating those applications and the decisions taken to overcome them. Peer reviewed.

    Towards Increasing the Robustness of Predictive Steering-Control Autonomous Navigation Systems Against Dash Cam Image Angle Perturbations Due to Pothole Encounters

    Vehicle manufacturers are racing to create autonomous navigation and steering-control algorithms for their vehicles. This software is designed to handle various real-life scenarios such as obstacle avoidance and lane maneuvering. There is some ongoing research on incorporating pothole avoidance into these autonomous systems. However, there is very little research on the effect that hitting a pothole has on autonomous navigation software that uses cameras to make driving decisions. Perturbations in the camera angle when hitting a pothole can cause errors in the predicted steering angle. In this paper, we present a new model to compensate for such angle perturbations and reduce errors in steering-control prediction algorithms. We evaluate our model on perturbations of publicly available datasets and show that it can reduce the error in the steering angle estimated from perturbed images to 2.3%, making autonomous steering control robust against the dash cam image angle perturbations induced when one wheel of a car goes over a pothole. Comment: 7 pages, 6 figures.
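The simplest form of the compensation idea is to counter-rotate the frame by the estimated camera roll before the steering network sees it. The sketch below assumes the roll angle is already known and uses nearest-neighbour resampling; the paper's actual model is learned from data, so this is only a geometric illustration:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre, used here to
    model (and undo) a dash-cam roll perturbation."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: find the source pixel that lands at (y, x)
    src_x = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    src_y = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    sy = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    return img[sy, sx]

def compensate(frame, estimated_roll_deg):
    """Counter-rotate the frame by the estimated camera roll so the
    steering predictor receives an (approximately) level image."""
    return rotate_nn(frame, -estimated_roll_deg)
```

For an exact multiple of 90 degrees the counter-rotation recovers the original frame exactly; for small pothole-induced rolls the recovery is approximate because of resampling.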

    Real-Time GPS-Alternative Navigation Using Commodity Hardware

    Modern navigation systems can use the Global Positioning System (GPS) to determine position with precision in some cases bordering on millimeters. Unfortunately, GPS technology is susceptible to jamming and interception, and it is unavailable indoors or underground. There are several navigation techniques that can be used during times of GPS unavailability, but very few achieve GPS-level precision. One method of achieving high-precision navigation without GPS is to fuse data obtained from multiple sensors. This thesis explores the fusion of imaging and inertial sensors and implements it in a real-time system that mimics human navigation. In addition, programmable graphics processing unit technology is leveraged to perform stream-based image processing using a computer's video card. The resulting system can perform complex mathematical computations in a fraction of the time those same operations would take on a CPU-based platform. The result is an adaptable, portable, inexpensive, and self-contained software and hardware platform, which paves the way for advances in autonomous navigation, mobile cartography, and artificial intelligence.
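One generic way to fuse an inertial track with visual fixes is a complementary filter: trust the high-rate inertial deltas in the short term and pull toward the drift-free visual estimate in the long term. This is a common textbook scheme offered as an illustration, not the thesis's actual fusion algorithm:

```python
import numpy as np

def fuse_positions(inertial, visual, alpha=0.9):
    """Complementary filter over two 1-D position tracks: propagate the
    inertial deltas (fast but drifting), blend in the visual fixes
    (slow but drift-free). alpha is the inertial trust weight."""
    fused = np.empty_like(inertial)
    est = inertial[0]
    for i in range(len(inertial)):
        delta = inertial[i] - inertial[i - 1] if i > 0 else 0.0
        est = alpha * (est + delta) + (1.0 - alpha) * visual[i]
        fused[i] = est
    return fused

# inertial track drifts linearly; visual fixes hold the true position (0)
inertial = np.linspace(0.0, 10.0, 101)
visual = np.zeros(101)
fused = fuse_positions(inertial, visual)
```

The fused track's drift stays bounded even though the inertial input drifts without limit, which is the basic payoff of image/inertial fusion.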

    Big Data Scenarios Simulator for Deep Learning Algorithm Evaluation for Autonomous Vehicle

    One of the challenges in developing autonomous vehicles (AV) is the collection of suitable real-environment data for training and evaluating machine learning algorithms. Such environment data collection, via the various sensors mounted on an AV, is big data in nature, requiring a massive investment of time and money, and in some specific scenarios it could pose a significant danger to human lives. This necessitates a virtual scenario simulator that mimics the real environment by generating big-data images from a virtual fisheye lens reproducing the field of view and radial distortion of commercially available camera lenses of any manufacturer and model. In this paper, we propose a novel fisheye-lens distortion system that generates big-data scenario images to train and test image-based sensing functions and to evaluate scenarios according to Euro NCAP standards. A total of 10,123 RGB, depth, and segmentation images of varying road scenarios were generated by the proposed system in approximately 14 hours, compared to 20 hours for existing methods, a 42.86% improvement.
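The per-lens radial distortion the abstract mentions is commonly modeled as an even polynomial in the radius of the normalized image coordinates, r_d = r(1 + k1·r² + k2·r⁴). The coefficients k1 and k2 below are hypothetical stand-ins for the parameters such a simulator would load for a given camera make and model:

```python
import numpy as np

def distort_points(pts, k1, k2):
    """Apply a polynomial radial-distortion model to normalized image
    coordinates (N x 2 array): each point is scaled by
    1 + k1*r^2 + k2*r^4, where r is its distance from the optical axis.
    k1, k2 are per-lens coefficients (assumed values here)."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)   # squared radius per point
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

# principal point is unaffected; off-axis points are pulled inward
# for the barrel-distortion coefficients typical of a fisheye lens
pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 1.0]])
distorted = distort_points(pts, k1=-0.2, k2=0.05)
```

Rendering an undistorted frame and then remapping every pixel through this model is one straightforward way to mimic a specific physical lens in a virtual camera.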

    CiThruS2 : Open-source Photorealistic 3D Framework for Driving and Traffic Simulation in Real Time

    The automotive and transport sector is undergoing a paradigm shift from manual to highly automated driving. This transition is driven by a proliferation of advanced driver assistance systems (ADAS) that seek to provide vehicle occupants with a safe, efficient, and comfortable driving experience. However, increasing the level of automation makes exhaustive physical testing of ADAS technologies impractical. Therefore, the automotive industry is increasingly turning to virtual simulation platforms to speed up time-to-market. This paper introduces the second version of our open-source See-Through Sight (CiThruS) simulation framework, which provides a novel photorealistic virtual environment for vision-based ADAS development. Our 3D urban scene supports realistic traffic infrastructure and driving conditions with a variety of time-of-day, weather, and lighting effects. Different traffic scenarios can be generated with practically any number of autonomous vehicles and pedestrians, which can be made to comply with dedicated traffic regulations. All implemented features have been carefully optimized, and our lightweight simulator exceeds 60 frames per second at 4K (3840 × 2160) resolution when run on an NVIDIA GTX 1060 graphics card or equivalent consumer-grade hardware. Photorealistic graphics rendering and real-time simulation speed make our proposal suitable for a broad range of applications, including interactive driving simulators, visual traffic data collection, virtual prototyping, and traffic flow management. Peer reviewed.

    Multiple View Texture Mapping: A Rendering Approach Designed for Driving Simulation

    Simulation provides a safe and controlled environment ideal for human testing [49, 142, 120]. Simulation of real environments has reached new heights in terms of photo-realism; often a team of professional graphical artists would have to be hired to compete with modern commercial simulators. Meanwhile, machine vision methods are being developed that attempt to automatically produce geometrically consistent and photo-realistic 3D models of real scenes [189, 139, 115, 19, 140, 111, 132], often with a set of images of the scene as the only requirement. A road engineer wishing to simulate the environment of a real road for driving experiments could potentially use these tools. This thesis develops a driving simulator that uses machine vision methods to reconstruct a real road automatically. A computer graphics method called projective texture mapping is applied to enhance the photo-realism of the 3D models [144, 43]. This essentially creates a virtual projector in the 3D environment that automatically assigns image coordinates to a 3D model. These principles are demonstrated using custom shaders developed for an OpenGL rendering pipeline. Projective texture mapping presents a list of challenges to overcome, including reverse projection and projection onto surfaces not immediately in front of the projector [53]. A significant challenge was the removal of dynamic foreground objects: 3D reconstruction systems create 3D models based on static objects captured in images, dynamic objects are rarely reconstructed, and projective texture mapping of images that include these dynamic objects can result in visual artefacts. A workflow is developed to resolve this, resulting in videos and 3D reconstructions of streets with no moving vehicles in the scene. The final simulator using 3D reconstruction and projective texture mapping is then developed, and a motion model is introduced for the rendering camera to enable human interaction. The final system is presented, experimentally tested, and potential future works are discussed.
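The core of projective texture mapping is a matrix chain: each world-space vertex is pushed through the virtual projector's view and projection matrices, and the resulting clip-space position is remapped to texture coordinates. The numpy sketch below mirrors what such an OpenGL shader computes (it is a generic illustration, not the thesis's shader code), including the rejection of reverse projection for points behind the projector:

```python
import numpy as np

def look_at(eye, target, up):
    """Projector view matrix (right-handed, camera looks down -Z)."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def perspective(fov_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix for the projector."""
    t = 1.0 / np.tan(np.deg2rad(fov_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = t / aspect
    m[1, 1] = t
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def project_to_uv(point, view, proj):
    """Project a world-space vertex through the virtual projector and
    remap clip space [-1, 1] to texture space [0, 1]. Returns None for
    points behind the projector -- the reverse-projection case that
    must be rejected."""
    p = proj @ view @ np.append(point, 1.0)
    if p[3] <= 0.0:                  # behind the projector
        return None
    ndc = p[:3] / p[3]               # perspective divide
    return (ndc[:2] + 1.0) / 2.0     # [-1, 1] -> [0, 1] texture coords

view = look_at(np.zeros(3), np.array([0.0, 0.0, -1.0]), np.array([0.0, 1.0, 0.0]))
proj = perspective(60.0, 1.0, 0.1, 100.0)
uv = project_to_uv(np.array([0.0, 0.0, -5.0]), view, proj)  # on the optical axis
```

A point on the projector's optical axis lands at the centre of the texture, (0.5, 0.5); the second challenge the thesis mentions, projection onto surfaces not facing the projector, additionally requires a surface-normal or depth test that this sketch omits.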

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations were completed to demonstrate and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity but also solving and combining technological hurdles in motion tracking, pattern recognition, force-feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces such that it all works in real time. Thus, the contribution to the advancement of knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.