1,154 research outputs found

    Risk analysis of autonomous vehicle and its safety impact on mixed traffic stream

    In 2016, more than 35,000 people died in traffic crashes, and human error was the cause of 94% of these deaths. Researchers and automobile companies are testing autonomous vehicles in mixed traffic streams to eliminate human error by removing the human driver from behind the steering wheel. However, recent crashes of autonomous vehicles during testing indicate the necessity for a more thorough risk analysis. The objectives of this study were (1) to perform a risk analysis of autonomous vehicles and (2) to evaluate the safety impact of these vehicles in a mixed traffic stream. The overall research was divided into two phases: (1) risk analysis and (2) simulation of autonomous vehicles. Risk analysis of autonomous vehicles was conducted using the fault tree method. Based on failure probabilities of system components, two fault tree models were developed and combined to predict overall system reliability. It was found that an autonomous vehicle system could fail 158 times per one million miles of travel due to either malfunction in vehicular components or disruption from infrastructure components. The second phase of this research was the simulation of an autonomous vehicle, in which the change in crash frequency after autonomous vehicle deployment in a mixed traffic stream was assessed. It was found that average travel time could be reduced by about 50%, and 74% of conflicts, i.e., traffic crashes, could be avoided by replacing 90% of the human drivers with autonomous vehicles.
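    The way two fault tree models combine into an overall system failure probability can be sketched with a simple OR-gate over independent subsystems: the system fails if either subsystem fails. The per-subsystem rates below are illustrative stand-ins chosen to sum to the abstract's 158 failures per million miles, not the values from the study.

```python
def or_gate(probabilities):
    """Probability that at least one of several independent failure events occurs."""
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p  # probability that no subsystem fails
    return 1.0 - p_none

# Illustrative per-mile failure probabilities for the two fault trees.
vehicle_fault = 100e-6        # malfunction in vehicular components (assumed)
infrastructure_fault = 58e-6  # disruption from infrastructure components (assumed)

system_failure = or_gate([vehicle_fault, infrastructure_fault])
print(f"{system_failure * 1e6:.1f} failures per million miles")
```

    With these assumed inputs the OR-gate reproduces the reported overall rate, because the cross term (both failing on the same mile) is negligibly small.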

    Semantic Mapping of Road Scenes

    The problem of understanding road scenes has been at the forefront of the computer vision community for the last few years. Such understanding enables autonomous systems to navigate and interpret the surroundings in which they operate. It involves reconstructing the scene and estimating the objects present in it, such as ‘vehicles’, ‘road’, ‘pavements’ and ‘buildings’. This thesis focuses on these aspects and proposes solutions to address them. First, we propose a solution to generate a dense semantic map from multiple street-level images. This map can be imagined as a bird’s-eye view of the region with associated semantic labels for tens of kilometres of street-level data. We generate the overhead semantic view from street-level images. This is in contrast to existing approaches that use satellite/overhead imagery for classification of urban regions, and it allows us to produce a detailed semantic map for a large-scale urban area. Then we describe a method to perform large-scale dense 3D reconstruction of road scenes with associated semantic labels. Our method fuses depth-maps, generated from stereo pairs across time, into a global 3D volume in an online fashion in order to accommodate arbitrarily long image sequences. The object-class labels estimated from the street-level stereo image sequence are used to annotate the reconstructed volume. Then we exploit the scene structure in object-class labelling by performing inference over a meshed representation of the scene. Labelling over the mesh solves two issues. First, images often contain redundant information, with multiple images describing the same scene; solving these images separately is slow, and our method is approximately an order of magnitude faster in the inference stage than normal inference in the image domain. Second, multiple images, even though they describe the same scene, often result in inconsistent labelling.
By solving over a single mesh, we remove this inconsistency of labelling across images. Our mesh-based labelling also takes into account the object layout in the scene, which is often ambiguous in the image domain, thereby increasing the accuracy of object labelling. Finally, we perform labelling and structure computation through a hierarchical robust PN Markov Random Field defined on voxels and super-voxels given by an octree. This allows us to infer the 3D structure and the object-class labels in a principled manner, through bounded approximate minimisation of a well-defined and well-studied energy functional. In this thesis, we also introduce two object-labelled datasets created from real-world data. The 15-kilometre Yotta Labelled dataset consists of 8,000 images per camera view of the roadways of the United Kingdom, with a subset of them annotated with object-class labels; the second dataset comprises ground-truth object labels for the publicly available KITTI dataset. Both datasets are publicly available and, we hope, will be helpful to the vision research community.
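    The online fusion of depth-maps into a global volume can be sketched as a running weighted average per voxel, which keeps memory constant no matter how long the image sequence is. This is a minimal illustration of the idea, far simpler than the thesis's actual volumetric fusion; the grid size and weighting are assumptions.

```python
import numpy as np

class FusionVolume:
    """Toy voxel grid that fuses depth observations via running weighted averages."""

    def __init__(self, shape):
        self.values = np.zeros(shape)   # fused per-voxel values
        self.weights = np.zeros(shape)  # accumulated observation weights

    def integrate(self, observation, weight=1.0):
        """Fold one new per-voxel observation into the running average."""
        total = self.weights + weight
        self.values = (self.values * self.weights + observation * weight) / total
        self.weights = total

vol = FusionVolume((4, 4, 4))
for _ in range(10):  # an arbitrarily long sequence: state stays fixed-size
    vol.integrate(np.random.rand(4, 4, 4))
```

    Because each update only reads the current averages and weights, depth-maps can be discarded immediately after integration, which is what makes the online formulation scale to long sequences.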

    Designing for Diabetes Decision Support Systems with Fluid Contextual Reasoning

    Type 1 diabetes is a potentially life-threatening chronic condition that requires frequent interactions with diverse data to inform treatment decisions. While mobile technologies such as blood glucose meters have long been an essential part of this process, designing interfaces that explicitly support decision-making remains challenging. Dual-process models are a common approach to understanding such cognitive tasks. However, evidence from the first of two studies we present suggests that in demanding and complex situations, some individuals approach disease management in distinctive ways that do not seem to fit well within existing models. This finding motivated, and helped frame, our second study, a survey (n=192) to investigate these behaviors in more detail. On the basis of the resulting analysis, we posit Fluid Contextual Reasoning to explain how some people with diabetes respond to particular situations, and discuss how an extended framework might help inform the design of user interfaces for diabetes management.

    Transformational Autonomy and Personal Transportation: Synergies and Differences Between Cars and Planes

    Highly automated cars have undergone tremendous investment and progress over the past ten years, and speculation about fully-driverless cars arriving in the foreseeable, or even near, future has become common. If a driverless future is realized, what might be the impact on personal aviation? Would self-piloting airplanes be a relatively simple spin-off, possibly making travel by personal aircraft also commonplace? What if the technology for completely removing human drivers turns out to be further in the future rather than sooner; would such a delay suggest that transformational personal aviation is also somewhere over the horizon, or can transformation be achieved with less than full automation? This paper presents a preliminary exploration of these questions by comparing the operational, functional, and implementation requirements and constraints of cars and small aircraft for on-demand mobility. In general, we predict that the mission management and perception requirements of self-piloting aircraft differ significantly from those of self-driving cars and require the development of aviation-specific technologies. We also predict that the highly-reliable control and system automation technology developed for conditionally and highly automated cars can have a significant beneficial effect on personal aviation, even if full automation is not immediately feasible.

    Robust Localization in 3D Prior Maps for Autonomous Driving.

    In order to navigate autonomously, many self-driving vehicles require precise localization within an a priori known map that is annotated with exact lane locations, traffic signs, and additional metadata that govern the rules of the road. This approach transforms the extremely difficult and unpredictable task of online perception into a more structured localization problem, where exact localization in these maps provides the autonomous agent a wealth of knowledge for safe navigation. This thesis presents several novel localization algorithms that leverage a high-fidelity three-dimensional (3D) prior map and that together provide a robust and reliable framework for vehicle localization. First, we present a generic probabilistic method for localizing an autonomous vehicle equipped with a 3D light detection and ranging (LIDAR) scanner. This proposed algorithm models the world as a mixture of several Gaussians, characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Second, we propose a visual localization strategy that replaces the expensive 3D LIDAR scanners with significantly cheaper, commodity cameras. In doing so, we exploit a graphics processing unit to generate synthetic views of our belief environment, resulting in a localization solution that achieves a similar order of magnitude error rate with a sensor that is several orders of magnitude cheaper. Finally, we propose a visual obstacle detection algorithm that leverages knowledge of our high-fidelity prior maps in its obstacle prediction model. This not only provides obstacle awareness at high rates for vehicle navigation, but also improves our visual localization quality, as we are cognizant of static and non-static regions of the environment. All of these proposed algorithms are demonstrated to be real-time solutions for our self-driving car.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133410/1/rwolcott_1.pd
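    Scoring a scan against a prior map whose cells store Gaussian statistics can be sketched as summing per-cell log-likelihoods. This is a hypothetical simplification of the mixture-of-Gaussians idea described above; the cell statistics and the independence assumption between z-height and reflectivity are illustrative, not the thesis's model.

```python
import math

def gaussian_log_likelihood(x, mean, var):
    """Log-density of a scalar observation under a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# One map cell with assumed z-height and reflectivity statistics (made-up values).
cell = {"z_mean": 0.2, "z_var": 0.01, "refl_mean": 0.6, "refl_var": 0.04}

def cell_score(z, reflectivity, cell):
    """Sum of independent log-likelihoods for z-height and reflectivity."""
    return (gaussian_log_likelihood(z, cell["z_mean"], cell["z_var"])
            + gaussian_log_likelihood(reflectivity, cell["refl_mean"], cell["refl_var"]))

# Localization would score each candidate pose by summing cell_score over
# every scan point that lands in a map cell, keeping the best-scoring pose.
```

    Rasterizing such statistics at several resolutions is what allows coarse-to-fine search over candidate poses without re-deriving the map model at each level.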

    Intelligence Without Reason

    Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.

    A Knowledge-based Approach for Creating Detailed Landscape Representations by Fusing GIS Data Collections with Associated Uncertainty

    Geographic Information Systems (GIS) data for a region comes in different types and is collected from different sources, such as aerial digitized color imagery, elevation data consisting of terrain height at different points in that region, and feature data consisting of geometric information and properties about entities above/below the ground in that region. Merging GIS data and understanding the real-world information present explicitly or implicitly in that data is a challenging task. This is often done manually by domain experts because of their superior capability to efficiently recognize patterns and to combine, reason about, and relate information. When a detailed digital representation of the region is to be created, domain experts are required to make best-guess decisions about each object. For example, a human would create representations of entities by collectively looking at the data layers, noting even elements that are not visible, like a covered overpass or an underwater tunnel of a certain width and length. Such detailed representations are needed for use by processes like visualization or 3D modeling in applications used by the military, simulation, earth sciences, and gaming communities. Many of these applications are increasingly using digitally synthesized visuals and require detailed digital 3D representations to be generated quickly after acquiring the necessary initial data. Our main thesis, and a significant research contribution of this work, is that this task of creating detailed representations can be automated to a very large extent using a methodology which first fuses all Geographic Information System (GIS) data sources available into knowledge base (KB) assertions (instances) representing real-world objects, using a subprocess called GIS2KB. Then, using reasoning, implicit information is inferred to define detailed 3D entity representations using a geometry definition engine called KB2Scene.
Semantic Web technologies are used as the semantic inferencing system and are extended with a data extraction framework. This framework enables the extraction of implicit property information using data and image analysis techniques. The data extraction framework supports extraction of spatial relationship values and attribution of uncertainties to inferred details. Uncertainty is recorded per property and used under Zadeh fuzzy semantics to compute a resulting uncertainty for inferred assertional axioms. This is achieved by another major contribution of our research: a unique extension of the KB ABox Realization service using KB explanation services. Previous semantics-based research in this domain has concentrated more on improving represented details through the addition of artifacts like lights, signage, crosswalks, etc. Previous attempts to handle uncertainty in assertions use a modified reasoner expressivity and calculus. Our work differs in that separating formal knowledge from data processing allows fusion of different heterogeneous data sources which share the same context. Imprecision is modeled through uncertainty on assertions without defining a new expressivity, as long as KB explanation services are available for the expressivity used. We also believe that in our use case this simplifies uncertainty calculations. The uncertainties are then available for user decision at output. We show that the process of creating 3D visuals from GIS data sources can be made more automated, modular, and verifiable, and that the knowledge base instances are available for other applications to use as part of a common knowledge base. We define our method’s components, discuss advantages and limitations, and show sample results for the transportation domain.
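    The Zadeh fuzzy semantics mentioned above combine per-property uncertainties with min/max operators: an inferred assertion that depends on several extracted properties inherits the weakest supporting confidence. The sketch below illustrates only the operators; which properties feed which inference, and the confidence values, are hypothetical.

```python
def fuzzy_and(*memberships):
    """Zadeh conjunction: the joint truth of all premises is their minimum."""
    return min(memberships)

def fuzzy_or(*memberships):
    """Zadeh disjunction: the truth of at least one premise is their maximum."""
    return max(memberships)

# Hypothetical confidences attached to two extracted properties.
width_confidence = 0.9    # certainty of an extracted tunnel width
overlap_confidence = 0.7  # certainty of an inferred spatial relationship

# An axiom inferred from both properties carries the weaker confidence.
axiom_confidence = fuzzy_and(width_confidence, overlap_confidence)
print(axiom_confidence)  # 0.7
```

    Because min/max never manufacture confidence, the resulting uncertainty can be tracked per assertion without changing the reasoner's expressivity, matching the design described in the abstract.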