
    Fuzzy Logic Based Digital Image Edge Detection


    Risk analysis of autonomous vehicle and its safety impact on mixed traffic stream

    In 2016, more than 35,000 people died in traffic crashes, and human error was responsible for 94% of these deaths. Researchers and automobile companies are testing autonomous vehicles in mixed traffic streams to eliminate human error by removing the human driver from behind the steering wheel. However, recent crashes during autonomous vehicle testing indicate the need for a more thorough risk analysis. The objectives of this study were (1) to perform a risk analysis of autonomous vehicles and (2) to evaluate the safety impact of these vehicles in a mixed traffic stream. The overall research was divided into two phases: (1) risk analysis and (2) simulation of autonomous vehicles. The risk analysis was conducted using the fault tree method. Based on failure probabilities of system components, two fault tree models were developed and combined to predict overall system reliability. It was found that an autonomous vehicle system could fail 158 times per one million miles of travel due to either malfunction of vehicular components or disruption from infrastructure components. The second phase was the simulation of autonomous vehicles, in which the change in crash frequency after autonomous vehicle deployment in a mixed traffic stream was assessed. It was found that average travel time could be reduced by about 50%, and 74% of conflicts, i.e., traffic crashes, could be avoided by replacing 90% of the human drivers with autonomous vehicles.
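    To illustrate the fault tree approach the abstract describes, the sketch below shows how component-level failure probabilities propagate through basic OR/AND gates to a top-event probability for the system. The component names and probability values are hypothetical placeholders, not figures from the study, and the result is not meant to reproduce the reported 158 failures per million miles.

```python
# Minimal fault-tree sketch: combining independent component failure
# probabilities with OR/AND gates. All numeric values are illustrative.

def or_gate(probs):
    """Top event occurs if ANY input fails (independent events assumed)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """Top event occurs only if ALL inputs fail (independent events assumed)."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical per-mile failure probabilities for subsystems
vehicle_components = [4e-5, 3e-5, 2e-5]        # e.g. sensing, computing, actuation
infrastructure_components = [5e-5, 2e-5]       # e.g. communication, road markings

p_vehicle = or_gate(vehicle_components)
p_infrastructure = or_gate(infrastructure_components)

# Combined tree: the system fails if either subtree's top event occurs
p_system = or_gate([p_vehicle, p_infrastructure])

print(f"Estimated failures per million miles: {p_system * 1e6:.0f}")
```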

    Rapid Inspection of Pavement Markings Using Mobile Laser Scanning Point Clouds

    An Intelligent Transportation System (ITS) combines information technology, sensors, and communications for more efficient, safer, more secure, and more eco-friendly surface transport. One of the most viable forms of ITS is the driverless car, which currently exists mainly as prototypes. Several automobile manufacturers (e.g. Ford, GM, BMW, Toyota, Tesla, Honda) and non-automobile companies (e.g. Apple, Google, Nokia, Baidu, Huawei) have invested in this field, and wider commercialization of the driverless car is expected between 2025 and 2030. Currently, the key elements of the driverless car are the sensors and a prior 3D map. The sensors mounted on the vehicle are the “eyes” of the driverless car, capturing 3D data of its environment. By comparing its environment with a pre-prepared prior 3D map, the driverless car can distinguish moving targets (e.g. vehicles, pedestrians) from permanent surface features (e.g. buildings, trees, roads, traffic signs) and take relevant actions. With a centimetre-accuracy prior map, the intractable perception problem is transformed into a solvable localization task. The most important technology for generating the prior map is Mobile Laser Scanning (MLS). MLS technology can safely and rapidly acquire highly dense and accurate georeferenced 3D point clouds together with measurements of surface reflectivity. Therefore, 3D point clouds with intensity data not only capture the detailed 3D surface of the road but also contain the pavement marking information that is embedded in the prior map for automatic navigation. Relevant research has focused on pavement marking extraction from MLS data to collect, update, and maintain 3D prior maps. However, the accuracy and efficiency of automatic extraction of pavement markings can be further improved by intensity correction and window-based enhancement. Thus, this study aims to build a robust method for semi-automated extraction of pavement marking information from MLS point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the 3D MLS point clouds are converted into radiometrically corrected and enhanced 2D intensity imagery of the road surface. The pavement markings are then automatically extracted from the intensity imagery using a set of algorithms, including Otsu’s thresholding, neighbour-counting filtering, and region growing. Finally, the extracted pavement markings are classified by their geometric parameters using a manually defined decision tree. Case studies were conducted using MLS datasets acquired with two RIEGL VMX-450 systems in Kingston (Ontario, Canada) and Xiamen (Fujian, China), two significantly different road environments. The results demonstrated that the proposed workflow can achieve 93% completeness, 95% correctness, and a 94% F-score on the Xiamen dataset, and 84%, 93%, and 89%, respectively, on the Kingston dataset.
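    The extraction stage lends itself to a short illustration. The sketch below approximates the described sequence (Otsu’s thresholding, neighbour-counting filtering, and region growing) with NumPy, SciPy, and scikit-image; the function name, parameter values, and the use of connected-component filtering as a stand-in for true region growing are assumptions for illustration, not the authors’ implementation. The input is assumed to be the radiometrically corrected and enhanced 2D intensity image produced by the preprocessing step.

```python
# Hedged sketch of the pavement-marking extraction stage, using off-the-shelf
# tools. Parameter values are illustrative assumptions.

import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def extract_markings(intensity_img, window=5, min_neighbours=8, min_area=50):
    """Return a binary marking mask from a 2D road-surface intensity image."""
    # 1. Otsu's thresholding: retro-reflective markings appear as high intensity.
    t = threshold_otsu(intensity_img)
    binary = intensity_img > t

    # 2. Neighbour-counting filter: drop isolated bright pixels (noise) that
    #    have too few foreground neighbours inside a small window.
    kernel = np.ones((window, window))
    neighbour_count = convolve(binary.astype(int), kernel, mode="constant")
    binary &= neighbour_count >= min_neighbours

    # 3. Stand-in for region growing: keep connected components large enough
    #    to be plausible markings (the paper grows regions; labelling is a proxy).
    labels = label(binary)
    keep = np.zeros_like(binary)
    for region in regionprops(labels):
        if region.area >= min_area:
            keep[labels == region.label] = True
    return keep
```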

    Cognitively-Engineered Multisensor Data Fusion Systems for Military Applications

    The fusion of imagery from multiple sensors is a field of research that has been gaining prominence in the scientific community in recent years. The technical aspects of combining multisensory information have been, and continue to be, studied extensively. However, the cognitive aspects of multisensor data fusion have not received as much attention. Prior research in the field of cognitive engineering has shown that the cognitive aspects of any human-machine system should be taken into consideration in order to achieve systems that are both safe and useful. The goal of this research was to model how humans interpret multisensory data, and to evaluate the value of a cognitively-engineered multisensor data fusion system as an effective, time-saving means of presenting information in high-stress situations. Specifically, this research used principles from cognitive engineering to design, implement, and evaluate a multisensor data fusion system for pilots in high-stress situations. Two preliminary studies were performed, and concurrent protocol analysis was conducted to determine how humans interpret and mentally fuse information from multiple sensors in both low- and high-stress environments. This information was used to develop a model for human processing of information from multiple data sources. This model was then implemented in the development of algorithms for fusing imagery from several disparate sensors (visible and infrared). The model and the system as a whole were empirically evaluated in an experiment with fighter pilots in a simulated combat environment. The results show that the model is an accurate depiction of how humans interpret information from multiple disparate sensors, and that the algorithms show promise for assisting fighter pilots in quicker and more accurate target identification.
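    The abstract does not detail the fusion algorithms themselves, so the following is only a generic pixel-level baseline for co-registered visible and infrared imagery (weighted averaging with a max rule where the infrared response is strong). The function name, the alpha weight, and the 0.8 hot-pixel threshold are illustrative assumptions, not the cognitively-engineered method developed in the study.

```python
# Generic visible/IR fusion baseline for co-registered grayscale frames in [0, 1].
# NOT the study's algorithm; it only illustrates the kind of operation involved.

import numpy as np

def fuse_visible_ir(visible, infrared, alpha=0.6):
    """Fuse two co-registered grayscale images scaled to [0, 1].

    Bright (hot) IR regions are emphasised with a max rule; elsewhere a
    weighted average preserves visible-band detail. `alpha` is an assumed
    weighting, not a value from the study.
    """
    visible = np.clip(visible, 0.0, 1.0)
    infrared = np.clip(infrared, 0.0, 1.0)

    # Weighted average keeps scene context from the visible band.
    blended = alpha * visible + (1.0 - alpha) * infrared

    # Where the IR response is strong (potential targets), prefer the hotter pixel.
    hot = infrared > 0.8
    fused = np.where(hot, np.maximum(blended, infrared), blended)
    return fused
```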

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for extravehicular activity (EVA), increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, the book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension, and speech recognition systems that will be included in the next generation of production systems.

    NASA Automated Rendezvous and Capture Review. A compilation of the abstracts

    This document presents a compilation of abstracts of papers solicited for presentation at the NASA Automated Rendezvous and Capture Review held in Williamsburg, VA, on November 19-21, 1991. Due to limitations on time and other considerations, not all abstracts could be presented during the review. The organizing committee determined, however, that all abstracts merited availability to all participants and represented data and information reflecting the state of the art of this technology, which should be captured in one document for future use and reference. The organizing committee appreciates the interest shown in the review and the response by the authors in submitting these abstracts.