312 research outputs found

    Data-fused urban mobility applications for smart cities

    Though vehicles are becoming more advanced with added safety technology, we must still rely on our own instincts and senses to make decisions. This thesis presents two applications that can be used by drivers, passengers, or pedestrians to widen their range of visibility during commutes. The first application uses the concept of see-through technology to assist the driver with a real-time augmented view of a traffic scene that may in reality be blocked by the vehicle in front. The second is a mobile application that gathers the user's location information from two sources: absolute location from a Global Positioning System (GPS) enabled device, and a relative estimate that merges computer vision, object detection, and mono-vision depth calculation, placing each instance of an identified object on the mapping application. Mapping items such as stores, accidents, and traffic conditions is now very common, but this application takes into account the locations of individual users to give a holistic view of people instead of places.
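    The mono-vision depth step is not spelled out in the abstract, but the standard pinhole-camera relation is one plausible reading: distance = focal length * real object height / pixel height, with the result used to offset the user's GPS fix before plotting. A minimal sketch, assuming hypothetical values for the focal length, the object's real-world height, and the GPS coordinates (none of which come from the thesis):

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius

    def mono_depth_m(focal_px, real_height_m, bbox_height_px):
        """Pinhole-camera depth estimate: distance = f * H / h."""
        return focal_px * real_height_m / bbox_height_px

    def offset_position(lat, lon, distance_m, bearing_deg):
        """Shift a GPS fix by distance_m along bearing_deg (flat-Earth approximation)."""
        d_lat = distance_m * math.cos(math.radians(bearing_deg)) / EARTH_RADIUS_M
        d_lon = (distance_m * math.sin(math.radians(bearing_deg))
                 / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
        return lat + math.degrees(d_lat), lon + math.degrees(d_lon)

    # Hypothetical example: a detected pedestrian (assumed 1.7 m tall) whose bounding
    # box is 120 px tall, seen through a lens with a 1000 px focal length.
    dist = mono_depth_m(focal_px=1000, real_height_m=1.7, bbox_height_px=120)
    print(offset_position(39.77, -86.16, dist, bearing_deg=45.0))  # position to map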

    A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

    In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the challenges inherent in this combination. The study examines the role AI plays in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, the analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance, and disaster management. While envisioning possibilities, it also considers ethical questions, safety concerns, the regulatory frameworks that remain to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research in this field, the review provides an understanding of the evolving landscape of AI-powered UAVs and sets the stage for further exploration in this transformative domain.

    Feature-Guided Black-Box Safety Testing of Deep Neural Networks

    Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples necessitate some knowledge (architecture, parameters, etc.) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, where high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, where the first player's objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions that provide safety guarantees that no adversarial examples exist. Using Monte Carlo tree search, we gradually explore the game state space to search for adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars. Comment: 35 pages, 5 tables, 23 figures.
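    The saliency-distribution step lends itself to a short illustration. The sketch below, assuming OpenCV's SIFT implementation and an arbitrary perturbation size eps, normalises keypoint responses into a probability distribution and applies one random pixel manipulation, i.e., a single move of the first player; the paper's full Monte Carlo tree search over the game is not reproduced here:

    import cv2
    import numpy as np

    def saliency_distribution(gray):
        """Turn SIFT keypoint responses into a probability distribution over pixels."""
        sift = cv2.SIFT_create()
        keypoints = sift.detect(gray, None)
        pts = np.array([kp.pt for kp in keypoints])          # (x, y) locations
        resp = np.array([kp.response for kp in keypoints])   # feature strength
        return pts, resp / resp.sum()                        # normalise to probabilities

    def perturb(image, pts, probs, eps=8, rng=None):
        """Sample one salient pixel and nudge its intensity: one move in the game."""
        rng = rng or np.random.default_rng()
        x, y = pts[rng.choice(len(pts), p=probs)].astype(int)
        out = image.copy()
        out[y, x] = np.clip(out[y, x].astype(int) + rng.choice((-eps, eps)), 0, 255)
        return out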

    Development of Automated Incident Detection System Using Existing ATMS CCTV

    The Indiana Department of Transportation (INDOT) has over 300 digital cameras along highways in populated areas of Indiana. These cameras are used to monitor traffic conditions around the clock, all year round. Currently, the videos from these cameras are observed by human operators. The main objective of this research is to develop an automatic real-time system that monitors traffic conditions using the INDOT CCTV video feeds, carried out by a collaborative research team of the Transportation Active Safety Institute (TASI) at Indiana University-Purdue University Indianapolis (IUPUI) and the Traffic Management Center (TMC) of INDOT. In this project, the research team developed the system architecture based on a detailed system requirement analysis and implemented a first prototype of the major system components. Specifically, the team has accomplished the following: an AI-based deep learning algorithm provided in YOLOv3 was selected for vehicle detection, as it generates the best results for daytime videos; the tracking information of moving vehicles is used to derive the locations of roads and lanes; a database was designed as the central place to gather and distribute the information generated from all camera videos and to provide all information needed for traffic incident detection; and a web-based Graphical User Interface (GUI) was developed. Automatic traffic incident detection will be implemented once the traffic flow information can be derived accurately. The research team is currently integrating the prototypes of all components into a complete system prototype.
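    The report does not say how lane locations are derived from the tracking information; one plausible sketch is a 1-D k-means over the lateral positions of vehicle centroids accumulated across frames (the lane count and pixel coordinates below are hypothetical, and the centroids are simulated rather than taken from a real detector):

    import numpy as np

    def estimate_lane_centres(centroids_x, n_lanes=3, iters=20):
        """Tiny 1-D k-means: cluster vehicle x-centroids into lane centre positions."""
        centres = np.quantile(centroids_x, np.linspace(0.1, 0.9, n_lanes))  # initial guesses
        for _ in range(iters):
            labels = np.argmin(np.abs(centroids_x[:, None] - centres[None, :]), axis=1)
            for k in range(n_lanes):
                if np.any(labels == k):
                    centres[k] = centroids_x[labels == k].mean()
        return np.sort(centres)

    # Stand-in for centroids gathered from YOLOv3 detections over many frames.
    rng = np.random.default_rng(0)
    xs = np.concatenate([rng.normal(m, 12, 300) for m in (220, 380, 540)])
    print(estimate_lane_centres(xs))  # approximately [220, 380, 540]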

    Real-time processing of high-resolution video and 3D model-based tracking for remote towers

    High-quality video data is a core component in emerging remote tower operations, as it inherently contains a huge amount of information on which an air traffic controller can base decisions. Various digital technologies also have the potential to exploit this data to bring enhancements, including tracking ground movements by relating events in the video view to their positions in 3D space. The total resolution of remote tower setups with multiple cameras often exceeds 25 million RGB pixels captured at 30 frames per second or more, so it is a challenge to process all the data efficiently enough to provide relevant real-time enhancements to the controller. In this paper we discuss how a number of improvements can be implemented efficiently on a single workstation by decoupling processes and utilizing hardware for parallel computing. We also highlight how decoupling the processes in this way increases the resilience of the software solution, in the sense that failure of a single component does not impair the function of the other components.
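    As a rough illustration of the decoupling idea (not the paper's actual architecture, which runs GPU-accelerated components on one workstation), separate operating-system processes connected by queues keep a failure in one stage from stalling the others:

    import multiprocessing as mp
    import queue

    def capture(frames_q):
        """Producer stage: pushes (stand-in) frames from its own process."""
        for i in range(20):
            frames_q.put(f"frame-{i}")

    def tracker(frames_q, results_q):
        """Consumer stage: a crash here cannot take the capture process down."""
        while True:
            try:
                frame = frames_q.get(timeout=2.0)
            except queue.Empty:
                break  # no more frames arriving
            results_q.put(f"tracked({frame})")

    if __name__ == "__main__":
        frames, results = mp.Queue(maxsize=8), mp.Queue()
        stages = [mp.Process(target=capture, args=(frames,)),
                  mp.Process(target=tracker, args=(frames, results))]
        for p in stages:
            p.start()
        for p in stages:
            p.join()
        while not results.empty():
            print(results.get())  # a real system would forward these to the display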

    SOTIF Entropy: Online SOTIF Risk Quantification and Mitigation for Autonomous Driving

    Autonomous driving confronts great challenges in complex traffic scenarios, where the risk of Safety of the Intended Functionality (SOTIF) can be triggered by the dynamic operational environment and system insufficiencies. The SOTIF risk is reflected not only intuitively in the collision risk with objects outside the autonomous vehicles (AVs), but also inherently in the performance-limitation risk of the implemented algorithms themselves. How to minimize the SOTIF risk for autonomous driving is currently a critical, difficult, and unresolved issue. Therefore, this paper proposes the "Self-Surveillance and Self-Adaption System" as a systematic approach to minimizing the SOTIF risk online, aiming to provide a systematic solution for monitoring, quantifying, and mitigating inherent and external risks. The core of this system is the risk monitoring of the artificial intelligence algorithms implemented within the AV. As a demonstration of the Self-Surveillance and Self-Adaption System, the risk monitoring of the perception algorithm, i.e., YOLOv5, is highlighted. Moreover, the inherent perception algorithm risk and the external collision risk are jointly quantified via SOTIF entropy, which is then propagated downstream to the decision-making module and mitigated. Finally, several challenging scenarios are demonstrated, and Hardware-in-the-Loop experiments are conducted to verify the efficiency and effectiveness of the system. The results demonstrate that the Self-Surveillance and Self-Adaption System enables dependable online monitoring, quantification, and mitigation of SOTIF risk in real-time critical traffic environments. Comment: 16 pages, 10 figures, 2 tables, submitted to IEEE TIT.
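    The paper's SOTIF entropy formulation is not reproduced in the abstract; as a loose illustration of the underlying idea of entropy-based uncertainty quantification, the Shannon entropy of a detection's class-score distribution rises as the perception module becomes less certain (the YOLOv5-style score vectors below are invented):

    import numpy as np

    def detection_entropy(class_probs):
        """Shannon entropy of a detection's class distribution, in bits.
        Higher entropy = less certain perception = higher inherent risk."""
        p = class_probs[class_probs > 0]
        return float(-(p * np.log2(p)).sum())

    # Hypothetical outputs: a confident detection vs. an ambiguous one.
    confident = np.array([0.92, 0.05, 0.03])
    ambiguous = np.array([0.40, 0.35, 0.25])
    print(detection_entropy(confident))  # ~0.48 bits
    print(detection_entropy(ambiguous))  # ~1.56 bits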

    Accuracy vs. Energy: An Assessment of Bee Object Inference in Videos From On-Hive Video Loggers With YOLOv3, YOLOv4-Tiny, and YOLOv7-Tiny

    A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive's entrance. Since traffic at the hive's entrance is a contributing factor to the hive's productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives, and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos, and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 ranked first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints.
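    The energy efficacy metric is described only as a ratio of performance units per energy unit; the exact units are not given in the abstract, so the numbers below are purely illustrative of the metric's shape:

    def energy_efficacy(performance, energy_wh):
        """Efficacy as performance units delivered per watt-hour consumed."""
        return performance / energy_wh

    # Hypothetical numbers, not the paper's measurements: a model scoring 0.80 mAP
    # on 20 Wh of energy is less efficacious than a lighter model scoring 0.72 mAP
    # on 5 Wh, even though the former is more accurate.
    print(energy_efficacy(0.80, 20.0))  # 0.040 per Wh
    print(energy_efficacy(0.72, 5.0))   # 0.144 per Wh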

    Deep learning based 3D object detection for automotive radar and camera fusion

    Perception in the domain of autonomous vehicles is a key discipline to achieve the automation of Intelligent Transport Systems. Therefore, this Master's Thesis aims to develop a sensor fusion technique for RADAR and camera to create an enriched representation of the environment for 3D Object Detection using Deep Learning algorithms. To this end, the idea of PointPainting [1] is used as a starting point and is adapted to a growing sensor, the 3+1D RADAR, in which the radar point cloud is aggregated with the semantic information from the camera.
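    A minimal sketch of the PointPainting-style fusion step, assuming radar points already transformed into the camera frame, a known 3x3 intrinsics matrix K, and per-pixel class scores from a camera segmentation network (the thesis's 3+1D RADAR would additionally carry a Doppler velocity per point, omitted here):

    import numpy as np

    def paint_points(points_xyz, seg_scores, K):
        """Append per-pixel semantic scores to 3D points (PointPainting-style).
        points_xyz: (N, 3) points in the camera frame; K: 3x3 camera intrinsics;
        seg_scores: (H, W, C) per-class scores from a segmentation network."""
        front = points_xyz[:, 2] > 0                 # keep points in front of the camera
        pts = points_xyz[front]
        uvw = (K @ pts.T).T                          # project into the image plane
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # pixel coordinates
        h, w, _ = seg_scores.shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        painted = np.concatenate(
            [pts[valid], seg_scores[uv[valid, 1], uv[valid, 0]]], axis=1)
        return painted  # (M, 3 + C): xyz plus the sampled class scores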

    Traffic Light and Back-light Recognition using Deep Learning and Image Processing with Raspberry Pi

    Traffic light detection and back-light recognition are essential research topics in the area of intelligent vehicles because they help avoid vehicle collisions and provide driver safety. Improved detection and semantic clarity may aid self-driving cars in preventing traffic accidents at crowded junctions, thus improving overall driving safety. Complex traffic situations, on the other hand, make it more difficult for algorithms to identify and recognize objects. The latest state-of-the-art algorithms based on Deep Learning and Computer Vision successfully address the majority of real-time problems for autonomous driving, such as detecting traffic signals, traffic signs, and pedestrians. We propose a combination of deep learning and image processing methods, using the MobileNetSSD (deep neural network architecture) model with transfer learning, for real-time detection and identification of traffic lights and back-lights. The inference model is obtained from frameworks such as TensorFlow and TensorFlow Lite and is trained on the COCO dataset. This study investigates the feasibility of executing object detection on the Raspberry Pi 3B+, a widely used embedded computing board. The algorithm's performance is measured in terms of frames per second (FPS), accuracy, and inference time.
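    Measuring inference time and FPS on the Raspberry Pi comes down to a short timing loop around the interpreter; the sketch below assumes the tflite-runtime package and a hypothetical MobileNetSSD export named mobilenet_ssd.tflite:

    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    # Hypothetical model path; any MobileNetSSD .tflite export would do.
    interpreter = Interpreter(model_path="mobilenet_ssd.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], frame)
        start = time.perf_counter()
        interpreter.invoke()
        times.append(time.perf_counter() - start)

    mean_t = sum(times) / len(times)
    print(f"inference: {mean_t * 1000:.1f} ms  ->  {1.0 / mean_t:.1f} FPS")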