
    Ambulance detection for smart traffic light applications with fuzzy controller

    In the development of intelligent cities, the automation of vehicular mobility is one of the strongest research areas, in which intelligent traffic lights stand out. In this field it is essential to prioritize emergency vehicles that can help save lives, where every second counts in the transfer of a patient or injured person. This paper presents a two-stage artificial intelligence algorithm: the first stage recognizes emergency vehicles with a ResNet-50, and the second is a fuzzy inference system for the timing control of a traffic light; together they form an intelligent traffic light. The system is oriented toward traffic-light flow control with automatic preemption when emergency vehicles, specifically ambulances, are detected. The training parameters of the network, which achieves 100% accuracy with confidence levels between 65% under vehicle occlusion and 99% in direct view, are presented. The traffic-light cycles can extend the green time by almost 50% in favor of the road that must be given priority, relative to not using the fuzzy inference system.
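The timing-control stage described above can be illustrated with a minimal fuzzy controller. This is a sketch under assumptions, not the paper's actual design: two illustrative inputs (detection confidence and queue density), triangular memberships, two rules, and Sugeno-style weighted-average defuzzification producing a green-time multiplier up to roughly 1.5 (the ~50% extension the abstract reports).

```python
# Minimal sketch of a fuzzy green-time controller. Membership
# shapes, rule weights, and the two inputs are illustrative
# assumptions, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(confidence, density):
    """Return a green-time multiplier in [1.0, 1.5]."""
    conf_high = tri(confidence, 0.5, 1.0, 1.5)   # "ambulance likely"
    conf_low  = tri(confidence, -0.5, 0.0, 0.5)  # "no ambulance"
    dens_high = tri(density, 0.4, 1.0, 1.6)      # "heavy queue"

    # Rule 1: ambulance likely AND heavy queue -> extend green ~50%
    r1 = min(conf_high, dens_high)
    # Rule 2: no ambulance -> keep the normal cycle
    r2 = conf_low

    # Sugeno-style weighted-average defuzzification
    num = r1 * 1.5 + r2 * 1.0
    den = r1 + r2
    return num / den if den > 0 else 1.0
```

With a confident ambulance detection and a heavy queue, the controller converges to the full 1.5x extension; with no detection it leaves the cycle unchanged.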

    Using Prior Knowledge for Verification and Elimination of Stationary and Variable Objects in Real-time Images

    With the evolving technologies of the autonomous vehicle industry, it has become possible for automobile passengers to sit relaxed instead of driving the car. Technologies such as object detection, object identification, and image segmentation enable an autonomous car to identify and detect objects on the road in order to drive safely. While an autonomous car drives by itself, the objects surrounding it can be dynamic (e.g., cars and pedestrians), stationary (e.g., buildings and benches), or variable (e.g., trees), depending on whether the location or shape of the object changes. Unlike existing image-based approaches to detecting and recognizing objects in the scene, this research employs a 3D virtual world to verify and eliminate stationary and variable objects, allowing the autonomous car to focus on dynamic objects that may endanger its driving. The methodology takes advantage of prior knowledge of stationary and variable objects present in a virtual city and verifies their existence in a real-time scene by matching keypoints between the virtual and real objects. When a stationary or variable object does not exist in the virtual world because of incomplete pre-existing information, the method falls back on machine learning for object detection. Verified objects are then removed from the real-time image with a combined algorithm using contour detection and class activation maps (CAM), which enhances the efficiency and accuracy of recognizing moving objects.
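The keypoint-verification step can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test. The descriptors below are random stand-ins for illustration; a real pipeline would extract ORB or SIFT descriptors from the virtual render and the live frame, and the `min_matches` threshold is an assumed parameter.

```python
import numpy as np

def match_keypoints(desc_virtual, desc_real, ratio=0.75):
    """Return index pairs (i, j) whose nearest-neighbour match passes the ratio test."""
    matches = []
    for i, d in enumerate(desc_virtual):
        dists = np.linalg.norm(desc_real - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

def object_verified(matches, min_matches=10):
    """Treat the virtual object as present in the scene if enough keypoints agree."""
    return len(matches) >= min_matches
```

A verified object can then be handed to the removal stage; an unverified one is left for the machine-learning detector.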

    The Use of Deep Learning Methods to Assist in the Classification of Seismically Vulnerable Dwellings

    Exciting research is being conducted using Google's Street View imagery. Researchers can access training data that allows CNN training for topics ranging from assessing neighborhood environments to estimating the age of a building. However, due to the uncontrolled nature of imagery available via Google's Street View API, data collection can be lengthy and tedious. In an effort to help researchers gather address-specific dwelling images efficiently, we developed a novel way of automatically performing this task. It was accomplished by exploiting Google's publicly available platform with a combination of three separate network types and post-processing techniques. Our uniquely developed non-maximum suppression (NMS) strategy helped achieve 99.4% valid, address-specific dwelling images. We explored the efficacy of utilizing our newly developed mechanism to train a CNN on Unreinforced Masonry (URM) buildings. We made this selection because building collapses during an earthquake account for the majority of deaths in a disaster of this kind. Automated approaches for identifying seismically vulnerable buildings using street-level imagery have met with limited success to this point, with no promising results presented in the literature. We have achieved the best accuracy reported to date, 83.63%, in identifying URM, finished URM, and non-URM buildings using manually curated images. We performed an ablation study to establish synergistic parameters on ResNeXt-101-FixRes. We also present a visualization of the first layer of the network to ascertain and demonstrate how a deep learning network can distinguish between various types of URM buildings. Lastly, we establish the value of our automatically generated data set for these building types by achieving an accuracy of 84.91%. This is higher than the 83.63% accuracy achieved using our hand-curated data set.
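For readers unfamiliar with non-maximum suppression, the standard IoU-based form is sketched below. This is the conventional algorithm, not the paper's uniquely developed NMS strategy, whose details the abstract does not give.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Standard IoU-based NMS; boxes are (x1, y1, x2, y2) arrays."""
    order = np.argsort(scores)[::-1]  # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the kept box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much
        order = rest[iou <= iou_thresh]
    return keep
```

In a dwelling-image pipeline, suppressing overlapping detections of the same facade is what lets a single, address-specific crop be selected per building.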

    Detecting and Evaluating Cracks on Aging Concrete Members with Deep Convolutional Neural Networks

    Cracks in concrete structures are evaluated through time-consuming and subjective manual inspection. The locations of cracks are often recorded in an inspection report, where some cracks are measured. Although measurements or locations may not be necessary for all cracks observed in concrete members, quantitative data gathered autonomously, allowing measurements to be tracked across spatial and temporal scales, can provide useful information not yet captured in the manual inspection process. This thesis aims to construct an image-based crack detection and evaluation pipeline that can assist health monitoring of aging concrete structures by providing crack locations and measured crack properties for an entire structural member. Over 16,000 images of an aging concrete bridge deck were collected from cameras attached to an unmanned aerial vehicle, machine vision cameras attached to a ground vehicle, and other literature. A Mask Region-based Convolutional Neural Network (Mask R-CNN) was trained on 256-by-256-pixel patches of the collected images, using three distinct training strategies, to detect and segment concrete cracks on bridge decks. The resulting crack masks were translated into binary data (crack or non-crack pixels), skeletons of the masks were created, and the Euclidean distance from the center of the skeleton to the edge of the mask was measured. This allowed calculation of the relative width, length, and orientation of each detected crack. Relative crack properties were transformed into real-world units using the ground sampling distance of the host image. Image patches were then compiled to construct a crack map of the entire structural member. A case study was conducted on the deck and pier of an aging concrete bridge to test the robustness of the proposed data pipeline. 
The study found that the model was able to successfully detect cracks with an average width of 0.020 inches and to make accurate measurements of crack widths larger than 0.080 inches. To improve the measurements for smaller crack widths, the ground sampling distance needs to be on the scale of the crack width of interest. The image-based data pipeline developed in this study demonstrates potential for application in autonomous inspection of concrete members. In addition, the data pipeline can be used as a reference framework, providing an example of how computer-vision-based data analytics can yield useful information for structural inspections of aging concrete members. Advisor: Chungwook Si
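The width-measurement step can be illustrated with a simplified one-dimensional version: for a roughly horizontal crack mask, the per-column pixel count approximates twice the skeleton-to-edge distance, and the ground sampling distance (GSD) converts pixels to real-world units. The thesis uses a true skeleton and Euclidean distance transform; the mask and GSD value below are illustrative assumptions.

```python
import numpy as np

def crack_widths_inches(mask, gsd_in_per_px):
    """Per-column crack width, in inches, for a binary mask (rows x cols).

    Simplified stand-in for skeleton + Euclidean distance transform:
    each column's crack-pixel count is taken as the local width.
    """
    widths_px = mask.sum(axis=0)          # crack pixels per column
    widths_px = widths_px[widths_px > 0]  # ignore columns with no crack
    return widths_px * gsd_in_per_px
```

For example, a 5-pixel-wide band at an assumed GSD of 0.004 in/px measures 0.020 inches wide, which is why the GSD must shrink toward the crack width of interest before finer cracks can be resolved.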