56 research outputs found

    Automating Intersection Marking Data Collection and Condition Assessment at Scale With An Artificial Intelligence-Powered System

    Intersection markings play a vital role in providing road users with guidance and information. The condition of intersection markings gradually degrades due to vehicular traffic, rain, and snowplowing. Degraded markings can confuse drivers, leading to an increased risk of traffic crashes. Timely acquisition of high-quality information on intersection markings lays a foundation for informed decisions in safety management and maintenance prioritization. However, current labor-intensive and high-cost data collection practices make it very challenging to gather intersection data at a large scale. This paper develops an automated system that intelligently detects intersection markings and assesses their degradation conditions using existing roadway geographic information system (GIS) data and aerial images. The system harnesses emerging artificial intelligence (AI) techniques such as deep learning and multi-task learning to enhance its robustness, accuracy, and computational efficiency. AI models were developed to detect lane-use arrows (85% mean average precision) and crosswalks (89% mean average precision) and to assess the degradation conditions of markings (91% overall accuracy for lane-use arrows and 83% for crosswalks). The data acquisition and computer vision modules were integrated, and a graphical user interface (GUI) was built for the system. The proposed system can fully automate marking data collection and condition assessment at a large scale with almost zero cost and short processing time. It has great potential to propel urban science forward by providing fundamental urban infrastructure data for analysis and decision-making across critical areas such as data-driven safety management and prioritization of infrastructure maintenance.
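A brief sketch of how detection metrics like the mean average precision figures reported above are typically grounded: predicted boxes are matched to ground-truth boxes by intersection-over-union (IoU). The boxes, threshold, and function names below are illustrative assumptions, not the paper's actual code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_at_iou(predictions, ground_truth, threshold=0.5):
    """Fraction of predicted boxes that match some ground-truth box."""
    matched = sum(
        1 for p in predictions
        if any(iou(p, g) >= threshold for g in ground_truth)
    )
    return matched / len(predictions) if predictions else 0.0

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]
truth = [(12, 12, 52, 52)]
print(precision_at_iou(preds, truth))  # one of the two predictions matches
```

Full mean average precision additionally averages precision over recall levels and confidence thresholds; the matching step shown here is the core of it.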

    The impact of collarette region-based convolutional neural network for iris recognition

    Iris recognition is a biometric technique that reliably and quickly identifies a person based on the unique biological characteristics of the iris. The iris has an exceptional structure and provides very rich feature spaces, including freckles, stripes, coronas, and the zigzag collarette area; this wealth of features underlies the growing interest in it for biometric recognition. This paper proposes an improved iris recognition method for person identification based on Convolutional Neural Networks (CNNs), with an improved recognition rate obtained from the zigzag collarette area, the region surrounding the pupil. The iris recognition rate using the full circle of the zigzag collarette is compared with the rate obtained using only the lower semicircle of the zigzag collarette. Collarette classification is based on the Alex-Net model to learn this feature; pairing the collarette region with a CNN allows a less noisy and more targeted characterization, as well as automatic extraction of the lower semicircle of the collarette region. Finally, an SVM model is trained for classification using grayscale eye images taken from the CASIA-Iris-V4 database. The experimental results show that the proposed contribution is the most accurate: the CNN can effectively extract image features with higher classification accuracy, and the new method, which uses the lower semicircle of the collarette region, achieves the highest recognition accuracy compared with earlier methods that use the full circle of the collarette region.
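The geometric idea behind the contribution can be sketched simply: the collarette is an annulus around the pupil, and the method keeps only its lower half. The sketch below builds such a mask with purely geometric rules; the center, radii, and function name are illustrative assumptions, since in the paper the region is actually learned with Alex-Net rather than fixed geometrically.

```python
import math

def lower_collarette_mask(width, height, center, pupil_r, collarette_r):
    """Binary mask (list of rows) that is 1 inside the lower half of
    the annulus between pupil_r and collarette_r around the pupil."""
    cx, cy = center
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            r = math.hypot(x - cx, y - cy)
            in_annulus = pupil_r <= r <= collarette_r
            lower_half = y >= cy  # image y grows downward
            row.append(1 if (in_annulus and lower_half) else 0)
        mask.append(row)
    return mask

mask = lower_collarette_mask(64, 64, center=(32, 32), pupil_r=8, collarette_r=20)
# No pixel above the pupil center is kept:
assert all(v == 0 for row in mask[:32] for v in row)
```

Masking the eye image with this region before feature extraction is one way to restrict the classifier to the lower semicircle, which is less often occluded by eyelids and eyelashes.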

    Mobile Phone-Based Artificial Intelligence Development for Maintenance Asset Management

    Transportation asset management needs timely information collection to inform relevant maintenance practices (e.g., resource planning). Traditional data collection methods in transportation asset management require either manual operation or specialized equipment (e.g., Light Detection and Ranging (LiDAR)), which can be labor-intensive or costly to implement. With the advancement of computing techniques, artificial intelligence (AI) is now capable of automatically detecting objects in images and videos. In this project, we developed accurate and efficient AI algorithms to automatically collect and analyze transportation asset status, including identification of pavement marking issues, traffic signs, litter and trash, and steel guardrails and concrete barriers. The AI algorithms were developed based on the You Only Look Once (YOLO) framework, built on convolutional neural networks as the deep learning backbone. Specifically, a smartphone was mounted on the vehicle's front windshield to collect videos of transportation assets on both highways and local roads. These videos were then converted and processed into labeled images to serve as training and test datasets for AI algorithm training. AI models were then developed for automatic detection of the transportation assets listed above. The results demonstrate that the developed AI models achieve good performance in identifying targeted objects, with over 85% accuracy. The developed AI package is expected to enable timely and efficient information collection of transportation assets and hence improve road safety.
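One concrete step in turning video frames into a YOLO training set is writing the label files: YOLO expects one text line per object, with a class id followed by the box center and size normalized to [0, 1]. The class id, frame size, and box below are illustrative assumptions, not the project's actual dataset.

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box to a YOLO label line."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w      # normalized box center x
    cy = (y1 + y2) / 2 / img_h      # normalized box center y
    w = (x2 - x1) / img_w           # normalized width
    h = (y2 - y1) / img_h           # normalized height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# e.g. a traffic-sign box in a 1920x1080 video frame (class 1 assumed to mean "sign"):
print(to_yolo_label(1, (960, 270, 1160, 470), 1920, 1080))
```

One such line is written per object into a `.txt` file that shares its name with the corresponding image, which is the layout YOLO training tools conventionally read.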

    Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges

    The automatic traffic sign detection and recognition (TSDR) system is an important research topic in the development of advanced driver assistance systems (ADAS). Investigations on vision-based TSDR have received substantial interest in the research community, mainly motivated by three tasks: detection, tracking, and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey on traffic sign detection, tracking, and classification. The details of the algorithms and methods, along with their specifications for detection, tracking, and classification, are investigated and summarized in tables with the corresponding key references. A comparative study of each section is provided to evaluate TSDR data, performance metrics, and their availability. Current issues and challenges of the existing technologies are illustrated, with brief suggestions and a discussion of the future progress of driver assistance system research. This review will hopefully encourage increased effort toward the development of future vision-based TSDR systems.

    Novel Aggregated Solutions for Robust Visual Tracking in Traffic Scenarios

    This work proposes novel approaches for object tracking in challenging scenarios such as severe occlusion, deteriorated vision, and long-range multi-object re-identification. All of the proposed solutions are based only on image sequences captured by a monocular camera and do not require additional sensors. Experiments on standard benchmarks demonstrate state-of-the-art performance. Because all the presented approaches are designed for efficiency, they can run at real-time speed.

    3D Information Technologies in Cultural Heritage Preservation and Popularisation

    This Special Issue of the journal Applied Sciences presents recent advances and developments in the use of digital 3D technologies to protect and preserve cultural heritage. While most of the articles focus on aspects of 3D scanning, modeling, and VR presentation of cultural heritage objects, from buildings to small artifacts and clothing, part of the issue is devoted to the use of 3D sound in the cultural heritage field.

    Design of autonomous sustainable unmanned aerial vehicle - A novel approach to its dynamic wireless power transfer

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Electric unmanned aerial vehicles (UAVs) are widely used in civilian duties such as security, surveillance, and disaster relief, and their use has increased dramatically over the past years in areas such as marine, mountain, and wild environments. Many electric UAVs now offer fast onboard computation, and autonomous flight has become a reality through the fusion of many sensors, such as camera tracking sensors, obstacle avoidance sensors, and radar. One main problem, however, remains unsolved: the power required for continuous autonomous operation. Batteries typically provide only 20 to 30 minutes of flight time, so these systems are not reliable for long-term civilian operation, because the craft must land every time the batteries need to be recharged or replaced. Larger batteries also add load to the UAV, which is equally unreliable. To eliminate these obstacles, a wireless recharging power station on the ground can transmit power to small UAVs wirelessly for long-term operation. A camera attached to the drone detects, and lets it hover above, the Wireless Power Transfer (WPT) device, which comprises receiving and transmitting stations; deep learning and sensor fusion techniques can be used for more reliable flight operations. This thesis explores the use of dynamic wireless power transfer with a novel rotating WPT charging technique to supply energy to the UAV, improving range, endurance, and average speed by giving it extra hours in the air. The resulting hypothesis has broad application beyond UAVs.
Autonomous charging of the drone is achieved mainly by using deep neural vision to detect a rotating WPT receiver, connected to a mains power outlet, that serves as a recharging platform. The purpose of the thesis is to provide an alternative to traditional self-charging systems, which rely purely on static WPT methods and require a very small distance between the vehicle and the receiver. When the UAV camera detects the WPT receiving station, the drone aligns and hovers using onboard sensors for the best power transfer efficiency. This strategy builds on traditional automatic drone landing techniques, but because the target is rotating continuously, it requires smart approaches such as deep learning and sensor fusion. A simulation environment was created and tested with the Robot Operating System (ROS) on Linux using a model of the custom-made drone. Charging experiments confirmed that the intelligent dynamic wireless power transfer (DWPT) method works successfully while the drone is in flight.
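The align-and-hover step described above can be sketched as a simple proportional controller: once the camera detects the receiver in the image, velocity commands push the drone so the detection moves toward the image center. The gain, frame size, and function names are illustrative assumptions only, not the thesis implementation, which also fuses other onboard sensors.

```python
def centering_command(detection_center, frame_size, gain=0.002):
    """Return (vx, vy) velocity commands that drive the detected
    receiver toward the image center (proportional control)."""
    fx, fy = frame_size
    dx = detection_center[0] - fx / 2   # pixel error, x
    dy = detection_center[1] - fy / 2   # pixel error, y
    return (gain * dx, gain * dy)

# Receiver detected right of center in a 640x480 frame:
vx, vy = centering_command((420, 240), (640, 480))
print(vx, vy)  # positive vx: move right; zero vy: no vertical correction
```

A rotating target additionally requires the detector to fire reliably at any receiver orientation, which is where the deep-learning detection mentioned above comes in; the controller itself only needs the detected center.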