4 research outputs found
Detection and Localization of Root Damages in Underground Sewer Systems using Deep Neural Networks and Computer Vision Techniques
Indiana University-Purdue University Indianapolis (IUPUI)
Maintaining a healthy sewer infrastructure is a major challenge due to root damage from nearby plants, whose roots grow through pipe cracks or loose joints and may cause serious blockages and pipe collapse. Traditional inspections based on video surveillance to identify and localize root damage within such complex sewer networks are inefficient, laborious, and error-prone. This study therefore aims to develop a robust and efficient approach that automatically detects root damage and localizes its circumferential and longitudinal position in CCTV inspection videos using deep neural networks and computer vision techniques. From twenty inspection videos collected from various sources, keyframes were extracted from each video based on inter-frame differences in the LUV color space, with selected local maxima of the difference signal. To recognize distance information from video subtitles, OCR models such as Tesseract and CRNN-CTC were implemented, achieving roughly 90% recognition accuracy. In addition, a pre-trained segmentation model was applied to detect root damage, but it produced many false positive predictions. By applying a well-tuned YOLOv3 model to the detection of pipe joints, leveraging a Convex Hull Overlap (CHO) feature, we achieved a 20% improvement in the reliability and accuracy of damage identification. Moreover, an end-to-end deep learning pipeline incorporating the Triangle Similarity Theorem (TST) was designed to predict the longitudinal position of each identified root damage, with a prediction error of less than 1.0 foot.
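The keyframe-selection step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the LUV conversion is assumed to have already happened, the difference metric (mean absolute inter-frame difference) and the strict local-maximum criterion are assumptions, and `min_prominence` is a hypothetical threshold parameter.

```python
import numpy as np

def keyframe_indices(frames, min_prominence=0.0):
    """Select keyframes as local maxima of the inter-frame difference.

    frames: sequence of H x W x 3 arrays, assumed already converted to
    the LUV color space (the conversion itself is omitted here).
    Returns the indices of frames whose mean absolute difference to the
    previous frame is a strict local maximum of the difference signal.
    """
    # diffs[j] is the difference between frames[j+1] and frames[j]
    diffs = [
        np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        for i in range(1, len(frames))
    ]
    keys = []
    for j in range(1, len(diffs) - 1):
        # Keep a frame only where the difference signal peaks
        if diffs[j] > diffs[j - 1] and diffs[j] > diffs[j + 1] \
                and diffs[j] >= min_prominence:
            keys.append(j + 1)  # convert diff index back to frame index
    return keys
```

Under this sketch, a scene change between two otherwise-static segments yields a single spike in the difference signal, and the frame at that spike is selected as a keyframe.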
Ambient Intelligence for Next-Generation AR
Next-generation augmented reality (AR) promises a high degree of
context-awareness - a detailed knowledge of the environmental, user, social and
system conditions in which an AR experience takes place. This will facilitate
both the closer integration of the real and virtual worlds, and the provision
of context-specific content or adaptations. However, environmental awareness in
particular is challenging to achieve using AR devices alone; not only are these
mobile devices' view of an environment spatially and temporally limited, but
the data obtained by onboard sensors is frequently inaccurate and incomplete.
This, combined with the fact that many aspects of core AR functionality and
user experiences are impacted by properties of the real environment, motivates
the use of ambient IoT devices, wireless sensors and actuators placed in the
surrounding environment, for the measurement and optimization of environment
properties. In this book chapter we categorize and examine the wide variety of
ways in which these IoT sensors and actuators can support or enhance AR
experiences, including quantitative insights and proof-of-concept systems that
will inform the development of future solutions. We outline the challenges and
opportunities associated with several important research directions which must
be addressed to realize the full potential of next-generation AR.
Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.
Drone-based Computer Vision-Enabled Vehicle Dynamic Mobility and Safety Performance Monitoring
This report documents the research activities to develop a drone-based, computer-vision-enabled vehicle dynamic safety performance monitoring system for Rural, Isolated, Tribal, or Indigenous (RITI) communities. Acquiring traffic system information, especially vehicle speed and trajectory information, is of great significance to studying the characteristics and management of traffic systems in RITI communities. The traditional method of relying on video analysis to obtain vehicle counts and trajectories has its application scenarios, but the common video source is a camera fixed on a roadside device. In videos obtained this way, vehicles are likely to occlude each other, which seriously degrades the accuracy of vehicle detection and speed estimation. Although high-viewpoint road video can be obtained from aircraft and satellites, the corresponding cost is high. Therefore, considering that drones can capture high-definition video from a higher viewpoint at relatively low cost, we decided to use drones to obtain road videos for vehicle detection. To overcome the shortcomings of traditional object detection methods when facing large numbers of targets and the complex scenes of RITI communities, our proposed method uses convolutional neural network (CNN) technology. We modified the YOLOv3 network structure and used a vehicle data set captured by drones for transfer learning, ultimately training a network that can detect and classify vehicles in drone-captured videos. A self-calibrated road boundary extraction method based on image sequences was used to extract road boundaries and filter detections, improving the detection accuracy for vehicles on the road. Using the neural network detections as input, we use video-based object tracking to extract vehicle trajectory information for traffic safety improvements.
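The road-boundary filtering step, keeping only detections that fall inside the extracted road region, can be illustrated with a standard point-in-polygon test. This is a generic sketch, not the report's method: the ray-casting test and the use of a detection's center point are assumptions, and the boundary is taken to be a simple closed polygon in image coordinates.

```python
def inside_road(point, boundary):
    """Ray-casting point-in-polygon test.

    point: (x, y) center of a vehicle detection, in image coordinates.
    boundary: list of (x, y) vertices of the closed road polygon.
    Returns True if the point lies inside the polygon.
    """
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count edges crossed by a horizontal ray cast to the right
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Detections whose centers fail this test (e.g. parked vehicles off the roadway) would be discarded before tracking.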
Finally, the number of vehicles and the speed and trajectory of each vehicle were calculated, and the average speed and density of the traffic flow were estimated on this basis. By analyzing the acquired data, we can estimate the traffic condition of the monitored area to predict possible crashes on the highways.
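The final estimation step, computing per-vehicle speeds and traffic density from the tracked trajectories, can be sketched as follows. This is an illustrative reconstruction under stated assumptions: trajectories are taken to be lists of (frame index, position-along-road in meters) pairs, speed is approximated from trajectory endpoints, and density is vehicles per kilometer of monitored road; the actual report may use different units and smoothing.

```python
def traffic_stats(trajectories, road_length_m, fps):
    """Estimate per-vehicle speeds and traffic density from trajectories.

    trajectories: dict mapping vehicle id -> list of (frame_idx, x_m)
    positions along the road in meters, in frame order.
    Returns (speeds, density): speeds maps vehicle id to mean speed in
    m/s; density is vehicles per kilometer of monitored road.
    """
    speeds = {}
    for vid, pts in trajectories.items():
        (f0, x0), (f1, x1) = pts[0], pts[-1]
        dt = (f1 - f0) / fps  # elapsed time between first and last sighting
        speeds[vid] = abs(x1 - x0) / dt if dt > 0 else 0.0
    density = len(trajectories) / (road_length_m / 1000.0)
    return speeds, density
```

Averaging the per-vehicle speeds and monitoring the density over time would then give the traffic-flow estimates the report uses as crash-risk indicators.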