    Condition Assessment of Concrete Bridge Decks Using Ground and Airborne Infrared Thermography

    Applications of nondestructive testing (NDT) technologies have shown promise in assessing the condition of existing concrete bridges. Infrared thermography (IRT) has gradually gained wider acceptance as an NDT and evaluation tool in the civil engineering field. The high capability of IRT in detecting subsurface delamination, the commercial availability of infrared cameras, lower cost compared with other technologies, speed of data collection, and remote sensing capability are some of the expected benefits of applying this technique in bridge deck inspection practice. The research conducted in this thesis aims to develop a rational condition assessment system for concrete bridge decks based on IRT technology and to automate its analysis process in order to add this invaluable technique to the bridge inspector's toolbox. Ground penetrating radar (GPR) is also widely recognized as an NDT technique capable of evaluating the potential for active corrosion. Therefore, integrating IRT and GPR results in this research provides more precise assessments of bridge deck conditions. In addition, the research aims to establish a unique link between NDT technologies and inspector findings by developing a novel bridge deck condition rating index (BDCI). The proposed procedure captures the integrated results of IRT and GPR techniques, along with visual inspection judgements, thus overcoming the inherent scientific uncertainties of this process. Finally, the research explores the potential application of unmanned aerial vehicle (UAV) infrared thermography for detecting hidden defects in concrete bridge decks. The NDT work in this thesis was conducted on full-scale deteriorated reinforced concrete bridge decks located in Montreal, Quebec, and London, Ontario, and the proposed models have been validated through various case studies. IRT, whether from the ground or from a UAV carrying high-resolution thermal infrared imagery, was found to be an appropriate technology for inspecting and precisely detecting subsurface anomalies in concrete bridge decks. The proposed analysis produced thermal mosaic maps from the individual IR images. The k-means clustering technique was used to segment the mosaics and identify objective thresholds and, hence, to delineate different categories of delamination severity across entire bridge decks. The proposed integration of NDT technologies and visual inspection results provided a more reliable BDCI. The parameters affecting the integration process were identified from information gathered from bridge engineers with extensive experience. The analysis utilized fuzzy set theory to account for uncertainties and imprecision in the measurements of bridge deck defects detected by IRT and GPR testing, along with bridge inspector observations. The developed system and models should stimulate wider acceptance of IRT as a rapid, systematic and cost-effective evaluation technique for detecting bridge deck delamination. The proposed combination of IRT and GPR results should expand their correlative use in bridge deck inspection. Integrating the proposed BDCI procedure with existing bridge management systems can provide a detailed and timely picture of bridge health, thus helping transportation agencies identify critical deficiencies at various service-life stages. Consequently, this can yield sizeable reductions in bridge inspection costs, enable effective allocation of limited maintenance and repair funds, and promote the safety, mobility, longevity, and reliability of our highway transportation assets.
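
    The k-means step described above can be illustrated with a minimal sketch: pixel temperatures of a thermal mosaic are clustered, and the resulting cluster boundaries act as objective thresholds separating delamination severity classes. The array shape, number of clusters, and use of scikit-learn are assumptions for illustration, not details taken from the thesis.

```python
# Minimal sketch: k-means segmentation of a thermal mosaic into severity classes.
# Assumes the mosaic is already available as a 2-D array of surface temperatures;
# the synthetic data and the number of clusters are illustrative, not from the thesis.
import numpy as np
from sklearn.cluster import KMeans

def segment_mosaic(mosaic: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster pixel temperatures and return a label map ordered by mean temperature,
    so higher labels correspond to warmer (more likely delaminated) regions."""
    pixels = mosaic.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # Reorder cluster ids so that label 0 = coolest ... n_clusters-1 = warmest.
    order = np.argsort(km.cluster_centers_.ravel())
    remap = np.zeros(n_clusters, dtype=int)
    remap[order] = np.arange(n_clusters)
    return remap[km.labels_].reshape(mosaic.shape)

# Example with a synthetic 100x100 temperature field (degrees Celsius).
mosaic = 20 + 3 * np.random.rand(100, 100)
labels = segment_mosaic(mosaic, n_clusters=4)
print(np.bincount(labels.ravel()))   # pixel count per severity class
```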

    Automatic Extraction of Vehicle, Bicycle, and Pedestrian Traffic From Video Data

    SPR No. 742. This project investigated the use of traffic cameras to count and classify vehicles. The intent is to provide an alternative approach to pneumatic tubes for collecting traffic data at high-volume locations and to eliminate safety risks to SCDOT personnel and contractors. The objective is to develop algorithms to post-process the 48-hour videos to determine the number of vehicles in each of four categories: motorcycles, passenger cars and light trucks, buses/campers/tow trucks, and small to large trucks. To this end, background subtraction and foreground detection algorithms were implemented to detect moving vehicles, and a Convolutional Neural Network (CNN) model was developed to classify vehicles using thermal images obtained from a custom-built thermal camera and solar-powered trailer. Additionally, to overcome false detection of vehicles due to either camera motion or erratic light reflection from the pavement surface, an algorithm was developed to keep track of each vehicle's trajectory, and the vehicle trajectories were used to determine the presence of an actual vehicle. The developed algorithms and CNN model were incorporated into a Windows-based application, named DECAF (detection and classification by functional class), to enable users to easily specify the folder that contains the video files to be processed, specify the region for which traffic should be analyzed, specify the time interval for which the data should be aggregated, and view the detection and classification results in two report formats: 1) a spreadsheet with vehicle-by-vehicle information, and 2) a PDF summary report with totals aggregated for the user-specified interval. DECAF was tested using videos collected from five different sites in Columbia, SC, and the overall detection and classification accuracy for the hours evaluated was found to be 95% or higher.
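
    As a rough illustration of the detection stage, the sketch below uses OpenCV background subtraction followed by morphological clean-up and contour-based foreground blob extraction. The video file name, subtractor parameters, and area threshold are illustrative assumptions; they are not DECAF's actual settings, and the trajectory tracking and CNN classification stages are omitted.

```python
# Minimal sketch of the detection stage only: background subtraction and
# foreground blob extraction with OpenCV. Parameters and file name are illustrative.
import cv2

cap = cv2.VideoCapture("site_video.avi")          # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # suppress noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > 400]                     # drop small blobs
    # Each box would then be tracked across frames and passed to the CNN
    # classifier; both steps are omitted in this sketch.
cap.release()
```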

    Deep Learning based Models for Classification from Natural Language Processing to Computer Vision

    With the availability of large-scale data sets, researchers in many different areas such as natural language processing, computer vision, and recommender systems have started making use of deep learning models and have achieved great progress in recent years. In this dissertation, we study three important classification problems based on deep learning models. First, with the fast growth of e-commerce, more people choose to purchase products online and browse reviews before making decisions. It is essential to build a model to identify helpful reviews automatically. Our work is inspired by the observation that a customer's expectation of a review can be greatly affected by review sentiment and the degree to which the customer is aware of pertinent product information. To model such customer expectation and capture important information from a review text, we propose a novel neural network which encodes the sentiment of a review through an attention module and introduces a product attention layer that fuses information from both the target product and related products. Our experimental results for the task of identifying whether a review is helpful or not show an AUC improvement of 5.4% and 1.5% over the previous state-of-the-art model on Amazon and Yelp data sets, respectively. We further validate the effectiveness of each attention layer of our model in two application scenarios. The results demonstrate that both attention layers contribute to the model performance, and the combination of them has a synergistic effect. We also evaluate our model's performance as a recommender system using three commonly used metrics: NDCG@10, Precision@10 and Recall@10. Our model outperforms PRH-Net, a state-of-the-art model, on all three of these metrics. Second, real-time bidding (RTB), which features per-impression-level real-time ad auctions, has become a popular practice in today's digital advertising industry. In RTB, click-through rate (CTR) prediction is a fundamental problem to ensure the success of an ad campaign and boost revenue. We present a dynamic CTR prediction model designed for the Samsung demand-side platform (DSP). We identify two key technical challenges that have not been fully addressed by the existing solutions: the dynamic nature of RTB and user information scarcity. To address both challenges, we develop a model that effectively captures the dynamic evolution of both users and ads and integrates auxiliary data sources (e.g., installed apps) to better model users' preferences. We put forward a novel interaction layer that fuses both explicit user responses (e.g., clicks on ads) and auxiliary data sources to generate consolidated user preference representations. We evaluate our model using a large amount of data collected from the Samsung advertising platform and compare our method against several state-of-the-art methods that are likely suitable for real-world deployment. The evaluation results demonstrate the effectiveness of our method and its potential for production. Third, for Highway Performance Monitoring System (HPMS) purposes, the South Carolina Department of Transportation (SCDOT) must provide the Federal Highway Administration (FHWA) with a classification of vehicles. However, due to limited lighting conditions, classifying vehicles at nighttime is quite challenging. To solve this problem, we designed three CNN models with different architectures to operate on thermal images. Of these, model 2 achieves the best performance. Building on model 2, to avoid over-fitting and further improve performance, we propose two training-test methods based on data augmentation. The experimental results demonstrate that the second training-test method further improves the performance of model 2 in terms of both accuracy and F1-score.
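
    To make the nighttime classification setup concrete, the sketch below shows a small PyTorch CNN for single-channel thermal crops together with typical train-time augmentation transforms. The architecture, input size, and augmentations are illustrative assumptions; they are not the dissertation's model 2 or its exact training-test procedure.

```python
# Illustrative sketch only: a small CNN for single-channel thermal images with
# train-time augmentation. Architecture and augmentations are assumptions.
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations that would be applied to training crops inside a Dataset.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

class ThermalCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                 # x: (N, 1, 64, 64) thermal crops
        return self.classifier(self.features(x))

model = ThermalCNN()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)                       # torch.Size([8, 4])
```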

    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small subset of images, and their behavior on real-world videos, when integrated with a typical computer vision pipeline, is unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset that consists of 22 traffic surveillance sequences under a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that evaluates the rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases the segmentation performance by 19.7% on our proposed dataset, but it decreases the feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves the feature tracking accuracy by 7.72%. (Published in IEEE Transactions on Intelligent Transportation Systems.)
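
    The evaluation idea can be sketched as follows: the same segmentation model is run on the original rainy frames and on the de-rained frames, and a mask-level F1 score against ground truth is compared for the two cases. The `derain` and `segment` callables and the data handling are hypothetical placeholders, not the paper's actual evaluation code.

```python
# Sketch of the evaluation protocol idea: compare segmentation F1 with and
# without rain removal. `derain` and `segment` are caller-supplied placeholders.
import numpy as np

def f1_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """F1 between two boolean masks of equal shape."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-9)

def evaluate(frames, gt_masks, derain, segment):
    """Return mean F1 over a sequence without and with rain removal."""
    raw, derained = [], []
    for frame, gt in zip(frames, gt_masks):
        raw.append(f1_score(segment(frame), gt))
        derained.append(f1_score(segment(derain(frame)), gt))
    return float(np.mean(raw)), float(np.mean(derained))
```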

    Object detection, recognition and re-identification in video footage

    There have been a significant number of security concerns in recent times; as a result, security cameras have been installed to monitor activities and prevent crime in most public places. The resulting footage is analysed either through video analytics or through forensic analysis based on human observation. To this end, within the research context of this thesis, a proactive machine-vision-based military recognition system has been developed to help monitor activities in military environments. The proposed object detection, recognition and re-identification systems are presented in this thesis. A novel technique for military personnel recognition is presented first. Detected camouflaged personnel are initially segmented using a GrabCut segmentation algorithm. Since a camouflaged person's uniform generally appears similar at the top and the bottom of the body, an image patch is extracted from the segmented foreground image and used as the region of interest. Colour and texture features are then extracted from each patch and used for classification. A second approach to personnel recognition is proposed through recognition of the badge on the cap of a military person. A feature-matching metric based on Speeded-Up Robust Features (SURF) extracted from the cap badge enables recognition of the person's arm of service. A state-of-the-art technique for recognising vehicle types irrespective of their view angle is also presented. Vehicles are initially detected and segmented using a Gaussian Mixture Model (GMM) based foreground/background segmentation algorithm. A Canny Edge Detection (CED) stage, followed by morphological operations, is used as a pre-processing stage to enhance foreground vehicle detection and segmentation. Region, Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features are then extracted from the refined foreground vehicle object and used for vehicle type recognition. Two different datasets with front/rear and angled views are used, separately and combined, for testing the proposed technique. For night-time video analytics and forensics, the thesis presents a novel approach to pedestrian detection and vehicle type recognition based on a new feature acquisition technique named CENTROG. Thermal images containing pedestrians and vehicles are used to analyse the performance of the proposed algorithms. The video is initially segmented using a GMM-based foreground object segmentation algorithm, and a CED-based pre-processing step is used to improve segmentation accuracy prior to applying Census Transforms for initial feature extraction. HOG features are then extracted from the Census-transformed images and used for detection and recognition of human and vehicular objects, respectively, in thermal images. Finally, a novel technique for people re-identification is proposed based on low-level colour features and mid-level attributes. The low-level colour histogram bin values are normalised to the range 0 to 1. A publicly available dataset (VIPeR) and a self-constructed dataset are used in experiments conducted with 7 clothing attributes and low-level colour histogram features. The 7 attributes are detected with an SVM classifier using features extracted from 5 different regions of a detected human object. The low-level colour features are extracted from the same regions, which are obtained by human object segmentation and subsequent body-part sub-division. People are re-identified by computing the Euclidean distance between a probe image and the gallery image set. The experiments conducted with the SVM classifier and Euclidean distance show that the proposed techniques attain the aforementioned goals. The colour and texture features proposed for camouflaged military personnel recognition surpass state-of-the-art methods. Similarly, experiments show that combining features performs best when recognising vehicles from different views after initial training on multiple views. In the same vein, the proposed CENTROG technique performs better than the state-of-the-art CENTRIST technique for both pedestrian detection and vehicle type recognition at night-time using thermal images. Finally, we show that the proposed 7 mid-level attributes combined with the low-level features result in improved accuracy for people re-identification.
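
    The re-identification step based on low-level colour features can be illustrated with a short sketch: per-channel colour histograms are computed for each body region, normalised to the range [0, 1], concatenated, and gallery images are ranked by Euclidean distance from the probe. The bin count and the use of OpenCV histograms are assumptions for illustration; the region segmentation and the attribute SVMs are omitted.

```python
# Minimal sketch of the low-level colour feature and ranking steps only.
import cv2
import numpy as np

def colour_hist(region_bgr: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel histogram, normalised to the range [0, 1]."""
    feats = []
    for ch in range(3):
        h = cv2.calcHist([region_bgr], [ch], None, [bins], [0, 256]).ravel()
        feats.append(h / (h.max() + 1e-9))
    return np.concatenate(feats)

def rank_gallery(probe_feat: np.ndarray, gallery_feats: list) -> np.ndarray:
    """Return gallery indices sorted by ascending Euclidean distance to the probe."""
    dists = [np.linalg.norm(probe_feat - g) for g in gallery_feats]
    return np.argsort(dists)
```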

    Muscle temperature analysis, using thermal imaging, applied to the treatment of muscle recovery

    Images support many processes where visual interpretation of a scene is required, and there are numerous applications in which images are used to analyze, interpret and classify objects, with different image types generated by different sensors. This paper describes a method to analyze the behavior of a muscle, mainly of the knee, while rehabilitation exercises are performed, combined with an optical image showing the state and location of the muscle. The proposed method is a superposition of optical and thermal images: the optical image serves as the base and the thermal image is placed over it, so that the scene visible in the optical image is enriched with information about the temperature behavior. The results presented are oriented towards a new way of analyzing muscle thermal data by means of a composite image carrying both optical and thermal information. The method is an aid in the treatment of muscle recovery, with the benefit of being scalable and applicable to other muscles and parts of the human body.
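
    A minimal sketch of the described superposition, assuming the optical and thermal images are already captured and roughly registered: the thermal image is resized to the optical frame, mapped to a colour scale, and blended over the optical base image. The file names, colormap, and blending weights are illustrative, not values taken from the paper.

```python
# Sketch of optical/thermal superposition with OpenCV. Inputs are hypothetical.
import cv2

optical = cv2.imread("knee_optical.jpg")                    # hypothetical optical image
thermal = cv2.imread("knee_thermal.png", cv2.IMREAD_GRAYSCALE)  # hypothetical thermal image

thermal = cv2.resize(thermal, (optical.shape[1], optical.shape[0]))
thermal_rgb = cv2.applyColorMap(thermal, cv2.COLORMAP_JET)  # temperature rendered as colour
fused = cv2.addWeighted(optical, 0.6, thermal_rgb, 0.4, 0)  # optical base + thermal overlay
cv2.imwrite("knee_fused.png", fused)
```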

    Land use, urban, environmental, and cartographic applications, chapter 2, part D

    Microwave data and their use in effective state, regional, and national land use planning are discussed. Special attention is given to monitoring land use change, especially its dynamic components, and to the interaction between land use and dynamic features of the environment. Disaster and environmental monitoring are also discussed.

    Wide area detection system: Conceptual design study

    An integrated sensor for traffic surveillance on mainline sections of urban freeways is described. Applicable imaging and processor technologies are surveyed, and the functional requirements for the sensors and the conceptual design of the breadboard sensors are given. Parameters measured by the sensors include lane density, speed, and volume. The freeway image is also used for incident diagnosis.
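
    The measured quantities are linked by the fundamental traffic flow relation, flow = density x speed, so per-lane counts and mean speeds from the sensor can be converted into volume and density estimates. The sketch below shows this conversion with made-up numbers; it illustrates the relation only and is not code from the study.

```python
# Small illustration of the measured traffic parameters and the relation q = k * v.
def traffic_parameters(vehicle_count: int, interval_s: float, mean_speed_kmh: float):
    volume_vph = vehicle_count * 3600.0 / interval_s   # flow q, vehicles per hour
    density_vpkm = volume_vph / mean_speed_kmh         # density k = q / v, vehicles per km
    return volume_vph, density_vpkm

print(traffic_parameters(vehicle_count=45, interval_s=120, mean_speed_kmh=80))
# (1350.0, 16.875) -> 1350 veh/h at 80 km/h gives about 16.9 veh/km in the lane
```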

    Deep visible and thermal image fusion for enhanced pedestrian visibility

    Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirm the benefits of thermal cameras in limited-visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative to a human driver than a regular RGB camera in challenging visibility conditions. The main novelty of this paper is the idea of relying on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, our approach can better learn context and define fusion rules that focus on the pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.
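
    The two-term objective described above can be sketched as a single loss function: one term keeps the fused output close to the RGB input, and a weighted auxiliary term penalizes pedestrian detection error on the fused image. The L1 image similarity term, the weight value, and the `detector` callable are assumptions for illustration; the paper's exact similarity metric and detector are not reproduced here.

```python
# Sketch of the two-term fusion objective only. Terms and weight are assumptions.
import torch
import torch.nn.functional as F

def fusion_loss(fused, rgb, detector, boxes_gt, weight: float = 0.1):
    """fused, rgb: (N, 3, H, W) tensors; detector returns a scalar detection loss."""
    similarity_term = F.l1_loss(fused, rgb)      # keep the fused image visually close to RGB
    detection_term = detector(fused, boxes_gt)   # auxiliary pedestrian detection error
    return similarity_term + weight * detection_term
```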