
    Transfer Learning-Based Crack Detection by Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs) have recently shown strong performance in collecting visual data through autonomous exploration and mapping for building inspection. However, few studies consider post-processing of these data and their integration with autonomous UAVs, steps that are essential for full automation of building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool fine-tunes a pretrained Convolutional Neural Network (CNN) for surface crack detection and offers an optional mechanism for planning revisits to pinpointed locations during inspection. It is integrated into a quadrotor UAV system that can autonomously navigate in GPS-denied environments and is equipped with onboard sensors and computers for localization, mapping, and motion planning. The integrated system is tested through simulations and real-world experiments, and the results show that it achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
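
    The abstract does not give implementation details; purely as an illustration of the fine-tuning step, the sketch below adapts a pretrained torchvision ResNet-18 to binary crack/no-crack classification. The dataset path, backbone choice, and hyperparameters are assumptions, not taken from the paper.

        # Minimal sketch: fine-tune a pretrained CNN for crack / no-crack patches.
        # Assumes PyTorch + torchvision and an ImageFolder-style dataset at
        # "crack_patches/" with "crack" and "no_crack" subfolders (hypothetical).
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        dataset = datasets.ImageFolder("crack_patches/", transform=transform)
        loader = DataLoader(dataset, batch_size=32, shuffle=True)

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():                      # freeze the pretrained backbone
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, 2)     # new crack / no-crack head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        model.train()
        for epoch in range(5):                            # short fine-tuning run
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()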

    Video-based motion detection for stationary and moving cameras

    In real-world monitoring applications, moving object detection remains a challenging task due to factors such as background clutter and motion, illumination variations, weather conditions, noise, and occlusions. As a fundamental first step in many computer vision applications such as object tracking, behavior understanding, object or event recognition, and automated video surveillance, various motion detection algorithms have been developed, ranging from simple approaches to more sophisticated ones. In this thesis, we present two moving object detection frameworks. The first framework is designed for robust detection of moving and static objects in videos acquired from stationary cameras. This method fuses a motion computation method based on a spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system, FTSG, can handle challenges such as shadows, illumination changes, dynamic background, and stopped and removed objects. Extensive testing on the CVPR 2014 Change Detection benchmark dataset shows that FTSG outperforms most state-of-the-art methods. The second framework adapts moving object detection to full-motion videos acquired from moving airborne platforms. This framework has two main modules. The first module stabilizes the video with respect to a set of base frames in the sequence. Stabilization is done by estimating four-point homographies using prominent feature (PF) block matching, motion filtering, and RANSAC for robust matching. Once the frame-to-base-frame homographies are available, the flux tensor motion detection module, which uses local second-derivative information, is applied to detect moving salient features. Spurious responses from the frame boundaries are suppressed and other post-processing operations are applied to reduce false alarms and produce accurate moving blob regions that will be useful for tracking.
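
    The thesis code is not shown above; as a rough, self-contained sketch of the frame-to-base-frame registration step (with ORB matching standing in for the prominent-feature block matching, and illustrative file names and thresholds), one frame can be warped into a base frame's coordinates as follows.

        # Sketch: register a frame to a base frame via a RANSAC-estimated homography.
        # ORB matching is a stand-in for the prominent-feature (PF) block matching
        # described above; image paths and the reprojection threshold are illustrative.
        import cv2
        import numpy as np

        base = cv2.imread("base_frame.png", cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread("frame_042.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        kp_base, des_base = orb.detectAndCompute(base, None)
        kp_frame, des_frame = orb.detectAndCompute(frame, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_frame, des_base)

        src = np.float32([kp_frame[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_base[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects mismatches and independently moving points
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

        # Warp the frame into the base-frame coordinate system (stabilization step),
        # after which a motion detector such as the flux tensor can be applied.
        stabilized = cv2.warpPerspective(frame, H, (base.shape[1], base.shape[0]))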

    Multi-function RF for Situational Awareness

    Radio frequency (RF) communications are an integral part of many situational awareness applications. Sensing data need to be processed in a timely manner, making it imperative to have a robust and reliable RF link for information dissemination. Moreover, there is an increasing need to exploit RF communication signals directly for sensing, leading to the notion of multi-function RF. In the first part of this dissertation, we investigate the development of a robust Multiple-Input Multiple-Output (MIMO) communication system suitable for airborne platforms. Three major challenges in realizing MIMO capacity gain in airborne environments are addressed: 1) antenna blockage due largely to the orientation of the antenna array; 2) the presence of unknown interference inherent to the intended application; and 3) the lack of channel state information (CSI) at the transmitter. Built on the Diagonal Bell Labs Layered Space-Time (D-BLAST) MIMO architecture, the system integrates three key design approaches: spatial spreading to counter antenna blockage; temporal spreading to mitigate signal-to-interference-and-noise-ratio degradation due to intended or unintended interference; and a simple low-rate feedback scheme to enable real-time adaptation in the absence of full transmitter CSI. Extensive experimental studies using a fully functioning 4×4 MIMO system validate the developed design. In the second part, ambient RF signals are exploited to extract situational awareness information directly. Using WiFi signals as an example, we demonstrate that the CSI obtained at the receiver contains rich information about the propagation environment. Two distinct learning systems are developed for occupancy detection using passive WiFi sensing. The first is based on deep learning, where a parallel convolutional neural network (CNN) architecture is designed to extract useful information from both the magnitude and phase of the CSI. Pre-processing steps are carefully designed to preserve channel variation induced by human motion while insulating against other impairments, and post-processing is applied after the CNN to infer presence information from instantaneous motion outputs. To alleviate the tedious training effort involved in the deep learning based system, a novel learning problem with contaminated sampling is formulated. This leads to a second learning system: a two-stage solution for motion detection using support vector machines (SVMs). A one-class SVM model, trained only on data from a human-free environment, is first evaluated. Human presence data are then decontaminated using the one-class SVM before motion detection with a two-class support vector classifier. Extensive experiments using commercial off-the-shelf WiFi devices are conducted for both systems. The results demonstrate that learning-based RF sensing provides a viable and promising alternative for occupancy detection, as it is much more sensitive to human motion than the passive infrared sensors widely deployed in commercial and residential buildings.
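
    The dissertation's exact features and parameters are not reproduced here; the sketch below only illustrates the two-stage SVM idea described above, using synthetic stand-ins for CSI feature vectors and scikit-learn defaults.

        # Sketch of the two-stage SVM idea: a one-class SVM trained only on human-free
        # CSI features decontaminates the "occupied" training set before a two-class
        # classifier is fit. The feature vectors below are synthetic placeholders.
        import numpy as np
        from sklearn.svm import OneClassSVM, SVC

        rng = np.random.default_rng(0)
        X_free = rng.normal(0.0, 1.0, size=(200, 30))         # human-free CSI features
        X_occupied = np.vstack([                              # contaminated set: mostly motion,
            rng.normal(2.0, 1.0, size=(150, 30)),             # but with some human-free samples
            rng.normal(0.0, 1.0, size=(50, 30)),
        ])

        # Stage 1: model the human-free class and flag contaminating samples
        oc = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_free)
        is_motion = oc.predict(X_occupied) == -1              # -1 = outlier w.r.t. human-free model
        X_motion = X_occupied[is_motion]                      # decontaminated motion samples

        # Stage 2: two-class support vector classifier for motion detection
        X = np.vstack([X_free, X_motion])
        y = np.concatenate([np.zeros(len(X_free)), np.ones(len(X_motion))])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        print("predicted occupancy:", clf.predict(X_occupied[:5]))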

    The Pan-STARRS Moving Object Processing System

    We describe the Pan-STARRS Moving Object Processing System (MOPS), a modern software package that produces automatic asteroid discoveries and identifications from catalogs of transient detections from next-generation astronomical survey telescopes. MOPS achieves >99.5% efficiency in producing orbits from a synthetic but realistic population of asteroids whose measurements were simulated for a Pan-STARRS4-class telescope. Additionally, using a non-physical grid population, we demonstrate that MOPS can detect populations of currently unknown objects such as interstellar asteroids. MOPS has been adapted successfully to the prototype Pan-STARRS1 telescope despite differences in expected false detection rates, fill-factor loss, and relatively sparse observing cadence compared to a hypothetical Pan-STARRS4 telescope and survey. MOPS remains >99.5% efficient at detecting objects on a single night but drops to 80% efficiency at producing orbits for objects detected on multiple nights. This loss is primarily due to configurable MOPS processing limits that are not yet tuned for the Pan-STARRS1 mission. The core MOPS software package is the product of more than 15 person-years of software development and incorporates countless additional years of effort in third-party software to perform lower-level functions such as spatial searching or orbit determination. We describe the high-level design of MOPS and its essential subcomponents, discuss the suitability of MOPS for other survey programs, and suggest a road map for future MOPS development. Comment: 57 pages, 26 figures, 13 tables.
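
    MOPS delegates such low-level operations to third-party code, and its internals are not shown here; purely to illustrate the kind of spatial search involved in pairing intra-night detections, the sketch below matches detections from two exposures that fall within a small angular tolerance using a k-d tree, with made-up coordinates and a planar small-field approximation.

        # Illustration only (not MOPS code): pair detections from two exposures of the
        # same field that lie within a small angular tolerance, using a k-d tree.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        exposure1 = rng.uniform(0.0, 1.0, size=(500, 2))               # (RA, Dec) in degrees
        exposure2 = exposure1 + rng.normal(0.0, 1e-4, size=(500, 2))   # slight apparent motion

        tolerance_deg = 30.0 / 3600.0                     # 30 arcsec pairing radius
        tree = cKDTree(exposure1)
        dist, idx = tree.query(exposure2, distance_upper_bound=tolerance_deg)

        # Unmatched points come back with an infinite distance, so keep only finite pairs
        pairs = [(int(j), i) for i, (d, j) in enumerate(zip(dist, idx)) if np.isfinite(d)]
        print(f"{len(pairs)} candidate intra-night pairs (tracklet seeds)")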

    Monitoring muscle fatigue following continuous load changes

    Department of Human Factors Engineering
    Previous studies on monitoring muscle fatigue during dynamic motion have focused on detecting the accumulation of muscle fatigue. However, it is necessary to detect both the accumulation and the recovery of muscle fatigue during dynamic muscle contraction while the load changes continuously. This study investigates the development and recovery of muscle fatigue in dynamic muscle contraction under continuously changing loads. Twenty healthy males performed repetitive elbow flexion and extension using 2 kg and 1 kg dumbbells in turn. They alternated between the two task intensities (2 kg intensity task, 1 kg intensity task) until they felt they could no longer achieve the required movement range or experienced unacceptable biceps muscle discomfort. Using the EMG signal of the biceps brachii muscle, fatigue was detected both from dynamic measurements during each dynamic contraction task and from isometric measurements taken immediately before and after each task. For each of the 2 kg and 1 kg intensity tasks, the pre, post, and change values of EMG amplitude (AEMG) and center frequency were computed and compared to check the validity of a muscle fatigue monitoring method that applies the wavelet transform to the EMG signal from dynamic measurements. As a result, a decrease of center frequency in the 2 kg intensity tasks and an increase of center frequency in the 1 kg intensity tasks were detected, showing that development and recovery of muscle fatigue occurred in the 2 kg and 1 kg intensity tasks, respectively. The change in center frequency from dynamic measurements also corresponded with that from isometric measurements, suggesting that wavelet-based monitoring of muscle fatigue in dynamic contraction conditions is valid for continuously detecting the development and recovery of muscle fatigue. The results also show the possibility of monitoring muscle fatigue in real time in industry and could inform guidelines for designing human-robot interaction systems based on monitoring the user's muscle fatigue.
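
    The thesis's exact wavelet settings are not reproduced here; as an illustrative sketch only, a power-weighted center frequency can be computed from a continuous wavelet transform of an EMG segment with PyWavelets, where the sampling rate, wavelet, and scale range are assumptions.

        # Sketch: power-weighted center frequency of an EMG segment via a continuous
        # wavelet transform (PyWavelets). Sampling rate, wavelet, and scales are
        # illustrative assumptions, not the thesis's settings.
        import numpy as np
        import pywt

        fs = 1000.0                                           # assumed EMG sampling rate (Hz)
        t = np.arange(0, 2.0, 1 / fs)
        emg = np.random.default_rng(2).normal(size=t.size)    # placeholder for a real EMG segment

        scales = np.arange(4, 128)
        coefs, freqs = pywt.cwt(emg, scales, "morl", sampling_period=1 / fs)
        power = np.abs(coefs) ** 2                            # shape: (n_scales, n_samples)

        # Center frequency = power-weighted mean frequency over the segment.
        # A drop between pre- and post-task segments would indicate fatigue development;
        # a rise would indicate recovery.
        center_freq = np.sum(freqs * power.sum(axis=1)) / power.sum()
        print(f"center frequency: {center_freq:.1f} Hz")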

    On Rendering Synthetic Images for Training an Object Detector

    We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can be used to improve classification performance. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of the features used during detector training. We show, in the context of drone, plane, and car detection, that using such synthetically generated images yields significantly better performance than simply perturbing real images or even synthesizing images in such a way that they look very realistic, as is often done when only limited amounts of training data are available.
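
    The paper's renderer and parameter-estimation procedure are not reproduced here; the schematic sketch below only shows the overall loop of sampling poses and rendering parameters to build a labeled synthetic training set, with render_model as an explicitly hypothetical placeholder.

        # Schematic sketch only: build synthetic training images by sampling 3D poses
        # and rendering a coarse model. render_model is a placeholder stub; a real
        # implementation would rasterize the 3D model under the sampled parameters.
        import numpy as np

        rng = np.random.default_rng(3)

        def render_model(azimuth_deg, elevation_deg, blur_sigma):
            """Placeholder renderer: returns a flat gray image regardless of pose."""
            return np.full((64, 64), 0.5, dtype=np.float32)

        # Parameter range assumed to come from the estimation step on the real images
        estimated_blur_range = (0.5, 2.0)

        images, labels = [], []
        for _ in range(1000):
            azimuth = rng.uniform(0.0, 360.0)                 # arbitrary 3D poses
            elevation = rng.uniform(-10.0, 60.0)
            blur = rng.uniform(*estimated_blur_range)
            images.append(render_model(azimuth, elevation, blur))
            labels.append(1)                                  # positive example of the target object

        X = np.stack(images)                                  # ready to feed a detector trainer
        print(X.shape, len(labels))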

    Precise motion descriptors extraction from stereoscopic footage using DaVinci DM6446

    A novel approach to extracting target motion descriptors in multi-camera video surveillance systems is presented. Using two static surveillance cameras with partially overlapping fields of view (FOV), control points (unique points from each camera) are identified in regions of interest (ROI) in both cameras' footage. The control points within the ROI are matched for correspondence, and a meshed Euclidean-distance-based signature is computed. A depth map is estimated from the disparity of each control pair, and the ROI is graded into a number of regions using the relative depth information of the control points. The graded regions of different depths help accurately calculate the pace of the moving target as well as its 3D location. The advantage of estimating a depth map for static background control points, rather than for the target itself, is its accuracy and robustness to outliers. The performance of the algorithm is evaluated using several test sequences, and issues in implementing the algorithm on the TI DaVinci DM6446 platform are also considered.
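
    The paper's control-point extraction and meshed signature are not reproduced here; as a rough sketch of the disparity-to-depth step under the assumption of a rectified camera pair, matched points between the two views can be graded by depth as follows (the feature matching, focal length, and baseline are illustrative stand-ins).

        # Sketch: match points between two overlapping views and convert disparity to
        # depth (Z = f * B / d) for a rectified pair. ORB matching stands in for the
        # paper's control-point correspondence; focal length and baseline are assumed.
        import cv2
        import numpy as np

        left = cv2.imread("cam_left_roi.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("cam_right_roi.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=500)
        kp_l, des_l = orb.detectAndCompute(left, None)
        kp_r, des_r = orb.detectAndCompute(right, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_l, des_r)

        focal_px = 800.0          # assumed focal length in pixels
        baseline_m = 0.5          # assumed distance between the two cameras (metres)

        depths = []
        for m in matches:
            disparity = kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0]
            if disparity > 1e-3:                              # skip degenerate / mismatched pairs
                depths.append(focal_px * baseline_m / disparity)

        print(f"{len(depths)} control points graded by depth")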