
    Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    Computer vision techniques have been employed to characterize the dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective camera location difficult, because civil infrastructure (e.g., bridges and buildings) is often hard to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. Unlike a stationary camera, a UAV-mounted camera is itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experiments are carried out to validate the proposed approach, and the results demonstrate its efficacy and significant potential.
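    The core idea, cross-correlating displacement signals extracted from video to recover modal content, can be illustrated with a minimal sketch. Everything below (the 30 Hz frame rate, the two synthetic displacement signals, and the 1.5 Hz mode) is an illustrative assumption, not data or code from the paper:

```python
import numpy as np
from scipy.signal import correlate, welch

fs = 30.0  # assumed video frame rate (Hz)
t = np.arange(0, 20, 1 / fs)

# Two synthetic displacement histories standing in for signals tracked at
# two points on a structure (shared first bending mode near 1.5 Hz).
x1 = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
x2 = 0.8 * np.sin(2 * np.pi * 1.5 * t + 0.2) + 0.1 * np.random.randn(t.size)

# Cross-correlation between the two measurement points: the common
# structural mode survives while uncorrelated noise is suppressed.
r12 = correlate(x1 - x1.mean(), x2 - x2.mean(), mode="full")

# Spectral peak of the cross-correlation identifies the modal frequency.
f, pxx = welch(r12, fs=fs, nperseg=512)
print(f"dominant frequency ~ {f[np.argmax(pxx)]:.2f} Hz")
```

    In a real pipeline the pixel displacements would also be multiplied by a scale factor (physical length of a known feature divided by its length in pixels), which is the first of the two challenges the paper addresses.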

    Flexible Stereo: Constrained, Non-rigid, Wide-baseline Stereo Vision for Fixed-wing Aerial Platforms

    This paper proposes a computationally efficient method to estimate the time-varying relative pose between two visual-inertial sensor rigs mounted on the flexible wings of a fixed-wing unmanned aerial vehicle (UAV). The estimated relative poses are used to generate highly accurate depth maps in real time and can be employed for obstacle avoidance in low-altitude flights or landing maneuvers. The approach is structured as follows: initially, a wing model is identified by fitting a probability density function to measured deviations from the nominal relative baseline transformation. At run-time, the prior knowledge about the wing model is fused in an Extended Kalman filter (EKF) together with relative pose measurements obtained from solving a relative perspective-n-point (PnP) problem, and the linear accelerations and angular velocities measured by the two inertial measurement units (IMUs) rigidly attached to the cameras. Results from extensive synthetic experiments demonstrate that the proposed framework estimates highly accurate baseline transformations and depth maps. (Comment: Accepted for publication in IEEE International Conference on Robotics and Automation (ICRA), 2018, Brisbane.)
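    As a rough illustration of the PnP measurement step, the sketch below recovers a relative pose from synthetic point correspondences with OpenCV's cv2.solvePnP. The intrinsics, point layout, and baseline are illustrative assumptions; the paper's actual pipeline additionally fuses the IMU data and the wing-model prior in an EKF, which is not shown here:

```python
import numpy as np
import cv2

# Assumed camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Six non-coplanar 3D feature points expressed in camera A's frame.
pts3d = np.array([[-0.2, -0.1, 1.0], [0.2, -0.1, 1.1],
                  [0.2, 0.1, 0.9], [-0.2, 0.1, 1.0],
                  [0.0, 0.0, 1.2], [0.1, -0.05, 0.95]])

# Ground-truth relative pose of camera B (small wing flex, 0.5 m baseline),
# used here only to synthesize the 2D measurements.
rvec_true = np.array([0.0, 0.05, 0.0])
tvec_true = np.array([0.5, 0.0, 0.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)

# Recover the time-varying relative pose: this is the PnP measurement
# that would feed the EKF update alongside the IMU terms.
ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```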

    Leveraging Aruco Fiducial Marker System for Bridge Displacement Estimation Using Unmanned Aerial Vehicles

    The use of unmanned aerial vehicles (UAVs) on construction sites has been growing widely for surveying and inspection purposes. Their mobility and agility have enabled engineers to use UAVs in Structural Health Monitoring (SHM) applications to overcome the limitations of traditional approaches, which require labor-intensive installation, extended time, and long-term maintenance. One critical SHM application is measuring bridge deflections during the bridge's operation period. Because bridge sites are complex and remote, remote sensing techniques such as camera-equipped drones can facilitate measuring bridge deflections. This work builds a pipeline that uses the state-of-the-art ArUco computer vision framework to detect and track ArUco tags placed on the area of interest. The proposed pipeline analyzes videos of tags captured by stationary cameras and camera-equipped UAVs and returns the displacements of the tags. This work provides experiments of the ArUco pipeline with stationary and dynamic camera platforms in controlled environments, and the estimated displacements are compared with ground truth data. Experiments show the significance of pixel resolution, platform stability, and camera resolution in achieving highly accurate estimation. Results demonstrate that the ArUco pipeline outperforms existing methods with stationary cameras, reaching an accuracy of 95.7%. Moreover, the pipeline introduces an approach to eliminating the noise caused by the drone's motion using a static reference tag; this technique has yielded an accuracy of 90.1%. This work shows promise toward a completely targetless approach using computer vision and camera-equipped drones. Advisor: Carrick Detweiler
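    A minimal sketch of the tag-detection-and-tracking step, assuming OpenCV's aruco module (pre-4.7 API names; newer versions use cv2.aruco.ArucoDetector). The video path, tag dictionary, and scale factor are illustrative assumptions, not values from the thesis:

```python
import cv2
import numpy as np

# Assumed tag dictionary and default detector parameters.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture("bridge_uav.mp4")  # hypothetical input video
centers = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary,
                                              parameters=params)
    if ids is not None:
        # Marker center in pixels: mean of its four detected corners.
        centers.append(corners[0].reshape(4, 2).mean(axis=0))
cap.release()

# Convert pixel motion to millimetres via a scale factor from the known
# tag size; assumed here: a 100 mm tag spanning about 60 px.
centers = np.array(centers)
mm_per_px = 100.0 / 60.0
displacement_mm = (centers - centers[0]) * mm_per_px
```

    To replicate the drone-motion compensation described above, a second, static reference tag would be tracked the same way and its apparent displacement subtracted from that of the target tag.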

    Autonomous Vehicles

    This edited volume, Autonomous Vehicles, is a collection of reviewed and relevant research chapters offering a comprehensive overview of recent developments in the field of vehicle autonomy. The book comprises nine chapters authored by various researchers and edited by an expert active in the field. Each chapter is complete in itself, but all are united under a common research topic. This publication aims to provide a thorough overview of the latest research efforts by international authors, to open new possible research paths for further novel developments, and to inspire younger generations to pursue relevant academic studies and professional careers within the autonomous vehicle field.

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Control System Development for small UAV Gimbal

    The design of unmanned ISR systems has typically been driven in the direction of increasing system mass to improve stabilization performance and imagery quality. However, new sensor and processor technology now makes high-performance stabilization feedback available on small, low-mass stabilized platforms that can be carried by small UAVs. This project develops and implements a line-of-sight (LOS) stabilization controller design, typically seen on larger gimbals, on a new small stabilized gimbal, the Tigereye, and demonstrates the application on several small UAV aircraft. The Tigereye gimbal is a new 2 lb, 2-axis gimbal intended to provide high-performance closed-loop LOS stabilization using an inertial rate gyro, electronic video stabilization, and host-platform state information. Ground and flight test results of the LOS stabilization controller on the Tigereye gimbal have shown stabilization performance improvements over legacy systems. However, system characteristics identified in testing still limit stabilization performance; these include host-system vibration, gimbal joint friction and backlash, joint actuation compliance, payload CG asymmetry, and gyro noise and drift. The control system design has been highly modularized in anticipation of future algorithm and hardware upgrades to address the remaining issues and extend the system's capabilities.
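    The inner rate loop of such a gyro-fed LOS stabilization controller can be sketched as a simple PI controller closed around the rate gyro. The gains, loop rate, and interface below are illustrative assumptions, not the Tigereye design values:

```python
# Minimal sketch of a gimbal inner rate loop (assumed PI structure).
class RateStabilizer:
    def __init__(self, kp=0.8, ki=0.2, dt=1.0 / 500.0):
        self.kp, self.ki, self.dt = kp, ki, dt  # gains, loop period (s)
        self.integral = 0.0

    def update(self, rate_cmd_dps, gyro_rate_dps):
        # Drive the measured inertial LOS rate toward the commanded rate,
        # so base (host aircraft) motion sensed by the gyro is rejected.
        err = rate_cmd_dps - gyro_rate_dps
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral  # actuator command


# Holding the LOS still (zero commanded rate) against a sensed disturbance:
stab = RateStabilizer()
cmd = stab.update(rate_cmd_dps=0.0, gyro_rate_dps=3.5)
```

    A fielded design would add an outer pointing loop plus compensation for the friction, backlash, compliance, and gyro drift effects the abstract identifies as limiting factors.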

    Learning, Moving, And Predicting With Global Motion Representations

    In order to effectively respond to and influence the world they inhabit, animals and other intelligent agents must understand and predict the state of the world and its dynamics. An agent that can characterize how the world moves is better equipped to engage it. Current methods of motion computation rely on local representations of motion (such as optical flow) or simple, rigid global representations (such as camera motion). These methods are useful, but they are difficult to estimate reliably and limited in their applicability to real-world settings, where agents frequently must reason about complex, highly nonrigid motion over long time horizons. In this dissertation, I present methods developed with the goal of building more flexible and powerful notions of motion needed by agents facing the challenges of a dynamic, nonrigid world. This work is organized around a view of motion as a global phenomenon that is not adequately addressed by local or low-level descriptions, but that is best understood when analyzed at the level of whole images and scenes. I develop methods to: (i) robustly estimate camera motion from noisy optical flow estimates by exploiting the global, statistical relationship between the optical flow field and camera motion under projective geometry; (ii) learn representations of visual motion directly from unlabeled image sequences using learning rules derived from a formulation of image transformation in terms of its group properties; (iii) predict future frames of a video by learning a joint representation of the instantaneous state of the visual world and its motion, using a view of motion as transformations of world state. I situate this work in the broader context of ongoing computational and biological investigations into the problem of estimating motion for intelligent perception and action
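    For contribution (i), a toy version of the idea, exploiting the global statistical relationship between the flow field and camera motion, is sketched below using a rotation-only instantaneous-motion model (after Longuet-Higgins and Prazdny), under which optical flow is linear in the angular velocity and independent of scene depth. The model choice and all numbers are illustrative assumptions, not the dissertation's method:

```python
import numpy as np

rng = np.random.default_rng(0)
f = 500.0                          # assumed focal length (pixels)
x = rng.uniform(-200, 200, 400)    # sampled image coordinates
y = rng.uniform(-150, 150, 400)

def rotational_flow_basis(x, y, f):
    # Rows map omega = (wx, wy, wz) to the u- and v-components of flow.
    u_rows = np.stack([x * y / f, -(f + x**2 / f), y], axis=1)
    v_rows = np.stack([f + y**2 / f, -x * y / f, -x], axis=1)
    return np.vstack([u_rows, v_rows])

A = rotational_flow_basis(x, y, f)
omega_true = np.array([0.01, -0.02, 0.005])  # rad/frame
flow = A @ omega_true + 0.5 * rng.standard_normal(A.shape[0])  # noisy flow

# Global least-squares estimate of camera rotation from the whole field:
# per-vector noise averages out because every flow vector constrains
# the same three motion parameters.
omega_hat, *_ = np.linalg.lstsq(A, flow, rcond=None)
print(omega_hat)  # close to omega_true despite the noisy flow estimates
```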