3 research outputs found

    Vision-Based Control of Unmanned Aerial Vehicles for Automated Structural Monitoring and Geo-Structural Analysis of Civil Infrastructure Systems

    The emergence of wireless sensors capable of sensing, embedded computing, and wireless communication has provided an affordable means of monitoring large-scale civil infrastructure systems. To date, the majority of existing monitoring systems, including those based on wireless sensors, are stationary, with measurement nodes installed with no intention of later relocation. Many monitoring applications involving structural and geotechnical systems require a high density of sensors to provide sufficient spatial resolution for assessing system performance. While wireless sensors have made high-density monitoring systems possible, an alternative approach is to make the sensors themselves mobile, transforming wireless sensor networks (WSNs) into mobile sensor networks (MSNs). Doing so offers many benefits, including reducing the total number of sensors needed and introducing the ability to learn from collected data to improve the placement of installed sensors. One approach to achieving MSNs is to integrate unmanned aerial vehicles (UAVs) into the monitoring application. UAV-based MSNs have the potential to transform current monitoring practices by improving the speed and quality of data collected while reducing overall system costs. The efforts of this study are chiefly focused on using autonomous UAVs to deploy, operate, and reconfigure MSNs in a fully autonomous manner for field monitoring of civil infrastructure systems. This study aims to overcome two main challenges pertaining to UAV-enabled wireless monitoring: the need for high-precision localization methods for outdoor UAV navigation, and the need for modes of direct interaction between UAVs and their built or natural environments.
A vision-aided UAV positioning algorithm is first introduced to augment traditional inertial sensing techniques, enhancing the ability of UAVs to localize themselves accurately within a civil infrastructure system for the placement of wireless sensors. Multi-resolution fiducial markers indicating sensor placement locations are applied to the surface of a structure, serving as navigation guides and precision landing targets for a UAV carrying a wireless sensor. Visual-inertial fusion is implemented via a discrete-time Kalman filter to further increase the robustness of the relative position estimation algorithm, resulting in localization accuracies of 10 cm or better. The precision landing capability that enables MSN topology changes is validated on a simple beam, with the UAV-based MSN collecting ambient response data for extraction of the global mode shapes of the structure. The work also explores the integration of a magnetic gripper with a UAV to drop defined weights from an elevation, providing a high-energy seismic source for MSNs engaged in seismic monitoring applications. Leveraging tailored visual detection and precise position control techniques for UAVs, the work illustrates the ability of UAVs to, in a repeated and autonomous fashion, deploy wireless geophones and introduce an impulsive seismic source for in situ shear wave velocity profiling using the spectral analysis of surface waves (SASW) method. The dispersion curve of the shear wave profile of the geotechnical system obtained by the autonomous UAV-based MSN architecture is shown to closely match that obtained by a traditional wired, manually operated SASW data collection system. The developments and proof-of-concept systems advanced in this study extend the body of knowledge of robot-deployed MSNs, with the hope of extending the capabilities of monitoring systems while eliminating the need for human intervention in their design and use.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169980/1/zhh_1.pd
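The visual-inertial fusion described in this abstract can be sketched with a minimal discrete-time Kalman filter. The example below is a 1-D, constant-velocity illustration with invented noise parameters (`dt`, `Q`, `R`) and an invented marker measurement; it is not the filter used in the thesis:

```python
import numpy as np

# Minimal sketch: a discrete-time Kalman filter fusing inertial
# dead-reckoning with fiducial-marker position fixes (1-D, constant
# velocity model). All values here are illustrative assumptions.

dt = 0.02                               # assumed 50 Hz update rate
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])              # camera observes position only
Q = 1e-3 * np.eye(2)                    # process noise (inertial drift)
R = np.array([[0.05**2]])               # 5 cm marker measurement noise

def kf_step(x, P, z):
    """One predict/update cycle; z is the vision-derived position fix."""
    # Predict with the inertial motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the visual measurement
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: a 50 cm initial position error is pulled back toward
# repeated marker fixes at the true position (0 m).
x = np.array([[0.5], [0.0]])
P = np.eye(2)
for _ in range(100):
    x, P = kf_step(x, P, np.array([[0.0]]))
assert abs(x[0, 0]) < 0.10              # well inside a 10 cm band
```

The same predict/update structure extends to 3-D position and velocity by enlarging the state and measurement matrices.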

    Function Embeddings for Multi-modal Bayesian Inference

    Tractable Bayesian inference is a fundamental challenge in robotics and machine learning. Standard approaches such as Gaussian process regression and Kalman filtering make strong Gaussianity assumptions about the underlying distributions. Such assumptions, however, can quickly break down when dealing with complex systems such as the dynamics of a robot or multi-variate spatial models. In this thesis we aim to solve Bayesian regression and filtering problems without making assumptions about the underlying distributions. We develop techniques to produce rich posterior representations for complex, multi-modal phenomena. Our work extends kernel Bayes' rule (KBR), which uses empirical estimates of distributions derived from a set of training samples and embeds them into a high-dimensional reproducing kernel Hilbert space (RKHS). Bayes' rule itself is then carried out on elements of this space. Our first contribution is an efficient method for estimating posterior density functions from kernel Bayes' rule, applied to both filtering and regression. By embedding fixed-mean mixtures of component distributions, we can efficiently find an approximate pre-image by optimising the mixture weights using a convex quadratic program. The result is a complex, multi-modal posterior representation. Our next contributions are methods for estimating cumulative distributions and quantile estimates from the posterior embedding of kernel Bayes' rule. We examine a number of novel methods, including those based on our density estimation techniques, as well as estimating the cumulative distribution directly through the reproducing property of RKHSs. Finally, we develop a novel method for scaling kernel Bayes' rule inference to large datasets, using a reduced-set construction optimised with the posterior likelihood. This method retains the ability to perform multi-output inference, as well as our earlier contributions of explicitly non-Gaussian posterior representations and quantile estimates.
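The pre-image step this abstract describes can be illustrated with a small sketch. Assuming a Gaussian RBF kernel and fixed 1-D mixture components, the quadratic objective over the probability simplex is minimised here by projected gradient descent, standing in for the quadratic program in the abstract; every name, kernel width, and sample below is an invented toy:

```python
import numpy as np

# Hypothetical sketch: approximate the pre-image of an RKHS posterior
# embedding as a fixed-component mixture, optimising only the mixture
# weights w on the simplex {w : w >= 0, sum(w) = 1}.

def rbf(a, b, s=0.5):
    """Gaussian RBF kernel matrix between 1-D point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s**2))

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def fit_mixture_weights(centers, samples, alpha, iters=500, lr=0.1):
    """Minimise ||sum_j w_j k(c_j,.) - sum_i alpha_i k(x_i,.)||_RKHS^2."""
    A = rbf(centers, centers)            # Gram matrix of the components
    b = rbf(centers, samples) @ alpha    # cross term with the embedding
    w = np.full(len(centers), 1.0 / len(centers))
    for _ in range(iters):
        w = project_simplex(w - lr * (A @ w - b))  # QP gradient step
    return w

# Toy embedding concentrated near x = 1 should pull almost all of the
# mixture weight onto the component centred at 1.0.
samples = np.array([0.9, 1.0, 1.1])
alpha = np.full(3, 1.0 / 3.0)            # empirical embedding weights
centers = np.array([-1.0, 1.0])
w = fit_mixture_weights(centers, samples, alpha)
assert w[1] > 0.9
```

Because the objective is a convex quadratic and the simplex is a convex set, the projected iteration converges to the same optimum a dedicated QP solver would find.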

    Simulation-Based and Data-Driven Approaches to Industrial Digital Twinning Towards Autonomous Smart Manufacturing Systems

    A manufacturing paradigm shift from conventional control pyramids to decentralized, service-oriented, and cyber-physical systems (CPSs) is taking place in today’s Industry 4.0 revolution. Generally accepted roles and implementation recipes for cyber systems are expected to be standardized in the future of the manufacturing industry. Developing affordable and customizable cyber-physical production system (CPPS) and digital twin implementations infuses new vitality into current Industry 4.0 and Smart Manufacturing initiatives. Specifically, Smart Manufacturing systems are currently looking for methods to connect factories to control processes in a more dynamic and open environment by filling the gaps between virtual and physical systems. The work presented in this dissertation first utilizes industrial digital transformation methods for the automation of robotic manufacturing systems, constructing a simulation-based surrogate system as a digital twin to visually represent manufacturing cells, accurately simulate robot behaviors, promptly predict system faults, and adaptively control manipulated variables. Then, a CPS-enabled control architecture is presented that accommodates: intelligent information systems involving domain knowledge, empirical models, and simulation; fast and secure industrial communication networks; cognitive automation through rapid signal analytics and machine learning (ML) based feature extraction; and interoperability between machines and humans. A successful semantic integration of process indicators is fundamental to future control autonomy. Hence, a product-centered signature mapping approach to automated digital twinning is further presented, featuring a hybrid implementation of smart sensing, a signature-based 3D shape feature extractor, and a knowledge taxonomy. Furthermore, the capabilities of members of the family of Deep Reinforcement Learning (DRL) algorithms are explored within the context of manufacturing operational control intelligence.
Preliminary training results are presented in this work as an initial attempt to incorporate DRL-based Artificial Intelligence (AI) into industrial control processes. The results of this dissertation demonstrate a digital thread across the autonomous Smart Manufacturing lifecycle that enables complex signal processing, semantic integration, automatic derivation of manufacturing strategies, intelligent scheduling of operations, and virtual verification at the system level. The successful integration of currently available industrial platforms not only provides facile environments for process verification and optimization, but also allows the derived strategies to be readily deployed to the physical shop floor. The dissertation concludes with a summary, conclusions, and suggestions for further work.
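As an illustration of the simplest end of the reinforcement learning family surveyed in this abstract, the sketch below trains a tabular Q-learning agent on a toy five-step "job routing" chain. The environment, rewards, and hyperparameters are all invented and far simpler than the dissertation's DRL setting:

```python
import numpy as np

# Hypothetical toy: tabular Q-learning on a 5-step chain where action 1
# advances the job toward completion and action 0 stalls it. Reward is
# given only on reaching the goal state.

rng = np.random.default_rng(0)
n_states, n_actions, goal = 6, 2, 5
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Advance on action 1, stall on action 0; reward 1 at the goal."""
    s2 = min(s + 1, goal) if a == 1 else s
    return s2, (1.0 if s2 == goal else 0.0), s2 == goal

for _ in range(500):                      # training episodes
    s = 0
    for _ in range(30):                   # epsilon-greedy rollout
        a = rng.integers(n_actions) if rng.random() < 0.3 else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Temporal-difference update with lr 0.5 and discount 0.9
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

# The greedy policy typically learns to always advance toward the goal.
greedy = [int(Q[s].argmax()) for s in range(goal)]
assert Q[goal - 1, 1] > Q[goal - 1, 0]    # advancing beats stalling
```

In the dissertation's setting the table would be replaced by a neural network and the toy chain by a simulated manufacturing process, but the TD update is the same conceptual core.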