    Flexural Vibration Measurement and Sound Radiation Estimate of Thin Structures with Multiple Cameras

    This thesis presents a simulation and experimental study focussed on the measurement of flexural vibration, and on the estimation of the sound radiation, of distributed structures by optical means, in particular using multiple, i.e. more than two, synchronous cameras. The study considers two model problems composed of a cantilever beam and a plate excited by a tonal force at the first three fundamental resonance frequencies of flexural vibration. It therefore considers the measurement of the deflection shapes at these frequencies, which accurately approximate the first three flexural mode shapes. The study is organised in four parts. The first part introduces the state of the art on the topic and reviews the theoretical principles of optical measurements. The second part presents a simplified optical model employed to simulate how the accuracy of the measurement of the first three flexural deflection shapes of the structures considered here varies with respect to: a) the distance of the cameras from the structure; b) the aperture angle between pairs of cameras; c) the elevation angle formed by the optical axis of the camera and the plane of the structure; d) the resolution of the cameras; and e) the number of cameras. The principal objective of the study was to show how the accuracy of the measurements can be significantly increased by using multiple cameras. The third part presents experimental results obtained on a beam rig and camera setup assembled from off-the-shelf devices. The experimental study focussed on the first flexural deflection shape of a cantilever beam and confirmed the findings of the simulation studies. The simulations and experiments presented in this work quantify and confirm that the use of multiple cameras allows good vibration measurement accuracy even at low spatial camera resolutions. Since the frame rate and cost of cameras are limited by the amount of data they can process per unit time, these results suggest that multiple, relatively cheap, high-speed, low-spatial-resolution cameras can be used to perform vibration measurements in practical applications. The fourth part examines the sound radiation generated by vibrating structures. In particular, it evaluates how the accuracy of the estimate of the sound radiation emitted by the reconstructed first three flexural deflection shapes of a plate varies with respect to: a) the distance of the cameras from the structure; b) the azimuthal angle between the cameras; c) the elevation angle of the cameras; d) the resolution of the cameras; and e) the number of cameras. The principal objective of this fourth part was to understand whether the results obtained on the influence of the parameters listed above on the flexural vibration measurements could also be applied to the estimate of the sound radiation.
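
    The benefit of extra cameras can be pictured through the standard linear (DLT) triangulation step that underlies multi-camera 3D measurement: each synchronous view contributes two linear constraints on a tracked point, so additional cameras over-determine its 3D position and average down pixel noise. The following Python sketch is illustrative only; the projection matrices and pixel observations are hypothetical inputs, not the thesis's setup.

        import numpy as np

        def triangulate(points_2d, proj_mats):
            """Linear (DLT) triangulation of one 3D point from N >= 2 views.

            points_2d : list of (x, y) pixel observations, one per camera
            proj_mats : list of 3x4 camera projection matrices
            """
            rows = []
            for (x, y), P in zip(points_2d, proj_mats):
                # Each view contributes two linear constraints on the
                # homogeneous point X = (X, Y, Z, 1).
                rows.append(x * P[2] - P[0])
                rows.append(y * P[2] - P[1])
            A = np.stack(rows)
            # Least-squares solution: right singular vector of the smallest
            # singular value; extra cameras over-determine the system and
            # reduce the effect of per-pixel measurement noise.
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]

    With calibrated cameras, running such a step per tracked point and per video frame yields samples of the deflection shapes the abstract refers to.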

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only as a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
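
    The co-registration described above ultimately rests on composing rigid transforms between coordinate frames (HMD, imaging device, operating room). The sketch below illustrates that bookkeeping with hypothetical frame names and calibration values; it is a generic illustration, not the dissertation's calibration pipeline.

        import numpy as np

        def rigid(R, t):
            """Build a 4x4 homogeneous transform from rotation R and translation t."""
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = t
            return T

        # Hypothetical calibration results: X-ray device pose in the room frame,
        # and HMD pose in the same room frame (e.g. from inside-out tracking).
        T_room_from_xray = rigid(np.eye(3), np.array([1.0, 0.0, 1.5]))
        T_room_from_hmd = rigid(np.eye(3), np.array([0.2, 1.6, 0.0]))

        # A point annotated in the X-ray frame, re-expressed in the HMD frame
        # by chaining transforms through the shared room frame.
        T_hmd_from_xray = np.linalg.inv(T_room_from_hmd) @ T_room_from_xray
        p_xray = np.array([0.05, 0.02, 0.30, 1.0])
        p_hmd = T_hmd_from_xray @ p_xray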

    Safe localization and navigation for UAV-based inspection in confined spaces

    In this thesis, a sensor fusion algorithm that combines multiple Visual Inertial Odometry (VIO) sensors is implemented in order to provide an accurate and robust absolute localization of an Unmanned Aerial Vehicle (UAV) in a confined space. Specifically, this work is part of a larger project called Inspectrone, which aims to automate inspection inside the ballast tanks of ships using UAVs. In this environment no accurate GPS can be used, because the ship shields the signals coming from the satellites. The developed solution adopts cameras in order to provide a relative positioning reference for the drone. Different Extended Kalman Filter configurations are investigated by implementing them in ROS (Robot Operating System), examining how to merge data coming from multiple RealSense T265 cameras. In addition, a novel version of a quadcopter was built in order to test the sensor fusion algorithm and compare it with the current variant of the drone used in the Inspectrone project. The results show that the proposed filter is able to deal with faulty and malfunctioning sensors, always keeping the drone aware of its position in space. Moreover, the algorithm is capable of outperforming the relative positioning system employed in the initial version of the drone.
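
    A minimal sketch of the kind of fault tolerance described above: an EKF-style measurement update that fuses redundant position fixes from several VIO sources and gates out inconsistent ones with a Mahalanobis-distance test. This is a generic illustration with made-up noise values, not the Inspectrone filter itself.

        import numpy as np

        def fuse_position(x, P, measurements, gate=11.34):
            """Fuse redundant 3D position fixes into state x (3,) with covariance P.

            measurements : list of (z, R) pairs, one position fix per VIO camera
            gate         : squared-Mahalanobis threshold (11.34 ~ 99% chi-square, 3 DoF)
            """
            H = np.eye(3)                         # each camera observes position directly
            for z, R in measurements:
                y = z - H @ x                     # innovation
                S = H @ P @ H.T + R               # innovation covariance
                if y @ np.linalg.solve(S, y) > gate:
                    continue                      # inconsistent fix: treat sensor as faulty
                K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
                x = x + K @ y
                P = (np.eye(3) - K @ H) @ P
            return x, P

    Fixes that disagree with the current estimate beyond the gate are simply skipped, so a malfunctioning camera degrades the update rate rather than corrupting the state.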

    Spatial Localization of EEG Electrodes in a TOF+CCD Camera System

    A crucial element of electroencephalography (EEG) technology is the accurate estimation of the EEG electrode positions on a specific human head, which is very useful for precise analysis of brain function. Photogrammetry has become an effective method in this field. This study aims to propose a more reliable and efficient method which can conveniently acquire 3D information and locate the source signal accurately in real time. The main objective is the identification and 3D localization of EEG electrode positions using a system consisting of CCD cameras and Time-of-Flight (TOF) cameras. To calibrate the camera group accurately, and differently from previous camera calibration approaches, a method is introduced in this report which uses the point cloud directly rather than the depth image. Experimental results indicate that the typical reconstruction distance error in this study is 3.26 mm for real-time applications, which is much better than the widely used electromagnetic method in clinical medicine. The accuracy can be further improved to a great extent by using a high-resolution camera.
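
    Calibrating the TOF camera against the CCD cameras using the point cloud directly amounts to rigidly aligning corresponding 3D point sets. One standard building block for that task (a plausible sketch; the report's exact procedure may differ) is the Kabsch/Procrustes solution:

        import numpy as np

        def kabsch(A, B):
            """Rigid transform (R, t) minimizing sum ||R @ a_i + t - b_i||^2
            over corresponding Nx3 point sets A (e.g. TOF cloud points) and
            B (e.g. electrode positions triangulated from the CCD cameras)."""
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            H = (A - ca).T @ (B - cb)                 # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cb - R @ ca
            return R, t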

    Learning Visual Patterns: Imposing Order on Objects, Trajectories and Networks

    Fundamental to many tasks in the field of computer vision, this work considers the understanding of observed visual patterns in static images and dynamic scenes. Within this broad domain, we focus on three particular subtasks, contributing novel solutions to: (a) the subordinate categorization of objects (avian species specifically), (b) the analysis of multi-agent interactions using the agent trajectories, and (c) the estimation of camera network topology. In contrast to object recognition, where the presence or absence of certain parts is generally indicative of basic-level category, the problem of subordinate categorization rests on the ability to establish salient distinctions amongst the characteristics of those parts which comprise the basic-level category. Focusing on an avian domain due to the fine-grained structure of the category taxonomy, we explore a pose-normalized appearance model based on a volumetric poselet scheme. The variation in shape and appearance properties of these parts across a taxonomy provides the cues needed for subordinate categorization. Our model associates the underlying image pattern parameters used for detection with corresponding volumetric part location, scale and orientation parameters. These parameters implicitly define a mapping from the image pixels into a pose-normalized appearance space, removing view and pose dependencies and facilitating fine-grained categorization with relatively few training examples. We next examine the problem of leveraging trajectories to understand interactions in dynamic multi-agent environments. We focus on perceptual tasks, those for which an agent's behavior is governed largely by the individuals and objects around them. We introduce kinetic accessibility, a model for evaluating the perceived, and thus anticipated, movements of other agents. This new model is then applied to the analysis of basketball footage. The kinetic accessibility measures are coupled with low-level visual cues and domain-specific knowledge to determine which player has possession of the ball and to recognize events such as passes, shots and turnovers. Finally, we present two differing approaches for estimating camera network topology. The first technique seeks to partition a set of observations made in the camera network into individual object trajectories. As exhaustive consideration of the partition space is intractable, partitions are considered incrementally, adding observations while pruning unlikely partitions. Partition likelihood is determined by the evaluation of a probabilistic graphical model, balancing the consistency of appearances across a hypothesized trajectory with the latest predictions of camera adjacency. A primary benefit of estimating object trajectories is that higher-order statistics, as opposed to just first-order adjacency, can be derived, yielding resilience to camera failure and the potential for improved tracking performance between cameras. Unlike the former centralized technique, the latter takes a decentralized approach, estimating the global network topology with local computations using sequential Bayesian estimation on a modified multinomial distribution. Key to this method is an information-theoretic appearance model for observation weighting. The inherently distributed nature of the approach allows the simultaneous utilization of all sensors as processing agents in collectively recovering the network topology.
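
    The decentralized topology estimate can be pictured as each camera maintaining a posterior over which camera a departing object will appear in next, updated sequentially from appearance-weighted matches. The sketch below uses a plain Dirichlet-multinomial model as a stand-in for the dissertation's modified multinomial; the class and the example weights are hypothetical.

        import numpy as np

        class AdjacencyEstimator:
            """Per-camera posterior over next-camera transitions.

            Conjugate Dirichlet prior over a multinomial: each observed
            (or appearance-weighted) handoff to camera j adds weight to
            alpha[j], so the posterior mean sharpens as evidence arrives.
            """

            def __init__(self, n_cameras, prior=1.0):
                self.alpha = np.full(n_cameras, prior)  # Dirichlet pseudo-counts

            def update(self, camera_j, weight=1.0):
                # 'weight' would come from an appearance-similarity model;
                # a confident match counts fully, ambiguous matches count less.
                self.alpha[camera_j] += weight

            def posterior_mean(self):
                return self.alpha / self.alpha.sum()

        # Example: objects leaving camera 0 reappear mostly in camera 2.
        est = AdjacencyEstimator(n_cameras=4)
        for j, w in [(2, 1.0), (2, 0.8), (1, 0.3), (2, 1.0)]:
            est.update(j, w)
        print(est.posterior_mean())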