
    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
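
    As an illustration of one family of techniques such a review covers, the sketch below computes dense depth from a calibrated, rectified stereo laparoscope via block matching and the disparity relation Z = f·B/d. It is a generic example, not a method from the paper; the focal length, baseline and file names are placeholder assumptions.

        # Illustrative sketch only: depth from a calibrated stereo laparoscope,
        # one representative passive optical 3D reconstruction technique.
        # Focal length, baseline and image files are placeholder assumptions.
        import cv2
        import numpy as np

        focal_px = 560.0      # assumed focal length in pixels
        baseline_m = 0.004    # assumed stereo baseline (~4 mm)

        left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

        # Semi-global block matching on rectified images; OpenCV returns the
        # disparity as a fixed-point value scaled by 16.
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0

        # Depth from disparity: Z = f * B / d, valid only where d > 0.
        valid = disparity > 0
        depth_m = np.zeros_like(disparity)
        depth_m[valid] = focal_px * baseline_m / disparity[valid]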

    Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy

    In neurosurgery, the preoperative patient models used as an intraoperative reference change as a result of brain shift. Meaningful use of the preoperative virtual models during the operation therefore requires a model update. The NEAR project, Neuroendoscopy towards Augmented Reality, describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality and triangulation modules.
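
    The flavour of such a calibration step can be sketched with a generic chessboard calibration that uses OpenCV's rational distortion model, whose higher-order terms suit strongly distorting lenses. This is only a baseline illustration under assumed board size and file names, not the NEAR calibration model itself.

        # Generic chessboard calibration with an extended (rational) distortion
        # model, a common baseline for strongly distorting lenses. This is not
        # the NEAR model; board size and file names are assumptions.
        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                        # inner corners of the board (assumed)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, image_size = [], [], None
        for path in glob.glob("calib_*.png"):   # placeholder file names
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                image_size = gray.shape[::-1]

        # CALIB_RATIONAL_MODEL adds the k4..k6 coefficients, giving the model
        # more freedom to fit heavy radial distortion.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None,
            flags=cv2.CALIB_RATIONAL_MODEL)
        print("reprojection RMS (px):", rms)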

    The wake dynamics and flight forces of the fruit fly Drosophila melanogaster

    We have used flow visualizations and instantaneous force measurements of tethered fruit flies (Drosophila melanogaster) to study the dynamics of force generation during flight. During each complete stroke cycle, the flies generate one single vortex loop consisting of vorticity shed during the downstroke and ventral flip. This gross pattern of wake structure in Drosophila is similar to those described for hovering birds and some other insects. The wake structure differed from those previously described, however, in that the vortex filaments shed during ventral stroke reversal did not fuse to complete a circular ring, but rather attached temporarily to the body to complete an inverted heart-shaped vortex loop. The attached ventral filaments of the loop subsequently slid along the length of the body and eventually fused at the tip of the abdomen. We found no evidence for the shedding of wing-tip vorticity during the upstroke, and argue that this is due to an extreme form of the Wagner effect acting at that time. The flow visualizations predicted that maximum flight forces would be generated during the downstroke and ventral reversal, with little or no force generated during the upstroke. The instantaneous force measurements using laser interferometry verified the periodic nature of force generation. Within each stroke cycle, there was one plateau of high force generation followed by a period of low force, which roughly correlated with the downstroke and upstroke periods, respectively. However, the fluctuations in force lagged behind their expected occurrence within the wing-stroke cycle by approximately 1 ms, or one-fifth of the complete stroke cycle. This temporal discrepancy exceeds the range of expected inaccuracies and artifacts in the measurements, and we tentatively discuss potential retarding effects within the underlying fluid mechanics.

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, high-precision object recognition and localization, ...). In this paper, we propose a human-friendly, reliable and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model. This model unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides some important advantages compared to other state-of-the-art systems: it is general (i.e., well suited for different types of sensors), it is based on an easy and stable calibration protocol, it provides greater calibration accuracy, and it has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our statements.
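
    As a rough sketch of what a two-component depth error model can look like, the code below fits (i) a per-pixel systematic offset map and (ii) a global polynomial correction in the measured depth, against reference depth frames. These two components are an assumption chosen for illustration; the paper's actual error model is not reproduced here.

        # Illustrative two-part depth correction: a per-pixel offset map plus a
        # global polynomial in the measured depth. A stand-in for illustration,
        # not the error model proposed in the paper.
        import numpy as np

        def fit_depth_correction(measured, reference, poly_degree=2):
            """measured, reference: stacks of depth frames (N, H, W) in metres,
            e.g. captured against a target of known geometry."""
            # Component 1: per-pixel systematic offset, averaged over frames.
            pixel_offset = np.nanmean(reference - measured, axis=0)

            # Component 2: global polynomial mapping measured depth to the
            # residual error that the offset map does not explain.
            residual = (reference - measured - pixel_offset).ravel()
            z = measured.ravel()
            keep = np.isfinite(residual) & np.isfinite(z)
            poly = np.polyfit(z[keep], residual[keep], poly_degree)
            return pixel_offset, poly

        def correct_depth(depth, pixel_offset, poly):
            return depth + pixel_offset + np.polyval(poly, depth)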

    Learning from Nature: Unsteady Flow Physics in Bioinspired Flapping Flight

    There are few studies on the flexibility and associated aerodynamic performance of insect wings during free flight, even though such wings are promising templates for bioinspired micro aerial vehicles (MAVs). To this end, this chapter aims at understanding the wing deformation and motions of insects through a combined experimental and computational approach. Two sets of techniques are currently being developed to make this integration possible: first, data acquisition through high-speed photogrammetry and accurate data reconstruction, to quantify the wing and body motions in free flight in great detail; and second, direct numerical simulation (DNS), for force computation and visualization of vortex structures. Unlike most previous studies, which focus on the near-field vortex formation mechanisms of a single rigid flapping wing, this chapter presents freely flying insects with full-field vortex structures and the associated unsteady aerodynamics at low Reynolds numbers. The chapter is expected to provide valuable insights into the underlying flow physics of low Reynolds number flight in nature, which are of great significance for future flapping-wing MAV design and optimization research.
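
    For a sense of the flow regime involved, a back-of-the-envelope Reynolds number can be computed with the usual mean-wingtip-speed convention, Re = U c / ν with U = 2 Φ f R. The fruit-fly-like numbers below are assumptions for illustration, not data from the chapter.

        # Back-of-the-envelope Reynolds number for a hovering insect wing,
        # Re = U * c / nu with U taken as the mean wingtip speed 2 * Phi * f * R.
        # All numerical values are fruit-fly-like assumptions, not chapter data.
        import math

        f = 200.0                    # wingbeat frequency, Hz (assumed)
        Phi = math.radians(150.0)    # peak-to-peak stroke amplitude, rad (assumed)
        R = 2.5e-3                   # wing length, m (assumed)
        c = 0.8e-3                   # mean chord, m (assumed)
        nu = 1.5e-5                  # kinematic viscosity of air, m^2/s

        U = 2.0 * Phi * f * R        # mean wingtip speed (arc length per period)
        Re = U * c / nu
        print(f"mean tip speed ~ {U:.2f} m/s, Re ~ {Re:.0f}")   # roughly Re ~ 10^2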

    Automatic Reconstruction of Textured 3D Models

    Three-dimensional modeling and visualization of environments is an increasingly important problem. This work addresses automatic 3D reconstruction: we present a system for unsupervised reconstruction of textured 3D models in the context of modeling indoor environments. We present solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models.
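
    One generic building block for assembling such large-scale models from partial scans is rigid registration; the minimal point-to-point ICP below (nearest-neighbour matching plus a closed-form Kabsch update) illustrates the idea only and is not the specific pipeline of this work.

        # Minimal point-to-point ICP for rigidly aligning two partial scans, a
        # generic building block for assembling larger models. Illustration
        # only, not the pipeline described in the work above.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dst_c - R @ src_c

        def icp(source, target, iterations=30):
            """Align source (N, 3) onto target (M, 3); returns the moved points."""
            tree = cKDTree(target)
            moved = source.copy()
            for _ in range(iterations):
                _, idx = tree.query(moved)    # nearest-neighbour correspondences
                R, t = best_rigid_transform(moved, target[idx])
                moved = moved @ R.T + t
            return moved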

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is always a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making the process as simple as shooting pictures. It requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration into 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist the user’s labeling work. In summary, this thesis develops new algorithms to produce 3D content from a single camera. Depending on the input data, the approach can build high-fidelity 3D models of dynamic and deformable objects if depth maps are provided; otherwise, it can convert monocular video clips into stereoscopic video.
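
    The physical principle behind light fall-off stereo can be stated compactly: a point light obeys the inverse-square law, so two exposures with the light moved a known distance Δ further from the scene give a per-pixel intensity ratio I_near/I_far = ((r + Δ)/r)², hence r = Δ/(√(ratio) − 1). The sketch below encodes only that principle, under the assumption of linear intensities and a light moved along the viewing direction; it is not the thesis’ exact LFS formulation or hardware.

        # Inverse-square principle behind light fall-off stereo (LFS): two
        # exposures with a point light moved a known distance delta further
        # away give I_near / I_far = ((r + delta) / r)^2, so
        # r = delta / (sqrt(ratio) - 1). Principle only, not the thesis' exact
        # formulation; assumes linear (radiometrically calibrated) intensities.
        import numpy as np

        def lfs_depth(img_near, img_far, delta, eps=1e-6):
            """img_near, img_far: float images with the light at distance r and
            r + delta from the surface; returns the per-pixel distance r."""
            ratio = np.clip(img_near / np.maximum(img_far, eps), 1.0 + eps, None)
            return delta / (np.sqrt(ratio) - 1.0)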

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to allow real-time simulation) without altering the precision of the results.

    The quantitative analysis of transonic flows by holographic interferometry

    This thesis explores the feasibility of routine transonic flow analysis by holographic interferometry. Holography is potentially an important quantitative flow diagnostic, because whole-field data is acquired non-intrusively without the use of particle seeding. Holographic recording geometries are assessed, and an image-plane specular illumination configuration is shown to reduce speckle noise and maximise the depth-of-field of the reconstructed images. Initially, a NACA 0012 aerofoil is wind tunnel tested to investigate the analysis of two-dimensional flows. A method is developed for extracting whole-field density data from the reconstructed interferograms. Fringe analysis errors are quantified using a combination of experimental and computer-generated imagery. The results are compared quantitatively with a laminar boundary layer Navier-Stokes computational fluid dynamics (CFD) prediction. Agreement of the data is excellent, except in the separated wake where the experimental boundary layer has undergone turbulent transition. A second wind tunnel test, on a cone-cylinder model, demonstrates the feasibility of recording multi-directional interferometric projections using holographic optical elements (HOEs). The prototype system is highly compact and combines the versatility of diffractive elements with the efficiency of refractive components. The processed interferograms are compared to an integrated Euler CFD prediction, and it is shown that the experimental shock cone is elliptical due to flow confinement. Tomographic reconstruction algorithms are reviewed for analysing density projections of a three-dimensional flow. Algebraic reconstruction methods are studied in greater detail, because they produce accurate results when the data is ill-posed. The performance of these algorithms is assessed using CFD input data, and it is shown that a reconstruction accuracy of approximately 1% may be obtained when sixteen projections are recorded over a viewing angle of ±58°. The effect of noise on the data is also quantified, and methods are suggested for visualising and reconstructing obstructed flow regions.
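
    The algebraic reconstruction step can be illustrated with the basic Kaczmarz/ART update, x ← x + λ (b_i − a_i·x) a_i / ‖a_i‖², which sweeps over the projection equations; the dense toy implementation below is generic and is not the code used in the thesis.

        # Generic Kaczmarz/ART sketch for tomographic reconstruction from few,
        # limited-angle projections: x is the flattened density field, A the
        # projection (system) matrix, b the measured path integrals. A dense
        # toy implementation for illustration, not the thesis code.
        import numpy as np

        def art(A, b, iterations=50, relaxation=0.5):
            x = np.zeros(A.shape[1])
            row_norms = np.einsum("ij,ij->i", A, A)      # ||a_i||^2 per row
            for _ in range(iterations):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0.0:
                        continue
                    # Relaxed projection of x onto the hyperplane a_i . x = b_i.
                    x += relaxation * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x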