4,828 research outputs found

    Real-time High Resolution Fusion of Depth Maps on GPU

    Full text link
    A system for live high-quality surface reconstruction using a single moving depth camera on commodity hardware is presented. High accuracy and a real-time frame rate are achieved by utilizing graphics hardware computing capabilities via OpenCL and by using a sparse data structure for volumetric surface representation. The depth sensor pose is estimated by combining a serial texture registration algorithm with an iterative closest point (ICP) algorithm that aligns the obtained depth map to the estimated scene model. The aligned surface is then fused into the scene, with a Kalman filter used to improve fusion quality. The surface is represented by a truncated signed distance function (TSDF) stored as a block-based sparse buffer. Use of a sparse data structure greatly increases the accuracy of scanned surfaces and the maximum scanning area. Traditional GPU implementations of the volumetric rendering and fusion algorithms were modified to exploit sparsity and achieve the desired performance. Incorporating texture registration for sensor pose estimation and a Kalman filter for measurement integration improved the accuracy and robustness of the scanning process.
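    As a rough illustration of the block-based sparse TSDF idea described above, the sketch below lazily allocates voxel blocks in a hash map and fuses signed-distance samples with a weighted running average. It is a minimal Python sketch under assumed parameters (VOXEL_SIZE, BLOCK_SIZE, TRUNCATION), not the paper's OpenCL implementation.

```python
# Minimal sketch of a block-sparse TSDF volume (illustrative; not the paper's
# OpenCL implementation). Blocks are allocated lazily in a hash map so only
# space near the observed surface consumes memory.
from collections import defaultdict
import numpy as np

VOXEL_SIZE = 0.005      # metres per voxel (assumed value)
BLOCK_SIZE = 8          # voxels per block edge (assumed value)
TRUNCATION = 0.03       # TSDF truncation distance in metres (assumed value)

def _new_block():
    # Each voxel stores (tsdf value, accumulated weight).
    return np.zeros((BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE, 2), dtype=np.float32)

class SparseTSDF:
    def __init__(self):
        self.blocks = defaultdict(_new_block)

    def integrate(self, point, signed_dist):
        """Fuse one signed-distance sample at a 3D point (metres)."""
        d = np.clip(signed_dist, -TRUNCATION, TRUNCATION) / TRUNCATION
        voxel = np.floor(np.asarray(point) / VOXEL_SIZE).astype(int)
        block_key = tuple(voxel // BLOCK_SIZE)   # lazy block allocation
        local = tuple(voxel % BLOCK_SIZE)
        tsdf, weight = self.blocks[block_key][local]
        # Weighted running average, as in standard TSDF fusion.
        self.blocks[block_key][local] = ((tsdf * weight + d) / (weight + 1.0),
                                         weight + 1.0)

# Usage: vol = SparseTSDF(); vol.integrate((0.10, 0.20, 0.50), 0.004)
```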

    Efficient Hybrid Image Warping for High Frame-Rate Stereoscopic Rendering

    Get PDF
    Modern virtual reality simulations require a constant high frame rate from the rendering engine, and may also require very low latency and stereo images. Previous rendering engines for virtual reality applications have exploited spatial and temporal coherence by using image warping to re-use previous frames or to render a stereo pair at lower cost than running the full render pipeline twice. However, these previous approaches have shown artifacts or have not scaled well with image size. We present a new image-warping algorithm with several novel contributions: an adaptive grid generation algorithm for the proxy geometry used in image warping; a low-pass hole-filling algorithm to address disocclusion; and support for transparent surfaces by efficiently ray casting transparent fragments stored in the per-pixel linked lists of an A-buffer. We evaluate our algorithm with a variety of challenging test cases. The results show that it achieves better-quality image warping than state-of-the-art techniques and that it can support transparent surfaces effectively. Finally, we show that our algorithm can achieve image warping at rates suitable for practical use in a variety of applications on modern virtual reality equipment.
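    The low-pass hole-filling step mentioned above can be illustrated with a simple diffusion-style fill: pixels left empty by disocclusion are repeatedly replaced by the average of their valid neighbours. This is a hedged sketch of the general idea, not the paper's algorithm; fill_holes and its parameters are illustrative.

```python
# Illustrative low-pass hole filling after image warping (not the paper's
# exact algorithm): holes are filled iteratively from valid neighbours.
import numpy as np

def fill_holes(image, valid, max_iters=32):
    """image: HxWx3 float array; valid: HxW bool mask of successfully warped pixels."""
    img = image.copy()
    valid = valid.copy()
    for _ in range(max_iters):
        if valid.all():
            break
        # Sum colours and counts over the 4-neighbourhood of every pixel.
        colour_sum = np.zeros_like(img)
        count = np.zeros(valid.shape, dtype=np.float32)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            colour_sum += np.roll(img * valid[..., None], (dy, dx), axis=(0, 1))
            count += np.roll(valid.astype(np.float32), (dy, dx), axis=(0, 1))
        fillable = (~valid) & (count > 0)
        img[fillable] = colour_sum[fillable] / count[fillable][:, None]
        valid |= fillable
    return img
```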

    An Image-Based Tool to Examine Joint Congruency at the Elbow

    Get PDF
    Post-traumatic osteoarthritis commonly occurs as a result of a traumatic event to the articulation. Although the majority of this type of arthritis is preventable, the sequence and mechanism of the interaction between joint injury and the development of osteoarthritis (OA) is not well understood. It is hypothesized that alterations to joint alignment can cause excessive and damaging wear to the cartilage surfaces, resulting in OA. The lack of understanding of both the cause and progression of OA has contributed to the slow development of interventions which can modify the course of the disease. Currently, no techniques have been reported for examining the relationship between joint injury and joint alignment. Therefore, the objective of this thesis was to develop a non-invasive, image-based technique that can be used to assess the congruency and alignment of joints undergoing physiologic motion. An inter-bone distance algorithm was developed and validated to measure joint congruency at the ulnohumeral joint of the elbow. Subsequently, a registration algorithm was created and its accuracy was assessed. This registration algorithm registered 3D bone models reconstructed from x-ray CT to motion capture data of cadaveric upper extremities undergoing simulated elbow flexion. In this way, the relative position and orientation of the 3D bone models could be visualized for any frame of motion. The effect of radial head arthroplasty was used to illustrate the utility of this technique. Once this registration was refined, the inter-bone distance algorithm was integrated so that the joint congruency of the ulnohumeral joint could be visualized during simulated elbow flexion. The effect of collateral ligament repair was examined. This technique proved sensitive enough to detect large changes in joint congruency in spite of only small changes in the motion pathways of the ulnohumeral joint following simulated ligament repair. Efforts were also made in this thesis to translate this research into a clinical environment by examining CT scanning protocols that could reduce the radiation exposure required to image patients' joints. For this study, the glenohumeral joint of the shoulder was examined, as this joint is particularly sensitive to the potential harmful effects of radiation due to its proximity to highly radiosensitive organs. Using the CT scanning techniques examined in this thesis, the effective dose applied to the shoulder was reduced by almost 90% compared to standard clinical CT imaging. In summary, these studies introduced a technique that can be used to examine joint congruency non-invasively and three-dimensionally. The accuracy of this technique was assessed and its ability to predict regions of joint surface interaction was validated against a gold-standard casting approach. Using the techniques developed in this thesis, the complex relationship between injury, loading and mal-alignment as contributors to the development and progression of osteoarthritis in the upper extremity can be examined.
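    A generic version of an inter-bone distance (proximity) map can be sketched as a nearest-neighbour query between the surface vertices of the two bones; the thesis' validated algorithm is not reproduced here, and the contact_threshold value is an assumption.

```python
# Hedged sketch of an inter-bone distance measure: for each vertex on one bone
# surface, find the distance to the nearest vertex on the opposing surface.
import numpy as np
from scipy.spatial import cKDTree

def inter_bone_distance(ulna_vertices, humerus_vertices, contact_threshold=1.5):
    """Return per-vertex distances (mm) and the fraction within the threshold.

    Both inputs are assumed to be Nx3 arrays of bone surface vertices in a
    common coordinate frame (e.g. after registration to motion capture data).
    """
    tree = cKDTree(humerus_vertices)
    distances, _ = tree.query(ulna_vertices)        # nearest-neighbour distances
    contact_fraction = float(np.mean(distances < contact_threshold))
    return distances, contact_fraction
```

    Applied per frame of registered motion data, such a distance map would show how congruency changes through the arc of flexion.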

    AdaSplats: Adaptive Splatting of Point Clouds for Accurate 3D Modeling and Real-time High-Fidelity LiDAR Simulation

    Full text link
    LiDAR sensors provide rich 3D information about their surroundings and are becoming increasingly important for autonomous vehicle tasks such as semantic segmentation, object detection, and tracking. Simulating a LiDAR sensor accelerates the testing, validation, and deployment of autonomous vehicles, while reducing cost and eliminating the risks of testing in real-world scenarios. We address the problem of high-fidelity LiDAR simulation and present a pipeline that leverages real-world point clouds acquired by mobile mapping systems. Point-based geometry representations, more specifically splats, have proven their ability to accurately model the underlying surface in very large point clouds. We introduce an adaptive splat generation method that accurately models the underlying 3D geometry, especially for thin structures. Moreover, we introduce a physics-based, faster-than-real-time LiDAR simulator operating in the splatted model that leverages the GPU parallel architecture with an acceleration structure, while focusing on efficiently handling large point clouds. We test our LiDAR simulation in real-world conditions, showing qualitative and quantitative results compared to basic splatting and meshing techniques, demonstrating the value of our modeling technique.
    Comment: 28 pages, 11 figures, 6 tables
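    A minimal sketch of adaptive splat generation, under the assumption that each point becomes a disc whose normal comes from a local PCA and whose radius follows the local point spacing; this illustrates the general idea, not AdaSplats itself.

```python
# Illustrative adaptive splat generation (assumptions, not AdaSplats):
# normals from local PCA, radii from local point density, so thin structures
# receive smaller splats.
import numpy as np
from scipy.spatial import cKDTree

def make_splats(points, k=16):
    """points: Nx3 array (N >= k assumed). Returns (centers, normals, radii)."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i in range(len(points)):
        nbrs = points[idx[i]] - points[idx[i]].mean(axis=0)
        # The direction of smallest variance approximates the surface normal.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    radii = dists[:, 1:].mean(axis=1)   # radius ~ mean neighbour spacing
    return points, normals, radii
```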

    Visualisation of multi-dimensional medical images with application to brain electrical impedance tomography

    Get PDF
    Medical imaging plays an important role in modern medicine. With the increasing complexity and information presented by medical images, visualisation is vital for medical research and clinical applications to interpret the information presented in these images. The aim of this research is to investigate improvements to medical image visualisation, particularly for multi-dimensional medical image datasets. A recently developed medical imaging technique known as Electrical Impedance Tomography (EIT) is used as a demonstration. To fulfil this aim, three main efforts are included in this work. First, a novel scheme for the processing of brain EIT data with SPM (Statistical Parametric Mapping) to detect ROI (Regions of Interest) in the data is proposed based on a theoretical analysis. To evaluate the feasibility of this scheme, two types of experiments are carried out: one is implemented with simulated EIT data, and the other is performed with human brain EIT data under visual stimulation. The experimental results demonstrate that SPM is able to localise the expected ROI in EIT data correctly, and that it is reasonable to use the balloon hemodynamic change model to simulate the impedance change during brain function activity. Secondly, to deal with the absence of human morphology information in EIT visualisation, an innovative landmark-based registration scheme is developed to register brain EIT images with a standard anatomical brain atlas. Finally, a new task typology model is derived for task exploration in medical image visualisation, and a task-based system development methodology is proposed for the visualisation of multi-dimensional medical images. As a case study, a prototype visualisation system, named EIT5DVis, has been developed following this methodology to visualise five-dimensional brain EIT data. The EIT5DVis system is able to accept visualisation tasks through a graphical user interface; apply appropriate methods to analyse tasks, including the ROI detection approach and registration scheme mentioned above; and produce various visualisations.
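    Landmark-based registration of an EIT image to an anatomical atlas can be illustrated with a standard rigid Procrustes/Kabsch alignment between corresponding landmark sets; this is a generic sketch, not the thesis' specific scheme.

```python
# Rigid landmark registration (Kabsch algorithm): find rotation R and
# translation t that best map source landmarks onto target landmarks.
import numpy as np

def register_landmarks(source, target):
    """source, target: Nx3 arrays of corresponding landmarks (N >= 3)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```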

    Image informatics strategies for deciphering neuronal network connectivity

    Get PDF
    Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Among the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium-bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardisation of both image acquisition and image analysis, it has become possible to extract statistically relevant readout from such networks. Here, we focus on the current state-of-the-art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.

    Parzsweep: A Novel Parallel Algorithm for Volume Rendering of Regular Datasets

    Get PDF
    The sweep paradigm for volume rendering has previously been applied successfully to irregular grids. This thesis describes a parallel volume rendering algorithm called PARZSweep for regular grids that utilizes the sweep paradigm. In the sweep paradigm, a plane sweeps the data volume parallel to the viewing direction. As the sweep proceeds in increasing order of z, the faces incident on the vertices are projected onto the viewing volume to contribute to the image. The sweeping ensures that all faces are projected in the correct order, and the image thus obtained is very accurate in its details. PARZSweep is an extension of a serial algorithm for regular grids called RZSweep. The hypothesis of this research is that a parallel version of RZSweep can be designed and implemented which will utilize multiple processors to reduce rendering times. PARZSweep follows an approach called image-based task scheduling, or tiling. This approach divides the image space into tiles and allocates each tile to a processor for individual rendering; the sub-images are then composited to form the complete final image. PARZSweep uses a shared-memory architecture in order to take advantage of inherent cache coherency for faster communication between processors. Experiments were conducted comparing RZSweep and PARZSweep with respect to pre-rendering times, rendering times and image quality. RZSweep and PARZSweep have approximately the same pre-rendering costs and produce exactly the same images, while PARZSweep substantially reduced rendering times. PARZSweep was evaluated for scalability with respect to the number of tiles and the number of processors. Scalability results were disappointing due to uneven data distribution.
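    Image-based task scheduling (tiling) as described above can be sketched as follows: the image plane is split into tiles, each tile is rendered independently, and the sub-images are composited into the final frame. The worker pool and render_tile placeholder below are illustrative; the thesis uses a shared-memory architecture rather than separate processes.

```python
# Sketch of tiling: split the image plane into tiles, render each tile
# independently, and composite the sub-images into the final frame.
import numpy as np
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 1024, 1024, 256   # assumed image and tile sizes

def render_tile(tile_origin):
    x0, y0 = tile_origin
    # Placeholder: a real renderer would sweep the volume for this tile only.
    return tile_origin, np.zeros((TILE, TILE, 3), dtype=np.float32)

def render_frame():
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
    with Pool() as pool:
        for (x0, y0), sub_image in pool.map(render_tile, tiles):
            frame[y0:y0 + TILE, x0:x0 + TILE] = sub_image   # composite sub-images
    return frame
```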

    Robotic Cameraman for Augmented Reality based Broadcast and Demonstration

    Get PDF
    In recent years, a number of large enterprises have gradually begun to use various Augmented Reality technologies to improve the audience's view of their products. Among them, the creation of an immersive virtual interactive scene through projection has received extensive attention; this technique is referred to as projection SAR, short for projection spatial augmented reality. However, as existing projection-SAR systems are immobile and have a limited working range, they are difficult to adopt in everyday settings. Therefore, this thesis proposes a technically feasible optimization scheme so that projection SAR can be practically applied to AR broadcasting and demonstrations. Based on three main techniques required by state-of-the-art projection SAR applications, this thesis presents a novel mobile projection SAR cameraman for AR broadcasting and demonstration. Firstly, by combining a CNN scene parsing model with multiple contour extractors, the proposed contour extraction pipeline can detect the optimal contour information even in non-HD or blurred images. This algorithm reduces the dependency on high-quality visual sensors and addresses the problem of low contour extraction accuracy in motion-blurred images. Secondly, a plane-based visual mapping algorithm is introduced to overcome the difficulty of visual mapping in low-texture scenarios. Finally, a complete process for designing the projection SAR cameraman robot is introduced. This part solves three main problems in mobile projection-SAR applications: (i) a new method for marking contours on the projection model is proposed to replace the model rendering process; by combining contour features and geometric features, users can easily identify objects on a colourless model. (ii) a camera initial pose estimation method is developed based on visual tracking algorithms, which can register the starting pose of the robot to the whole scene in Unity3D. (iii) a novel data transmission approach is introduced to establish a link between the external robot and the robot in the Unity3D simulation workspace. This enables the robotic cameraman to simulate its trajectory in the Unity3D simulation workspace and project the correct virtual content. Our proposed mobile projection SAR system makes significant contributions to the academic value and practicality of existing projection SAR techniques. It first solves the problem of limited working range: when the system is running in a large indoor scene, it can follow the user and project dynamic interactive virtual content automatically instead of increasing the number of visual sensors. It also creates a more immersive experience for the audience, since it supports more body gestures and richer virtual-real interactive plays. Lastly, a mobile system does not require up-front frameworks, is cheaper, and offers the public an innovative choice for indoor broadcasting and exhibitions.
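    The idea of combining a learned segmentation with classical contour extractors can be sketched with OpenCV; the segment_scene placeholder stands in for a CNN scene-parsing model, and this is only an assumed illustration, not the thesis' pipeline.

```python
# Hedged sketch: derive a segmentation mask, extract edges, and keep the
# largest contour as the object outline for projection mapping.
import cv2
import numpy as np

def segment_scene(image):
    # Placeholder mask: in practice this would come from a CNN scene-parsing model.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def extract_best_contour(image):
    mask = segment_scene(image)
    edges = cv2.Canny(mask, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep the largest contour as the dominant object outline.
    return max(contours, key=cv2.contourArea)
```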