
    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Conservative occlusion culling for urban visualization using a slice-wise data structure

    In this paper, we propose a framework for urban visualization using a conservative from-region visibility algorithm based on occluder shrinking. The visible geometry in a typical urban walkthrough mainly consists of partially visible buildings. Occlusion-culling algorithms in which the granularity is buildings process these partially visible buildings as if they were completely visible. To address the problem of partial visibility, we propose a data structure, called the slice-wise data structure, that represents buildings in terms of slices parallel to the coordinate axes. We observe that the visible parts of the objects usually have simple shapes, and this observation establishes the basis for occlusion culling in which the granularity is individual slices. The proposed slice-wise data structure has minimal storage requirements. We also propose to shrink general 3D occluders in a scene to find volumetric occlusion. Empirical results show that a significant increase in frame rates and a decrease in the number of processed polygons can be achieved with the proposed slice-wise occlusion culling, compared to an occlusion-culling method in which the granularity is individual buildings.
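
    The per-slice culling idea lends itself to a compact sketch. The toy below works in a 2D top-down view as a stand-in for the paper's 3D method: each facade is split into axis-parallel slices, and a slice is rejected when a nearer occluder covers its whole angular extent from the eye. This is only an illustrative simplification under invented names; the actual method uses shrunk 3D occluders inside a from-region framework.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]  # a wall slice in a top-down (2D) city view

def angular_interval(seg: Segment, eye: Point) -> Tuple[float, float]:
    """Angular extent of a slice as seen from the eye point."""
    a = math.atan2(seg[0][1] - eye[1], seg[0][0] - eye[0])
    b = math.atan2(seg[1][1] - eye[1], seg[1][0] - eye[0])
    return (min(a, b), max(a, b))

def dist(seg: Segment, eye: Point) -> float:
    mx, my = (seg[0][0] + seg[1][0]) / 2, (seg[0][1] + seg[1][1]) / 2
    return math.hypot(mx - eye[0], my - eye[1])

def visible_slices(slices: List[Segment], occluders: List[Segment],
                   eye: Point) -> List[Segment]:
    """Cull at slice granularity: a slice is rejected only when one nearer
    occluder covers its whole angular interval (conservative: partially
    covered slices are kept and rendered)."""
    kept = []
    for s in slices:
        lo, hi = angular_interval(s, eye)
        hidden = any(
            o_lo <= lo and hi <= o_hi and dist(o, eye) < dist(s, eye)
            for o in occluders
            for o_lo, o_hi in [angular_interval(o, eye)]
        )
        if not hidden:
            kept.append(s)
    return kept

# A slice hiding directly behind a long facade is culled; the slice
# peeking out past the facade's edge survives.
facade = ((0.0, 5.0), (10.0, 5.0))
behind = ((2.0, 8.0), (4.0, 8.0))
peeking = ((9.0, 8.0), (14.0, 8.0))
print(visible_slices([behind, peeking], [facade], eye=(5.0, 0.0)))
```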

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view and the lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their targets (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intraoperative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding the setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data.
    The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.
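
    The data-driven model described above can be pictured as a 3D encoder-decoder that maps a voxelized preoperative organ plus the sparse intraoperative surface to a per-voxel displacement field, supervised by synthetic FEM deformations. The PyTorch sketch below is a minimal illustration under assumed layer counts, channel sizes and grid resolution; it is not the thesis architecture.

```python
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    """Illustrative 3D encoder-decoder: input channels hold the voxelized
    preoperative liver and the sparse intraoperative surface; output is a
    3-channel per-voxel displacement field. (All sizes are assumptions.)"""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(2, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 3, 4, stride=2, padding=1),  # (dx, dy, dz)
        )

    def forward(self, vol):             # vol: (B, 2, D, H, W)
        return self.dec(self.enc(vol))  # (B, 3, D, H, W) displacement field

# One synthetic training step: supervise against a simulated field.
net = DisplacementNet()
x = torch.randn(1, 2, 64, 64, 64)   # [liver occupancy, partial surface]
gt = torch.randn(1, 3, 64, 64, 64)  # ground-truth displacement (from FEM)
loss = nn.functional.mse_loss(net(x), gt)
loss.backward()
```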

    Enabling Viewpoint Learning through Dynamic Label Generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving the label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data which, to our knowledge, will form the largest viewpoint quality dataset available.
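
    The dynamic-label idea can be illustrated compactly: when several viewpoints are (near-)equally good, e.g. for symmetric objects, the training target is chosen per step as the candidate closest to the network's current prediction, so the loss never averages contradictory labels. A minimal sketch, assuming unit view directions as outputs; function names and tensor shapes are invented.

```python
import torch

def dynamic_label(pred: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """Among several equally valid target viewpoints (e.g. symmetric views),
    pick the one nearest the network's current prediction.
    pred: (B, 3) predicted view directions; candidates: (B, K, 3)."""
    d = torch.linalg.norm(candidates - pred.unsqueeze(1), dim=-1)  # (B, K)
    idx = d.argmin(dim=1)                                          # (B,)
    return candidates[torch.arange(pred.shape[0]), idx]            # (B, 3)

# Hedged demo: with 5 equally good candidate views per shape, the loss is
# computed against whichever candidate the network is already closest to.
pred = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
cand = torch.nn.functional.normalize(torch.randn(8, 5, 3), dim=-1)
loss = torch.nn.functional.mse_loss(pred, dynamic_label(pred, cand))
print(loss.item())
```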

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment: the naturally three-dimensional environment has been abstracted into a two-dimensional representation, populated with simple geometrical shapes and symbols. However, an abstract representation requires a map reading ability. Modern technology has reached the level where maps can be expressed in digital form, having selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed, developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and the rendering rates are excellent on 3D-hardware-enabled devices. Such 3D maps can also be transferred and rendered on the fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on the interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits usability.
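
    The rendering solution combines three techniques, and the sketch below lines them up in one frame loop: cull invisible blocks, pick a detail level by distance, and stream missing geometry from storage. The thresholds and the callables (visible, request_load, draw) are placeholders, not the dissertation's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Block:
    """A chunk of the static city model with precomputed detail levels."""
    center: Tuple[float, float, float]
    lods: Dict[int, object]  # detail level -> mesh handle, None until loaded

def choose_lod(dist: float, thresholds=(50.0, 200.0, 800.0)) -> int:
    """Distance-based level-of-detail selection (thresholds are made up)."""
    for level, t in enumerate(thresholds):
        if dist < t:
            return level
    return len(thresholds)

def render_frame(blocks, eye, visible: Callable, request_load: Callable,
                 draw: Callable) -> None:
    """One frame combining the three techniques the dissertation names:
    visibility culling, LOD management and out-of-core streaming."""
    for b in blocks:
        if not visible(b, eye):                # visibility culling
            continue
        d = sum((c - e) ** 2 for c, e in zip(b.center, eye)) ** 0.5
        lod = choose_lod(d)                    # LOD management
        mesh = b.lods.get(lod)
        if mesh is None:
            request_load(b, lod)               # out-of-core: async fetch
            # fall back to whatever detail level is already resident
            mesh = next((m for m in b.lods.values() if m is not None), None)
        if mesh is not None:
            draw(mesh)
```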

    A Growth-Based Approach to the Automatic Generation of Navigation Meshes

    Providing an understanding of space in game and simulation environments is one of the major challenges associated with moving artificially intelligent characters through these environments. Using some form of navigation mesh has become the standard method of providing characters with a representation of the walkable space in a game environment. There is currently no standardized best method of producing a navigation mesh; in fact, producing an optimal navigation mesh has been shown to be an NP-hard problem. Current approaches are a patchwork of divergent methods, all of which have issues: they are slow (e.g., the best-looking navigation meshes have traditionally been produced by hand, which is time consuming), they generate substandard-quality meshes (e.g., many automatic mesh production algorithms result in highly triangulated meshes that pose problems for character navigation), or they yield meshes with gaps in areas that should be covered (e.g., existing growth-based methods are unable to adapt to non-axis-aligned geometry and as such tend to provide a poor representation of the walkable space in complex environments). We introduce the Planar Adaptive Space Filling Volumes (PASFV) algorithm, the Volumetric Adaptive Space Filling Volumes (VASFV) algorithm, and the Iterative Wavefront Edge Expansion Cell Decomposition (Wavefront) algorithm. These algorithms provide growth-based spatial decompositions for navigation mesh generation in either 2D (PASFV) or 3D (VASFV). They generate quick, on-demand decompositions (Wavefront), use quad/cube-based spatial structures to provide more regular regions in the navigation mesh instead of triangles, and offer full-coverage decompositions that avoid gaps in the navigation mesh by adapting to non-axis-aligned geometry. We have shown experimentally that the decompositions offered by PASFV and VASFV are superior in character navigation ability, number of regions, and coverage in comparison to the existing and commonly used techniques of Space Filling Volumes, Hertel-Mehlhorn decomposition, Delaunay Triangulation, and Automatic Path Node Generation. Finally, we show that our Wavefront algorithm retains the superior performance of the PASFV and VASFV algorithms while providing faster decompositions that contain fewer degenerate and near-degenerate regions. Unlike traditional navigation mesh generation techniques, the PASFV and VASFV algorithms have a real-time extension (Dynamic Adaptive Space Filling Volumes, DASFV) which allows the navigation mesh to adapt to changes in the geometry of the environment at runtime. In addition, it is possible to use a navigation mesh for applications above and beyond character path planning and navigation; these multiple uses help to increase the return on the investment in creating a navigation mesh for a game or simulation environment. In particular, we show how to use a navigation mesh to accelerate collision detection.
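
    A rough flavour of growth-based decomposition fits in a few lines. The grid-based toy below grows regions outward from seed points, one ring of cells at a time, stopping at obstacles and at other regions. PASFV/VASFV additionally keep regions as convex quads/cubes and adapt their edges to non-axis-aligned geometry, which this sketch deliberately omits; all names here are invented.

```python
def grow_regions(walkable, seeds):
    """Grow a region from each seed into unclaimed walkable cells until no
    region can expand further; returns a grid of region ids (the toy
    analogue of a full-coverage spatial decomposition)."""
    h, w = len(walkable), len(walkable[0])
    owner = [[None] * w for _ in range(h)]
    regions = {i: [s] for i, s in enumerate(seeds)}
    for i, (r, c) in enumerate(seeds):
        owner[r][c] = i
    grew = True
    while grew:
        grew = False
        for i, cells in regions.items():
            frontier = []
            for (r, c) in cells:  # collect unclaimed walkable neighbours
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w \
                            and walkable[rr][cc] and owner[rr][cc] is None:
                        frontier.append((rr, cc))
            for (rr, cc) in frontier:  # claim them for this region
                if owner[rr][cc] is None:
                    owner[rr][cc] = i
                    cells.append((rr, cc))
                    grew = True
    return owner

# Two seeds growing around a small obstacle block (0 = unwalkable).
walkable = [[1, 1, 1, 1],
            [1, 0, 0, 1],
            [1, 1, 1, 1]]
print(grow_regions(walkable, seeds=[(0, 0), (2, 3)]))
```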

    Assisted Agent-Based Simulations: Fusing non-player character movement with Space Syntax

    Agent-based simulation is one of the core tools of spatial analysis, utilised to provide an understanding of space when complex parameters come into play, such as how the visible space changes while traversing a building, or what happens when there is a destination to be reached. This type of simulation has a lot in common with techniques used in video games to create movement trajectories for non-player characters. Although these techniques have been developed over the years to provide more realistic and more “human-like” behaviour, they are rarely woven back into analytical and simulation tools. As a first step to remedy that, we developed a new methodology that fuses non-player character movement from computer games with simulation techniques traditionally used for agent-based analysis in Space Syntax. This first attempt utilises a different type of underlying representation of space, known as a navigation mesh. We first examine in detail two traditional techniques utilised in depthmapX agent-based analysis and highlight their strengths and limitations. We then describe how the navigation mesh technique differs from the classic space syntax methods, as well as how it can be combined with them to create hybrid analytical models of movement. The hybrid model developed in this case is a classic space syntax agent assisted by the aforementioned technique. We then tested and evaluated the traditional and new models for their capacity to explore two gallery spaces. The results extracted from the new hybrid simulation model depict agents with more capacity to explore, a significant addition to the traditional space syntax agent-based methods.
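
    The hybrid model can be summarised in one step function: the destination is chosen the way a classic space syntax agent chooses it, from what is currently visible, while locomotion towards it is delegated to game-style navigation-mesh pathfinding instead of straight-line movement. Both helper interfaces below are assumptions, not depthmapX APIs.

```python
import random

def hybrid_agent_step(agent_pos, visible_points, navmesh_path):
    """One step of the hybrid agent (hedged sketch): destination choice
    follows the classic space syntax agent -- pick from the currently
    visible field -- while movement is handed to navigation-mesh
    pathfinding, the technique borrowed from non-player characters.
    navmesh_path(src, dst) is an assumed interface returning waypoints."""
    if not visible_points:
        return [agent_pos]                       # nothing visible: stay put
    destination = random.choice(visible_points)  # space-syntax-style choice
    return navmesh_path(agent_pos, destination)  # NPC-style path following

# Straight-line mover as a stand-in navmesh; a real mesh would route the
# agent around obstacles between the two points.
path = hybrid_agent_step((0, 0), [(3, 4), (5, 1)], lambda a, b: [a, b])
print(path)
```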

    Hierarchical Graphs as Organisational Principle and Spatial Model Applied to Pedestrian Indoor Navigation

    In this thesis, hierarchical graphs are investigated from two different angles – as a general modelling principle for (geo)spatial networks and as a practical means to enhance navigation in buildings. The topics addressed are of interest from a multi-disciplinary point of view, ranging from Computer Science in general over Artificial Intelligence and Computational Geometry in particular to other fields such as Geographic Information Science. Some hierarchical graph models have been previously proposed by the research community, e.g. to cope with the massive size of road networks, or as a conceptual model for human wayfinding. However, there has not yet been a comprehensive, systematic approach for modelling spatial networks with hierarchical graphs. One particular problem is the gap between conceptual models and models which can be readily used in practice. Geospatial data is commonly modelled - if at all - only as a flat graph. Therefore, from a practical point of view, it is important to address the automatic construction of a graph hierarchy based on the predominant data models. The work presented deals with this problem: an automated method for construction is introduced and explained. A particular contribution of my thesis is the proposition to use hierarchical graphs as the basis for an extensible, flexible architecture for modelling various (geo)spatial networks. The proposed approach complements classical graph models very well in the sense that their expressiveness is extended: various graphs originating from different sources can be integrated into a comprehensive, multi-level model. This more sophisticated kind of architecture allows for extending navigation services beyond the borders of one single spatial network to a collection of heterogeneous networks, thus establishing a meta-navigation service. Another point of discussion is the impact of the hierarchy and distribution on graph algorithms; they have to be adapted to properly operate on multi-level hierarchies. By investigating indoor navigation problems in particular, the guiding principles are demonstrated for modelling networks at multiple levels of detail. Complex environments like large public buildings are ideally suited to demonstrate the versatile use of hierarchical graphs and thus to highlight the benefits of the hierarchical approach. Starting from a collection of floor plans, I have developed a systematic method for constructing a multi-level graph hierarchy. The nature of indoor environments, especially their inherent diversity, poses an additional challenge: among others, one must deal with complex, irregular, and/or three-dimensional features. The proposed method is also motivated by practical considerations, such as not only finding shortest/fastest paths across rooms and floors, but also providing descriptions for these paths which are easily understood by people. Beyond this, two novel aspects of using a hierarchy are discussed. The first is an informed heuristic exploiting the specific characteristics of indoor environments in order to enhance classical, general-purpose graph search techniques; as a convenient by-product of this method, clusters such as sections and wings can be detected. The second is to better deal with irregular, complex-shaped regions in a way that instructions can also be provided for these spaces; previous approaches have not considered this problem.
    In summary, the main results of this work are:
    • Hierarchical graphs are introduced as a general spatial data infrastructure. In particular, this architecture allows us to integrate different spatial networks originating from different sources, and a small but useful set of operations is proposed for integrating these networks. In order to work in a hierarchical model, classical graph algorithms are generalised. This finding also has implications for the possible integration of separate navigation services and systems.
    • A novel set of core data structures and algorithms has been devised for modelling indoor environments. They cater to the unique characteristics of these environments and can be specifically used to provide enhanced navigation in buildings. Tested on models of several real buildings from our university, some preliminary but promising results were gained from a prototypical implementation and its application on the models.
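
    The generalisation of flat graph search to a hierarchy can be sketched as follows: plan first over the coarse graph of clusters (sections, wings, floors), then run the fine-grained search restricted to the clusters on that corridor. This is a generic two-level illustration under invented interfaces, not the thesis' data structures.

```python
import heapq

def dijkstra(graph, src, dst):
    """Plain single-level shortest path; graph: node -> {neighbour: cost}.
    Assumes dst is reachable from src (sufficient for this sketch)."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def hierarchical_path(fine, cluster_of, coarse, src, dst):
    """Two-level search: plan over the coarse cluster graph first, then
    refine only inside the clusters on that corridor -- the flavour of
    generalising flat graph algorithms to a multi-level hierarchy."""
    corridor = set(dijkstra(coarse, cluster_of[src], cluster_of[dst]))
    restricted = {u: {v: w for v, w in nbrs.items() if cluster_of[v] in corridor}
                  for u, nbrs in fine.items() if cluster_of[u] in corridor}
    return dijkstra(restricted, src, dst)

# Tiny example: rooms a, b in cluster 0 (one wing), room c in cluster 1.
fine = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0}, "c": {"b": 1.0}}
cluster_of = {"a": 0, "b": 0, "c": 1}
coarse = {0: {1: 1.0}, 1: {0: 1.0}}
print(hierarchical_path(fine, cluster_of, coarse, "a", "c"))
```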

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, with a focus on remote environment visualisation in virtual reality, the effects of the remote environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing for teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using a relatively small bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on the teleoperation flow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' virtual world scale at the beginning and end of a 3-day experiment. The results showed that as operators became better at the task they, as a group, used a different virtual world scale, and that participants' prior video gaming experience also affected the virtual world scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as how their visual priorities shifted as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
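
    The first scaling study contrasts constant with variable rate-mode mapping, and the contrast fits in a few lines. In the hedged sketch below, joystick deflection maps to end-effector velocity, and under variable mapping the gain follows the current virtual-world scale (zoomed in: slower, finer motion). The linear coupling and the gain value are assumptions, not the study's parameters.

```python
def end_effector_rate(joystick, world_scale, base_gain=0.05, variable=True):
    """Rate-mode control: joystick deflection in [-1, 1]^3 becomes an
    end-effector velocity (m/s per axis). With variable mapping, the gain
    scales with the virtual world scale; with constant mapping it does not."""
    gain = base_gain * world_scale if variable else base_gain
    return tuple(axis * gain for axis in joystick)

# Same stick deflection, different commanded speed once the operator
# zooms in (scale 0.5) or out (scale 2.0) under variable mapping.
print(end_effector_rate((0.0, 1.0, 0.0), world_scale=0.5))  # fine motion
print(end_effector_rate((0.0, 1.0, 0.0), world_scale=2.0))  # coarse motion
```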

    Investigation of Shadow Matching for GNSS Positioning in Urban Canyons

    All travel behavior of people in urban areas relies on knowing their position. Obtaining a position has become increasingly easy thanks to the vast popularity of ‘smart’ mobile devices. The main and most accurate positioning technique used in these devices is global navigation satellite systems (GNSS). However, the poor performance of GNSS user equipment in urban canyons is a well-known problem, and it is particularly inaccurate in the cross-street direction. The accuracy in this direction greatly affects many applications, including vehicle lane identification and high-accuracy pedestrian navigation. Shadow matching is a new technique that helps solve this problem by integrating GNSS constellation geometries with information derived from 3D models of buildings. This study brings the shadow matching principle from a simple mathematical model, through experimental proof of concept, system design and demonstration, algorithm redesign, comprehensive experimental tests, real-time demonstration and feasibility assessment, to a workable positioning solution. In this thesis, GNSS performance in urban canyons is numerically evaluated using 3D models. Then, a generic two-phase, 6-step shadow matching system is proposed, implemented and tested against both geodetic and smartphone-grade GNSS receivers. A Bayesian-technique-based shadow matching algorithm is proposed to account for NLOS and diffracted signal reception. A particle filter is designed to enable multi-epoch kinematic positioning. Finally, shadow matching is adapted and implemented as a mobile application (app), with a feasibility assessment conducted. Results from the investigation confirm that conventional ranging-based GNSS is not adequate for reliable urban positioning. The designed shadow matching positioning system is demonstrated to be complementary to conventional GNSS in improving urban positioning accuracy. Each of the three generations of the shadow matching algorithm is demonstrated to provide better positioning performance, supported by comprehensive experiments. In summary, shadow matching has been demonstrated to significantly improve urban positioning accuracy; it shows great potential to revolutionize urban positioning from street level to lane level, and possibly meter level.
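
    The core shadow matching principle fits in a short sketch: candidate positions are scored by how well the satellite visibility predicted from a 3D building model agrees with which signals the receiver actually observes as strong (direct) versus weak or absent (blocked), and the best-scoring position wins. The boolean scoring below is a toy stand-in for the thesis' probabilistic, Bayesian scoring, and predicted_visible is an assumed interface.

```python
def shadow_match(candidates, satellites, predicted_visible, measured_strong):
    """Score each candidate position by agreement between model-predicted
    satellite visibility and the measured signal state; return the best.
    predicted_visible(pos, sat) -> bool would come from ray-testing a 3D
    city model (assumed interface); measured_strong is the set of satellites
    the receiver observes with a strong, likely direct, signal."""
    best_pos, best_score = None, -1
    for pos in candidates:
        score = sum(1 for sat in satellites
                    if predicted_visible(pos, sat) == (sat in measured_strong))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

# Toy street cross-section: satellite "E" is only visible from the east
# side of the street, so a strong "E" signal pulls the fix east -- exactly
# the cross-street information that ranging-based GNSS lacks.
sats = ["E", "W"]
vis = lambda pos, sat: (sat == "E") == (pos == "east")
print(shadow_match(["west", "east"], sats, vis, measured_strong={"E"}))
```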