    Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

    Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that must be processed, real-time rendering (i.e. more than 25 frames per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the need to maintain screen-space accuracy for a realistic representation. Moreover, users want to interact with the model based on semantic information, which must be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5D digital surface models (DSMs) and 3D point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models. The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The proposed model is shown to support real-time rendering of very large and complex models with pixel-accurate detail. Moreover, the necessary preprocessing is scalable and fast. For 3D point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. The rendering of the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes. To automatically derive a city model from a DSM, I propose a constrained mesh simplification. In addition to the geometric distance between the simplified and the original model, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully. Finally, I present a method to combine semantic information with complex geometric models. My approach links the semantic entities to the geometric entities on the fly via coarser proxy geometries which carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step. All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
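
    For illustration, the core of such a quadtree LOD scheme is the per-frame selection of the coarsest tiles whose simplification error projects to less than a pixel on screen. The following sketch shows that selection loop; the node layout, error metric, and threshold are illustrative assumptions, not the thesis's actual implementation.

        # Hypothetical sketch of screen-space-error-driven quadtree LOD
        # selection; node fields and the error projection are assumptions.
        import math
        from dataclasses import dataclass, field

        @dataclass
        class QuadtreeNode:
            center: tuple           # (x, y, z) world-space centre of the tile
            extent: float           # half-width of the tile's bounding region
            geometric_error: float  # max simplification error of this LOD (world units)
            children: list = field(default_factory=list)

        def screen_space_error(node, camera_pos, fov_y, viewport_height):
            """Project the node's geometric error to pixels at its distance."""
            dx = [node.center[i] - camera_pos[i] for i in range(3)]
            distance = max(math.sqrt(sum(d * d for d in dx)) - node.extent, 1e-6)
            pixels_per_unit = viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))
            return node.geometric_error * pixels_per_unit

        def select_lod(node, camera_pos, fov_y, viewport_height, tolerance_px=1.0):
            """Collect the coarsest tiles whose projected error is below tolerance."""
            if not node.children or screen_space_error(
                    node, camera_pos, fov_y, viewport_height) <= tolerance_px:
                return [node]   # this approximation is accurate enough on screen
            tiles = []
            for child in node.children:
                tiles += select_lod(child, camera_pos, fov_y, viewport_height, tolerance_px)
            return tiles

        # Tiny demo: a root tile with four finer children.
        root = QuadtreeNode((0.0, 0.0, 0.0), 1000.0, 8.0,
                            [QuadtreeNode((dx, dy, 0.0), 500.0, 2.0)
                             for dx in (-500.0, 500.0) for dy in (-500.0, 500.0)])
        print(len(select_lod(root, (0.0, 0.0, 3000.0),
                             math.radians(60), 1080)))  # 4: refines into children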

    Multi-Modal Mean-Fields via Cardinality-Based Clamping

    Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community as an efficient way to solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field with a weighted mixture of such distributions that similarly minimizes the KL-divergence to the true posterior. By introducing two new ideas, namely conditioning on groups of variables instead of single ones, and selecting such groups using a parameter of the conditional random field potentials that we identify with the temperature in the sense of statistical physics, we can perform this minimization efficiently. Our extension of the clamping method proposed in previous works allows us both to produce a more descriptive approximation of the true posterior and, inspired by the diverse-MAP paradigm, to fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on Mean Field.
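
    For context, plain Mean Field approximates the posterior by coordinate-ascent updates on fully factorized marginals; the paper's cardinality-based clamping and mixture fitting build on top of this baseline. A minimal sketch of that baseline on a small pairwise binary model (the potentials are arbitrary example values, not taken from the paper):

        # Illustrative sketch of plain mean-field coordinate ascent on a
        # small pairwise binary model (chain CRF); not the paper's
        # cardinality-based clamping, just the baseline it extends.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 6                                    # number of binary variables
        unary = rng.normal(size=(n, 2))          # theta_i(x_i)
        pair = rng.normal(size=(n, n, 2, 2))     # theta_ij(x_i, x_j)
        pair = (pair + pair.transpose(1, 0, 3, 2)) / 2   # symmetrize
        edges = [(i, i + 1) for i in range(n - 1)]       # chain topology

        q = np.full((n, 2), 0.5)                 # fully factorized marginals q_i

        for _ in range(100):                     # coordinate-ascent sweeps
            for i in range(n):
                logits = unary[i].copy()
                for (a, b) in edges:
                    if a == i:
                        logits += pair[a, b] @ q[b]     # E_{q_b}[theta_ab(x_i, .)]
                    elif b == i:
                        logits += pair[a, b].T @ q[a]   # E_{q_a}[theta_ab(., x_i)]
                logits -= logits.max()                  # numerical stability
                q[i] = np.exp(logits) / np.exp(logits).sum()

        print("approximate marginals:\n", q)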

    Quad Meshing

    Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry-processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing during the last several years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.
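
    As a concrete instance of the mesh-quality measures such a report covers, the scaled Jacobian is a standard per-element quality metric for quads; the sketch below is a textbook illustration, not code from the report.

        # Minimum per-corner scaled Jacobian of a planar quad, a common
        # quad-quality metric; illustrative only.
        import numpy as np

        def scaled_jacobian(quad):
            """quad: (4, 2) corners in counter-clockwise order.
            Returns a value in [-1, 1]; 1 means a right angle at every
            corner, values <= 0 flag a degenerate or inverted element."""
            quad = np.asarray(quad, dtype=float)
            worst = 1.0
            for k in range(4):
                e1 = quad[(k + 1) % 4] - quad[k]   # edge to the next corner
                e2 = quad[(k - 1) % 4] - quad[k]   # edge to the previous corner
                area = e1[0] * e2[1] - e1[1] * e2[0]   # signed parallelogram area
                worst = min(worst, area / (np.linalg.norm(e1) * np.linalg.norm(e2)))
            return worst

        print(scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))      # 1.0: unit square
        print(scaled_jacobian([(0, 0), (1, 0), (1, 1), (0.9, 0.1)]))  # < 0: bad corner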

    Flow pattern analysis for magnetic resonance velocity imaging

    Blood flow in the heart is highly complex. Although blood flow patterns have been investigated by both computational modelling and invasive/non-invasive imaging techniques, their evolution and intrinsic connection with cardiovascular disease have yet to be explored. Magnetic resonance (MR) velocity imaging provides a comprehensive measurement of the multi-directional in vivo flow distribution, so that detailed quantitative analysis of flow patterns is now possible. However, direct visualisation or quantification of vector fields is of little clinical use, especially for inter-subject or serial comparison of changes in flow patterns due to the progression of disease or in response to therapeutic measures. In order to achieve a comprehensive and integrated description of flow in health and disease, it is necessary to characterise and model both normal and abnormal flows and their effects. To accommodate the diversity of flow patterns in relation to morphological and functional changes, this thesis describes an approach that detects salient topological features prior to analytical assessment of dynamical indices of the flow patterns. To improve the accuracy of quantitative analysis of the evolution of topological flow features, it is essential to restore the original flow fields so that critical points associated with salient flow features can be more reliably detected. We propose a novel framework for the restoration, abstraction, extraction and tracking of flow features such that their dynamic indices can be accurately tracked and quantified. The restoration method is formulated as a constrained optimisation problem to remove the effects of noise and to improve the consistency of the MR velocity data. A computational scheme is derived from the First Order Lagrangian Method for solving the optimisation problem. After restoration, flow abstraction is applied to partition the entire flow field into clusters, each of which is represented by a local linear expansion of its velocity components. This process not only greatly reduces the amount of data required to encode the velocity distribution but also permits an analytical representation of the flow field from which critical points associated with salient flow features can be accurately extracted. Once the critical points are extracted, phase-portrait theory can be applied to classify them as attracting/repelling foci, attracting/repelling nodes, planar vortices, or saddles. In this thesis, we focus on vortical flow features formed in diastole. To track the movement of the vortices within a cardiac cycle, a tracking algorithm based on relaxation labelling is employed. The constraints and parameters used in the tracking algorithm are designed using the characteristics of the vortices. The proposed framework is validated with both simulated and in vivo data acquired from patients undergoing sequential MR examination following myocardial infarction. The main contributions of the thesis are the new vector-field restoration and flow-feature abstraction methods proposed. They allow the accurate tracking and quantification of dynamic indices associated with salient features, so that inter- and intra-subject comparisons can be made more easily. This provides further insight into the evolution of blood flow patterns and permits the establishment of links between blood flow patterns and the localised genesis and progression of cardiovascular disease.
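
    The phase-portrait classification mentioned above amounts to inspecting the eigenvalues of the velocity-gradient (Jacobian) matrix at each critical point: a complex pair indicates rotation (focus or vortex), real eigenvalues of opposite sign a saddle, and real eigenvalues of one sign a node. A schematic 2D version of this standard theory (not the thesis's implementation):

        # Schematic phase-portrait classification of a 2D critical point
        # from the eigenvalues of the local Jacobian; illustrative only.
        import numpy as np

        def classify_critical_point(J, tol=1e-9):
            """Classify a 2x2 Jacobian at a point where the velocity vanishes."""
            eigvals = np.linalg.eigvals(np.asarray(J, dtype=float))
            re, im = eigvals.real, eigvals.imag
            if np.all(np.abs(im) > tol):          # complex pair: rotation
                if np.all(re < -tol):
                    return "attracting focus"
                if np.all(re > tol):
                    return "repelling focus"
                return "planar vortex (center)"
            if re[0] * re[1] < -tol:              # real, opposite signs
                return "saddle"
            if np.all(re < -tol):
                return "attracting node"
            if np.all(re > tol):
                return "repelling node"
            return "degenerate"

        print(classify_critical_point([[0, -1], [1, 0]]))   # planar vortex (center)
        print(classify_critical_point([[-1, 0], [0, -2]]))  # attracting node
        print(classify_critical_point([[2, 0], [0, -1]]))   # saddle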

    Algorithms for sliding block codes - An application of symbolic dynamics to information theory


    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies, which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shear). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be related directly to the universal dark matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a significant portion of which is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, the celestial sphere, and the 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell 520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering what was then the largest global dark matter mass map. The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary, complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark matter cartography and the cosmological insights such advances may provide.
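
    On the Euclidean plane, the linear shear-to-convergence relation mentioned above is conventionally inverted in Fourier space with the Kaiser-Squires estimator, which Bayesian variational approaches of this kind regularise. A minimal flat-sky sketch of that standard estimator follows (synthetic data; not the thesis's algorithm):

        # Minimal flat-sky Kaiser-Squires inversion: recover convergence
        # (kappa) from shear (gamma) in Fourier space, using the standard
        # kernel D(k) = (k1^2 - k2^2 + 2i k1 k2) / |k|^2, which satisfies
        # |D| = 1 away from k = 0. Illustrative sketch with synthetic data.
        import numpy as np

        def ks_kernel(n):
            k1, k2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
            k_sq = k1**2 + k2**2
            k_sq[0, 0] = 1.0              # avoid 0/0; the mean mode is unconstrained
            return (k1**2 - k2**2 + 2j * k1 * k2) / k_sq

        def kaiser_squires(gamma):
            """gamma: complex (N, N) shear map -> real (N, N) convergence map."""
            D = ks_kernel(gamma.shape[0])
            kappa_hat = np.conj(D) * np.fft.fft2(gamma)   # |D| = 1, so D^{-1} = conj(D)
            kappa_hat[0, 0] = 0.0         # zero the unconstrained mean mode
            return np.fft.ifft2(kappa_hat).real

        # Round-trip check on a synthetic convergence field (noise-free,
        # so the estimator recovers kappa up to its mean).
        rng = np.random.default_rng(1)
        kappa_true = rng.normal(size=(64, 64))
        gamma = np.fft.ifft2(ks_kernel(64) * np.fft.fft2(kappa_true))
        print(np.allclose(kaiser_squires(gamma),
                          kappa_true - kappa_true.mean(), atol=1e-8))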