Topological correction of hypertextured implicit surfaces for ray casting
Hypertextures are a useful modelling tool in that they
can add three-dimensional detail to the surface of otherwise
smooth objects. Hypertextures can be rendered as implicit
surfaces, resulting in objects with a complex but well-defined
boundary. However, representing a hypertexture as
an implicit surface often results in many small parts being
detached from the main surface, turning an object into a
disconnected set. Depending on the context, this can detract from the realism of a scene: one usually does not expect a solid object to have clouds of smaller objects floating around it. We present a topology correction technique, integrated into a ray casting algorithm for hypertextured implicit surfaces, that detects and removes all the surface components that have become disconnected from the main surface. Our method works with implicit surfaces that are C2 continuous and uses Morse theory to find the critical points of the surface. The method follows the separatrix lines joining the critical points to isolate disconnected components.
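A minimal sketch of the Morse-theoretic first step described above: locate the critical points (where grad f = 0) of a smooth implicit field and classify them by the eigenvalue signs of the Hessian. The two-blob Gaussian field, iso-level 0.5 and seed points are illustrative assumptions, not the paper's actual hypertexture.

```python
import numpy as np
from scipy.optimize import root

CENTERS = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])  # two Gaussian blobs

def field(p):
    return sum(np.exp(-np.sum((p - c) ** 2)) for c in CENTERS)

def grad(p):
    return sum(-2.0 * (p - c) * np.exp(-np.sum((p - c) ** 2)) for c in CENTERS)

def hessian(p, h=1e-5):
    # Central-difference Hessian, adequate for a C2 field.
    H = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3); e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2.0 * h)
    return H

# Solve grad f = 0 from a few seeds; deduplicate converged points.
critical = []
for seed in [np.zeros(3), np.array([3.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])]:
    sol = root(grad, seed)
    if sol.success and not any(np.allclose(sol.x, q, atol=1e-4) for q in critical):
        critical.append(sol.x)

# All Hessian eigenvalues negative => maximum; mixed signs => saddle.
kinds = ['max' if np.all(np.linalg.eigvalsh(hessian(p)) < 0) else 'saddle'
         for p in critical]

# A saddle whose field value lies below the iso-level joins two lobes that
# the iso-surface does not connect: the components are disconnected.
saddle = critical[kinds.index('saddle')]
disconnected = field(saddle) < 0.5
```

With the blobs three units apart, the saddle value 2·e^(-2.25) ≈ 0.21 falls below the iso-level, so the two components are detached, exactly the situation the separatrix-following step is designed to detect.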
Accurate geometry reconstruction of vascular structures using implicit splines
3-D visualization of blood vessels from standard medical datasets (e.g. CT or MRI) plays an important role in many clinical situations, including the diagnosis of vessel stenosis, virtual angioscopy, vascular surgery planning and computer-aided vascular surgery. However, unlike other human organs, the vasculature is a very complex network of vessels, which makes its 3-D visualization a very challenging task. Conventional techniques of medical volume data visualization are in general not well suited to the above-mentioned tasks. This problem can be solved by reconstructing vascular geometry. Although various methods have been proposed for reconstructing vascular structures, most of these approaches are model-based and are usually too idealized to correctly represent the actual variation presented by the cross-sections of a vascular structure. In addition, the underlying shape is usually expressed as polygonal meshes or in parametric forms, which is very inconvenient for implementing branching ramifications. As a result, the reconstructed geometries are not suitable for computer-aided diagnosis and computer-guided minimally invasive vascular surgery. In this research, we develop a set of techniques for the geometry reconstruction of vasculatures, including segmentation, modelling, reconstruction, exploration and rendering of vascular structures. The reconstructed geometry not only greatly enhances the visual quality of 3-D vascular structures, but also provides an actual geometric representation of vasculatures, which offers various benefits. The key findings of this research are as follows: 1. A localized hybrid level-set segmentation method has been developed to extract vascular structures from 3-D medical datasets. 2. A skeleton-based implicit modelling technique has been proposed and applied to the reconstruction of vasculatures, which can achieve an accurate geometric reconstruction of the vascular structures as implicit surfaces in an analytical form. 3. An acceleration technique using modern GPUs (Graphics Processing Units) has been devised and applied to rendering the implicitly represented vasculatures. 4. The implicitly modelled vasculature is investigated for the application of virtual angioscopy.
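A hedged sketch of the skeleton-based implicit modelling idea summarized above: the vessel is the iso-surface of a scalar field summed over skeleton segments, so branch ramifications blend analytically instead of being stitched as meshes. The Y-shaped skeleton, radius and iso-level are invented for illustration, not the thesis's actual model.

```python
import numpy as np

def seg_dist(p, a, b):
    # Euclidean distance from point p to the segment ab.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def vessel_field(p, skeleton, radius=0.3):
    # Gaussian falloff around every segment; the surface is field = 0.5.
    # Summing contributions makes branch junctions blend smoothly.
    return sum(np.exp(-(seg_dist(p, a, b) / radius) ** 2) for a, b in skeleton)

# Parent branch splitting into two daughter branches (a simple ramification).
skel = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
        (np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.5, 0.0])),
        (np.array([1.0, 0.0, 0.0]), np.array([2.0, -0.5, 0.0]))]

on_axis  = vessel_field(np.array([0.5, 0.0, 0.0]), skel)  # inside the vessel
far_away = vessel_field(np.array([0.5, 1.0, 0.0]), skel)  # outside
```

Because the field is analytical, the same expression serves segmentation output, rendering, and virtual fly-throughs without converting to a mesh.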
Topological modifications of animated surfaces
To better understand the purpose of this work, I will first introduce the mathematical notions I used during this internship, as well as the applications that this work could have in the future. This work is mainly based on Morse theory and on the algorithms presented in [3] and [5], which is why I will give some details about my implementation and its integration into the original program. It is the starting point for the research step because it highlights the main problems of the previous solutions; I will then present the ideas that have been proposed, implemented and tested. Finally, I will analyse the results provided by this new method, as well as its limitations.
Efficient Methods for Computational Light Transport
In this thesis we present contributions to different challenges of computational light transport.
Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also through time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for participating media rendering. In real-time rendering, we target the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In the transient state, we first formalize light transport simulation under this domain, and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
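A toy illustration of the transient-state idea above: instead of integrating all light into a single steady value, each Monte Carlo contribution is binned by its propagation time (path length divided by the speed of light). The single-bounce geometry, falloff model and units are invented for illustration.

```python
import numpy as np

C = 0.3        # speed of light in m/ns
BIN_NS = 0.5   # temporal resolution of the histogram

rng = np.random.default_rng(1)
light  = np.array([0.0, 0.0, 0.0])
sensor = np.array([1.0, 0.0, 0.0])

# Sample points on a diffuse wall at z = 2 and accumulate light->wall->sensor
# contributions into time bins (a crude single-bounce transient render).
pts = np.column_stack([rng.uniform(-1, 1, 5000),
                       rng.uniform(-1, 1, 5000),
                       np.full(5000, 2.0)])
path_len = (np.linalg.norm(pts - light, axis=1) +
            np.linalg.norm(pts - sensor, axis=1))
weight = 1.0 / path_len ** 2                  # simplistic distance falloff
bins = (path_len / C / BIN_NS).astype(int)
transient = np.bincount(bins, weights=weight)

# Integrating the transient histogram over time recovers the steady image.
steady = transient.sum()
```

The shortest light-wall-sensor path here is about 4.1 m, so no energy arrives before roughly 13 ns; early bins stay empty, which is the time-domain structure transient rendering exploits.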
Semantic Validation in Structure from Motion
The Structure from Motion (SfM) challenge in computer vision is the process
of recovering the 3D structure of a scene from a series of projective
measurements that are calculated from a collection of 2D images, taken from
different perspectives. SfM consists of three main steps: feature detection and
matching, camera motion estimation, and recovery of 3D structure from the
estimated intrinsic and extrinsic parameters and features.
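The camera motion estimation step is classically solved from matched features via the essential matrix. A minimal numpy sketch under a synthetic two-view setup (identity intrinsics, pure sideways translation, noiseless matches — all assumptions for illustration, not the pipeline used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view scene: 3D points in front of camera 1 ([I | 0]);
# camera 2 is P2 = [I | t] with t = (-1, 0, 0), i.e. shifted one unit along x.
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(20, 3))
t = np.array([-1.0, 0.0, 0.0])
x1 = X[:, :2] / X[:, 2:]            # normalized image coordinates, view 1
x2 = (X + t)[:, :2] / (X + t)[:, 2:]  # normalized image coordinates, view 2

def eight_point(x1, x2):
    # Linear (eight-point) estimate of the essential matrix: each match gives
    # one row of A with A.vec(E) = 0; take the null vector and project onto
    # the essential manifold (two equal singular values, third zero).
    ones = np.ones(len(x1))
    A = np.stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                  x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                  x1[:, 0], x1[:, 1], ones], axis=1)
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

E = eight_point(x1, x2)

# Every correspondence must satisfy the epipolar constraint x2' E x1 = 0.
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residual = np.abs(np.einsum('ij,jk,ik->i', h2, E, h1)).max()
```

With noisy real matches the residual is nonzero, which is exactly why erroneous matches from repetitive features corrupt the recovered motion.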
A problem encountered in SfM is that scenes lacking texture or with
repetitive features can cause erroneous feature matching between frames.
Semantic segmentation offers a route to validate and correct SfM models by
labelling pixels in the input images with the use of a deep convolutional
neural network. The semantic and geometric properties associated with classes
in the scene can be taken advantage of to apply prior constraints to each class
of object. The SfM pipeline COLMAP and the semantic segmentation pipeline
DeepLab were used. These, together with a planar reconstruction of the dense
model, serve to identify erroneous points that should be occluded from the
calculated camera position, given the semantic label and thus the prior
constraint of the reconstructed plane. Herein, semantic segmentation is integrated into SfM to
apply priors on the 3D point cloud, given the object detection in the 2D input
images. Additionally, the semantic labels of matched keypoints are compared and
inconsistent semantically labelled points discarded. Furthermore, semantic
labels on input images are used for the removal of objects associated with
motion in the output SfM models. The proposed approach is evaluated on a
dataset of 1102 images of a repetitive architecture scene. This project offers
a novel method for improved validation of 3D SfM models.
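A small sketch of the two label-based filters described above: a match is discarded when its keypoints carry different semantic labels, or when either endpoint lies on a class associated with motion. The label maps, class names and match format are illustrative, not DeepLab's or COLMAP's actual data structures.

```python
# Hypothetical set of classes treated as moving objects.
MOVING = {'car', 'person'}

def filter_matches(matches, labels1, labels2):
    # Keep a match only if both keypoints agree on a static semantic label.
    kept = []
    for (x1, y1), (x2, y2) in matches:
        l1, l2 = labels1[y1][x1], labels2[y2][x2]
        if l1 == l2 and l1 not in MOVING:
            kept.append(((x1, y1), (x2, y2)))
    return kept

# 2x2 toy label maps (indexed [y][x]) and three candidate matches.
labels1 = [['building', 'sky'], ['car', 'building']]
labels2 = [['building', 'building'], ['car', 'sky']]
matches = [((0, 0), (0, 0)),   # building -> building: consistent, kept
           ((1, 0), (1, 0)),   # sky -> building: inconsistent, dropped
           ((0, 1), (0, 1))]   # car -> car: moving class, dropped
kept = filter_matches(matches, labels1, labels2)
```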
Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.
This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar. PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
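A minimal sketch of extracting one planar feature from a sparse DVL-style point cloud: a total-least-squares plane fit whose normal is the right singular vector with the smallest singular value. The synthetic hull patch is invented; the thesis arranges such planar segments as factors in a factor graph rather than fitting them in isolation.

```python
import numpy as np

def fit_plane(points):
    # Total-least-squares plane fit: center the cloud, then take the right
    # singular vector of least variance as the plane normal.
    centroid = points.mean(axis=0)
    normal = np.linalg.svd(points - centroid)[2][-1]
    return centroid, normal

rng = np.random.default_rng(42)
# Noisy samples of the plane z = 2 (e.g. a flat hull patch ranged by a DVL).
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       rng.uniform(-5, 5, 200),
                       2.0 + rng.normal(0.0, 0.01, 200)])
centroid, normal = fit_plane(pts)
```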
Glossy Probe Reprojection for Interactive Global Illumination
Recent rendering advances dramatically reduce the cost of global illumination. But even with hardware acceleration, complex light paths with multiple glossy interactions are still expensive; our new algorithm stores these paths in precomputed light probes and reprojects them at runtime to provide interactivity. Combined with traditional light maps for diffuse lighting, our approach interactively renders all light paths in static scenes with opaque objects. Naively reprojecting probes with glossy lighting is memory-intensive, requires efficient access to the correctly reflected radiance, and exhibits problems at occlusion boundaries in glossy reflections. Our solution addresses all these issues. To minimize memory, we introduce an adaptive light probe parameterization that allocates increased resolution for shinier surfaces and regions of higher geometric complexity. To efficiently sample glossy paths, our novel gathering algorithm reprojects probe texels in a view-dependent manner using efficient reflection estimation and a fast rasterization-based search. Naive probe reprojection often sharpens glossy reflections at occlusion boundaries, due to changes in parallax. To avoid this, we split the convolution induced by the BRDF into two steps: we precompute probes using a lower material roughness and apply an adaptive bilateral filter at runtime to reproduce the original surface roughness. Combining these elements, our algorithm interactively renders complex scenes while fitting within the memory, bandwidth, and computation constraints of current hardware.
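An illustrative toy of the adaptive-parameterization idea above: allocate probe resolution as a function of surface roughness and local geometric complexity, so shinier and more detailed regions get more texels. The weighting and the power-of-two ladder are assumptions for the sketch, not the paper's actual allocation rule.

```python
def texel_budget(roughness, curvature, base=16, max_res=256):
    # Low roughness (sharper reflections) and high curvature both raise the
    # score; the budget grows on a power-of-two ladder, capped at max_res.
    score = (1.0 - roughness) + min(curvature, 1.0)
    return int(min(max_res, base * 2 ** round(score * 3)))

mirror_like = texel_budget(roughness=0.05, curvature=0.6)  # shiny, curved
rough_flat  = texel_budget(roughness=0.9, curvature=0.0)   # dull, flat
```

Any monotone mapping from shininess and complexity to resolution expresses the same trade-off: spend memory where reprojection error would be most visible.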
Geometric algorithms for cavity detection on protein surfaces
Macromolecular structures such as proteins power many cellular processes and functions.
These biological functions result from interactions between proteins and peptides,
catalytic substrates, nucleotides or even human-made chemicals. Thus, several
types of interaction can be distinguished: protein-ligand, protein-protein, protein-DNA,
and so on. Furthermore, those interactions only happen under chemical- and shape-complementarity
conditions, and usually take place in regions known as binding sites.
Typically, a protein consists of four structural levels. The primary structure of a protein
is made up of its amino acid sequences (or chains). Its secondary structure essentially
comprises α-helices and β-sheets, which are sub-sequences (or sub-domains) of amino
acids of the primary structure. Its tertiary structure results from the composition of
sub-domains into domains, which represent the geometric shape of the protein. Finally,
the quaternary structure of a protein results from the aggregation of two or more
tertiary structures, usually known as a protein complex.
This thesis fits within the scope of structure-based drug design and protein docking. Specifically,
we address the fundamental problem of detecting and identifying protein
cavities, which are often seen as putative binding sites for ligands in protein-ligand
interactions. In general, cavity prediction algorithms fall into three main categories:
energy-based, geometry-based, and evolution-based. Evolutionary methods build upon
evolutionary sequence conservation estimates; that is, these methods allow us to detect
functional sites through the computation of the evolutionary conservation of the
positions of amino acids in proteins. Energy-based methods build upon the computation
of interaction energies between protein and ligand atoms. In turn, geometry-based algorithms
build upon the analysis of the geometric shape of the protein (i.e., its tertiary
structure) to identify cavities. This thesis focuses on geometric methods.
We introduce here three new geometric-based algorithms for protein cavity detection.
The main contribution of this thesis lies in the use of computer graphics techniques
in the analysis and recognition of cavities in proteins, much in the spirit of molecular
graphics and modeling. As seen further ahead, these techniques include field-of-view
(FoV), voxel ray casting, back-face culling, shape diameter functions, Morse theory,
and critical points. The leading idea is to come up with protein shape segmentation,
much like we commonly do in mesh segmentation in computer graphics. In practice,
protein cavity algorithms are nothing more than segmentation algorithms designed for
proteins.
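A toy 2-D version of the geometry-based idea this thesis builds on, in the spirit of classic grid scanning (e.g. LIGSITE), not one of the three new algorithms themselves: a solvent cell becomes a cavity candidate when protein cells enclose it on both sides along a scan axis.

```python
import numpy as np

def cavity_candidates(occupied):
    # Scan along the x axis for each y: empty cells strictly between the
    # first and last occupied cells on that line count as buried.
    cav = np.zeros_like(occupied, dtype=bool)
    for y in range(occupied.shape[1]):
        hits = np.flatnonzero(occupied[:, y])
        if hits.size >= 2:
            for x in range(hits[0] + 1, hits[-1]):
                if not occupied[x, y]:
                    cav[x, y] = True
    return cav

# A ring of "protein" cells with a hollow interior at (2, 2).
occ = np.zeros((5, 5), dtype=bool)
occ[1, 1:4] = occ[3, 1:4] = True
occ[2, 1] = occ[2, 3] = True
cav = cavity_candidates(occ)
```

Real methods combine several scan directions (and, as in this thesis, richer cues such as shape diameter functions or Morse critical points) to distinguish true pockets from shallow surface grooves.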