
    Human perception-oriented segmentation for triangle meshes

    Mesh segmentation is an important research topic in computer graphics, in particular in geometric modeling. This is so because mesh segmentation techniques find many applications in movie production, computer animation, virtual reality, mesh compression, and digital games. In fact, triangle meshes are widely used in interactive applications, so that their segmentation into meaningful parts (also called meaningful segmentation, perceptive segmentation, or perceptually meaningful segmentation) is often seen as a way of speeding up user interaction and collision detection between mesh-covered objects in a 3D scene, as well as animating one or more meaningful parts (e.g., the head of a character) independently of the remaining parts of a given object. It happens that there is no known technique capable of correctly segmenting arbitrary meshes into meaningful parts, even when restricted to the freeform and non-freeform domains. Some techniques are more adequate for non-freeform objects (e.g., mechanical parts geometrically defined by quadrics), while others perform better in the domain of freeform objects. Only recently have a few techniques appeared that apply to the entire universe of freeform and non-freeform objects. Even worse is the fact that most segmentation techniques are not entirely automatic, in the sense that almost all of them require some sort of prerequisites and user assistance. Summing up, these three challenges, related to perceptual proximity, generality, and automation, are at the core of the work described in this thesis.
    To face these challenges, this thesis introduces the first contour-based mesh segmentation algorithm in the literature, inspired by the edge-based segmentation techniques common in image analysis and processing, as opposed to region-based segmentation techniques. Its leading idea is to first find the contour of each region, and then to identify and collect all of its inner triangles. The mesh regions found in this way correspond to saliences and recesses, which need not be strictly convex nor strictly concave, respectively. These regions, called relaxedly convex regions (or saliences) and relaxedly concave regions (or recesses), produce segmentations that are less sensitive to noise and, at the same time, more intuitive from the point of view of human perception; hence the name human perception-oriented (HPO) segmentation. Besides, and unlike the current state of the art in mesh segmentation, the existence of these relaxed regions makes the algorithm well suited to both non-freeform and freeform objects.
    In this thesis, we have also tackled a fourth challenge, which is related to the fusion of mesh segmentation and multi-resolution. Truly speaking, a plethora of segmentation techniques, as well as a number of multi-resolution techniques for triangle meshes, already exist in the literature. However, it is not so common to find algorithms and data structures that fuse these two concepts, multi-resolution and segmentation, into a single multi-resolution scheme serving applications that deal with both plain and segmented meshes, where a plain mesh is understood as a mesh with a single segment. So, we introduce such a novel multi-resolution segmentation scheme, called the extended Ghost Cell (xGC) scheme. This scheme preserves the shape of the meshes in both global and local terms, i.e., mesh segments and their boundaries, as well as creases and apices, are preserved no matter the level of resolution used during mesh simplification/refinement. Moreover, unlike other segmentation schemes, it makes it possible to have adjacent segments two or more resolution levels apart. This is particularly useful in computer animation, mesh compression and transmission, geometric modeling operations, scientific visualization, and computer graphics. In short, this thesis presents a fully automatic, general, and human perception-oriented scheme that symbiotically integrates the concepts of segmentation and multi-resolution for triangle meshes representing 3D objects.
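
    The contour-then-fill idea described above can be sketched as a flood fill over triangles that never crosses a contour edge. This is a minimal illustration assuming the region contours have already been extracted as a set of mesh edges; the data layout and function names are hypothetical, not the thesis's actual data structures.

        from collections import defaultdict, deque

        def segment_by_contours(triangles, contour_edges):
            """Group triangles into regions bounded by contour edges.

            triangles: list of (v0, v1, v2) vertex-index tuples.
            contour_edges: set of frozenset({va, vb}) boundary edges.
            Returns a list mapping each triangle index to a region label.
            """
            # Adjacency between triangles sharing an edge.
            edge_to_tris = defaultdict(list)
            for t, (a, b, c) in enumerate(triangles):
                for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                    edge_to_tris[e].append(t)

            labels = [-1] * len(triangles)
            region = 0
            for seed in range(len(triangles)):
                if labels[seed] != -1:
                    continue
                # Breadth-first flood fill that stops at contour edges.
                queue = deque([seed])
                labels[seed] = region
                while queue:
                    t = queue.popleft()
                    a, b, c = triangles[t]
                    for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                        if e in contour_edges:
                            continue  # boundary: do not leak into the next region
                        for n in edge_to_tris[e]:
                            if labels[n] == -1:
                                labels[n] = region
                                queue.append(n)
                region += 1
            return labels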

    Idealized computational models for auditory receptive fields

    This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or the combination of a time-causal generalized Gammatone filter over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing, and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. Comment: 55 pages, 22 figures, 3 tables
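
    For reference, the classical Gammatone filter that the paper generalizes has the well-known time-causal impulse response (here $a$ is an amplitude constant, $n$ the filter order, $b$ a bandwidth parameter, $f$ the center frequency, and $\varphi$ a phase offset; the generalized family in the paper adds further degrees of freedom beyond this standard form):

        g(t) = a \, t^{\,n-1} e^{-2\pi b t} \cos(2\pi f t + \varphi), \qquad t \ge 0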

    Physics-Aware Model Simplification for Interactive Virtual Environments

    Rigid body simulation is an integral part of Virtual Environments (VE) for autonomous planning, training, and design tasks. The underlying physics-based simulation of a VE must be accurate and computationally fast enough for the intended application; unfortunately, these are conflicting requirements. Two ways to perform fast and high-fidelity physics-based simulation are (1) model simplification and (2) parallel computation. Model simplification can be used to allow simulation at an interactive rate while introducing an acceptable level of error. Currently, manual model simplification is the most common way of speeding up simulation, but it is time consuming. Hence, in order to reduce the development time of VEs, automated model simplification is needed. This dissertation presents an automated model simplification approach based on geometric reasoning, spatial decomposition, and temporal coherence. Geometric reasoning is used to develop an accessibility-based algorithm for removing portions of geometric models that play no role in rigid body to rigid body interaction simulation. Removing such inaccessible portions of the interacting rigid body models has no influence on the simulation accuracy but reduces computation time significantly. Spatial decomposition is used to develop a clustering algorithm that reduces the number of fluid pressure computations, resulting in a significant speedup of rigid body and fluid interaction simulation. The temporal coherence algorithm reuses the computed force values from rigid body to fluid interaction based on the coherence of the fluid surrounding the rigid body. The simulations are further sped up by performing the computations on the graphics processing unit (GPU). The dissertation also presents the issues pertaining to the development of parallel algorithms for rigid body simulations, both on multi-core processors and on the GPU. The developed algorithms have enabled real-time, high-fidelity, six-degrees-of-freedom, time-domain simulation of unmanned sea surface vehicles (USSVs) and can be used for autonomous motion planning, tele-operation, and learning-from-demonstration applications.
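
    The temporal-coherence idea, reusing a computed fluid force while the surrounding fluid state changes little, can be sketched as a simple cache. The state-distance measure and tolerance below are illustrative assumptions, not the dissertation's actual criteria, and the solver object is hypothetical.

        import numpy as np

        class CoherentForceCache:
            """Reuse a computed fluid force while the nearby fluid state is coherent."""

            def __init__(self, solver, tolerance=1e-3):
                self.solver = solver          # object with an expensive .compute_force(state)
                self.tolerance = tolerance    # max state change before recomputation
                self._state = None
                self._force = None

            def force(self, fluid_state):
                """fluid_state: np.ndarray sampled around the rigid body."""
                if (self._state is not None and
                        np.linalg.norm(fluid_state - self._state) < self.tolerance):
                    return self._force        # coherent: reuse the cached force
                self._force = self.solver.compute_force(fluid_state)  # expensive call
                self._state = fluid_state.copy()
                return self._force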

    Modeling EMI Resulting from a Signal Via Transition Through Power/Ground Layers

    Signal transitions through layers on vias are very common in multi-layer printed circuit board (PCB) design. For a signal via transitioning through the internal power and ground planes, the return current must switch from one reference plane to another. The discontinuity of the return current at the via excites the power and ground planes, and results in noise on the power bus that can lead to signal integrity as well as EMI problems. Numerical methods, such as the finite-difference time-domain (FDTD) method, the method of moments (MoM), and the partial element equivalent circuit (PEEC) method, were employed herein to study this problem. The modeled results are supported by measurements. In addition, a common EMI mitigation approach of adding a decoupling capacitor was investigated with the FDTD method.
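
    As background on the FDTD method used here, a minimal one-dimensional Yee-scheme leapfrog update (free space, normalized units) looks as follows. The grid size and Gaussian source are arbitrary illustrative choices, unrelated to the paper's power-bus models.

        import numpy as np

        # Minimal 1D FDTD (Yee) update in normalized units,
        # with the Courant number set to 1 for simplicity.
        nz, nt = 200, 400
        ez = np.zeros(nz)          # electric field samples
        hy = np.zeros(nz - 1)      # magnetic field samples, staggered half a cell

        for n in range(nt):
            hy += ez[1:] - ez[:-1]            # update H from the curl of E
            ez[1:-1] += hy[1:] - hy[:-1]      # update E from the curl of H
            ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source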

    Parallel Mesh Processing

    Current research in computer graphics tries to meet the growing demands of users and produces ever more realistic images. Accordingly, the scenes and techniques used to render these images become increasingly complex. Such a development is inevitably tied to an increase in the required computing power, since the models a scene consists of may comprise billions of polygons and must be rendered in real time. Realistic image synthesis rests on three pillars: models, materials, and lighting. Today there are several techniques for efficient and realistic approximation of global illumination, and likewise there are algorithms for creating realistic materials. Although techniques for real-time rendering of models exist as well, they usually only work for scenes of medium complexity and fail for very complex scenes. The models form the basis of a scene; optimizing them has a direct impact on the efficiency of the material and lighting techniques, so that only an optimized model representation makes real-time rendering possible. Many of the models used in computer graphics are represented by triangle meshes. The data volume they contain is enormous, needed to capture the richness of detail of the respective objects and to cope with the growing demand for realism. Rendering complex models consisting of millions of triangles is a major challenge even for modern graphics cards. Therefore, it is necessary to develop efficient algorithms, especially for real-time simulations. Such algorithms should, on the one hand, support visibility culling, level of detail (LOD), out-of-core memory management, and compression; on the other hand, this optimization should work very efficiently so as not to additionally impede the rendering. This requires the development of parallel techniques that are able to process the enormous flood of data efficiently. The core contribution of this work consists of novel algorithms and data structures that were developed specifically for efficient parallel data processing and are able to render, as well as to model, very complex models and scenes in real time. These algorithms operate in two phases: first, in an offline phase, the data structure is created and optimized for parallel processing; the optimized data structure is then used in the second phase for real-time rendering. A further contribution of this work is an algorithm that is able to procedurally generate a very realistic-looking planet and render it in real time.
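
    The two-phase structure described above (offline construction of a parallel-friendly data structure, then real-time use) can be illustrated with a deliberately simplified sketch. The chunking, the leaf size, and the placeholder decimation routine are assumptions for illustration, not the thesis's actual algorithms.

        from concurrent.futures import ProcessPoolExecutor

        def simplify(triangles, target):
            # Placeholder decimation: keep an evenly spaced subset of triangles.
            # A real pipeline would use error-driven (e.g., quadric) simplification.
            step = max(1, len(triangles) // target)
            return triangles[::step]

        def build_chunk_lods(chunk):
            """Offline phase (illustrative): build coarser levels of detail for one
            mesh chunk by repeatedly halving its triangle budget."""
            lods = [chunk]
            while len(lods[-1]) > 64:                  # arbitrary leaf size
                lods.append(simplify(lods[-1], target=len(lods[-1]) // 2))
            return lods

        def preprocess(mesh_chunks):
            """Process independent mesh chunks in parallel, as in the offline phase."""
            with ProcessPoolExecutor() as pool:
                return list(pool.map(build_chunk_lods, mesh_chunks))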

    Unstructured un-split geometrical Volume-of-Fluid methods -- A review

    Geometrical Volume-of-Fluid (VoF) methods mainly support structured meshes, and only a small number of contributions in the scientific literature report results with unstructured meshes and three spatial dimensions. Unstructured meshes are traditionally used for handling geometrically complex solution domains that are prevalent when simulating problems of industrial relevance. However, three-dimensional geometrical operations are significantly more complex than their two-dimensional counterparts, which is confirmed by the ratio of publications with three-dimensional results on unstructured meshes to publications with two-dimensional results or support for structured meshes. Additionally, unstructured meshes present challenges in serial and parallel computational efficiency, accuracy, implementation complexity, and robustness. Ongoing research is still very active, focusing on different issues: interface positioning in general polyhedra, estimation of interface normal vectors, advection accuracy, and parallel and serial computational efficiency. This survey tries to give a complete and critical overview of classical as well as contemporary geometrical VoF methods, with concise explanations of the underlying ideas and sub-algorithms, focusing primarily on unstructured meshes and three-dimensional calculations. Reviewed methods are listed in historical order and compared in terms of accuracy and computational efficiency.
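
    To give a flavor of the interface-positioning sub-problem the review covers, the sketch below locates a planar interface in a unit cell by bisection on the plane offset, as in PLIC-type reconstruction. The sampled volume estimate is a brute-force stand-in for the analytic truncated-volume formulas that real VoF codes use, and the function name is hypothetical.

        import numpy as np

        def positioned_plane_offset(normal, volume_fraction, samples=40):
            """Find the offset d so that the half-space n.x <= d cuts off the given
            volume fraction of the unit cube [0,1]^3 (illustrative brute force)."""
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            # Sample the cell on a regular grid of cell-centered points.
            t = (np.arange(samples) + 0.5) / samples
            x, y, z = np.meshgrid(t, t, t, indexing="ij")
            proj = n[0] * x + n[1] * y + n[2] * z      # n.x at each sample point

            def cut_volume(d):
                return np.mean(proj <= d)              # fraction of cell below plane

            lo, hi = proj.min(), proj.max()
            for _ in range(50):                        # bisection on the offset d
                mid = 0.5 * (lo + hi)
                if cut_volume(mid) < volume_fraction:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)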

    Computational methods for 3D imaging of neural activity in light-field microscopy

    Light Field Microscopy (LFM) is a 3D imaging technique that captures spatial and angular information from light in a single snapshot. LFM is an appealing technique for applications in biological imaging due to its relatively simple implementation and fast 3D imaging speed. For instance, LFM can help to understand how neurons process information, as shown for functional neuronal calcium imaging. However, traditional volume reconstruction approaches for LFM suffer from low lateral resolution, high computational cost, and reconstruction artifacts near the native object plane. Therefore, in this thesis, we propose computational methods to improve the reconstruction performance of 3D imaging for LFM, with applications to imaging neural activity. First, we study the image formation process and propose methods for discretization and simplification of the LF system. Discretization is typically performed by computing the discrete impulse response at different input locations defined by a sampling grid. Unlike conventional methods, we propose an approach that uses shift-invariant subspaces to generalize the discretization framework used in LFM. Our approach allows the selection of diverse sampling kernels and sampling intervals, and the typical discretization method is a particular case of our formulation. Moreover, we propose a description of the system based on filter banks that fits the physics of the system. The periodic shift-invariance property per depth guarantees that the system can be accurately described using filter banks. This description leads to a novel method to reduce the computational time using the singular value decomposition (SVD); our simplification method capitalizes on the inherent low-rank behaviour of the system. Furthermore, we propose rearranging our filter-bank model into a linear convolutional neural network (CNN) that allows a more convenient implementation using existing deep-learning software. Then, we study the problem of 3D reconstruction from single light-field images. We propose the shift-invariant-subspace assumption as a prior for volume reconstruction under ideal conditions, and experimentally show that artifact-free (aliasing-free) reconstruction is achievable under these settings. Furthermore, the tools developed to study the forward model are exploited to design a reconstruction algorithm based on ADMM that allows artifact-free 3D reconstruction for real data. Contrary to traditional approaches, our method includes additional priors for reconstruction without dramatically increasing the computational complexity. We extensively evaluate our approach on synthetic and real data and show that it performs better than conventional model-based strategies in computational time, image quality, and artifact reduction. Finally, we exploit deep-learning techniques for reconstruction. Specifically, we propose to use two-photon imaging to enhance the performance of LFM when imaging neurons in brain tissue. The architecture of our network is derived from a sparsity-based reconstruction algorithm, the Iterative Shrinkage-Thresholding Algorithm (ISTA). Furthermore, we propose a semi-supervised training scheme based on Generative Adversarial Networks (GANs) that exploits knowledge of the forward model to achieve remarkable reconstruction quality. We propose efficient architectures to compute the forward model using linear CNNs. This description allows fast computation of the forward model and complements our reconstruction approach.
    Our method is tested under adverse conditions: lack of training data, background noise, and non-transparent samples. We experimentally show that our method performs better than model-based reconstruction strategies and typical neural networks for imaging neuronal activity in mammalian brain tissue. Our approach enjoys both the robustness of the model-based methods and the reconstruction speed of deep learning.
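
    For context, the classical ISTA iteration from which the network architecture above is derived solves the sparse recovery problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a gradient step with soft thresholding. The generic matrix A below stands in for the thesis's light-field forward model.

        import numpy as np

        def ista(A, b, lam, steps=200):
            """Iterative Shrinkage-Thresholding Algorithm for
            min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                g = A.T @ (A @ x - b)               # gradient of the smooth term
                z = x - g / L                       # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x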

    Workshop on the Integration of Finite Element Modeling with Geometric Modeling

    The Workshop on the Integration of Finite Element Modeling with Geometric Modeling was held on 12 May 1987 to discuss the geometric modeling requirements of the finite element modeling process and to better understand the technical aspects of integrating these two areas. The 11 papers are presented, except for one for which only the abstract is given.

    Signal Processing on Textured Meshes

    In this thesis we extend signal processing techniques originally formulated in the context of image processing to techniques that can be applied to signals on arbitrary triangle meshes. We develop methods for the two most common representations of signals on triangle meshes: signals sampled at the vertices of a finely tessellated mesh, and signals mapped to a coarsely tessellated mesh through texture maps. Our first contribution is the combination of Lagrangian Integration and the Finite Element Method in the formulation of two signal processing tasks: Shock Filters for texture and geometry sharpening, and Optical Flow for texture registration. Our second contribution is the formulation of Gradient-Domain processing within the texture atlas. We define a function space that handles chart discontinuities, and linear operators that capture the metric distortion introduced by the parameterization. Our third contribution is the construction of a spatiotemporal atlas parameterization for evolving meshes. Our method introduces localized remeshing operations and a compact parameterization that improves geometry and texture video compression. We show temporally coherent signal processing using partial correspondences.
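
    As a minimal example of the first signal representation mentioned above (per-vertex samples), the following sketch applies one explicit step of umbrella-operator Laplacian smoothing to a vertex signal. This is a generic textbook operation given for orientation, not one of the thesis's contributions.

        from collections import defaultdict
        import numpy as np

        def laplacian_smooth(signal, triangles, lam=0.5):
            """One explicit smoothing step s_i += lam * (mean of neighbors - s_i)
            for a scalar signal sampled at mesh vertices (umbrella weights)."""
            neighbors = defaultdict(set)
            for a, b, c in triangles:
                neighbors[a] |= {b, c}
                neighbors[b] |= {a, c}
                neighbors[c] |= {a, b}
            out = np.array(signal, dtype=float)
            for i, nbrs in neighbors.items():
                out[i] += lam * (np.mean([signal[j] for j in nbrs]) - signal[i])
            return out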