
    Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach

    This paper proposes a probabilistic approach for the detection and tracking of particles in fluorescence time-lapse imaging. In the presence of very noisy, poor-quality data, particles and trajectories can be characterized by an a contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and that are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art. Comment: published in Machine Vision and Applications.
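    The a contrario principle above can be illustrated in a few lines: count how many "bright" pixels fall in a window, and flag the window when that count is wildly improbable under an i.i.d. background model. The sketch below is a generic illustration of the idea, not the paper's detector; the window size, the top-decile threshold rule, and the binomial background model are assumptions made here for the example.

```python
import numpy as np
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p): probability of seeing at least
    # k "bright" pixels in a window of n pixels by pure chance
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def a_contrario_detect(img, win=3, eps=1.0):
    # "Bright" = top decile of the image; the background probability p
    # is estimated directly from the data, so nothing has to be trained
    thresh = np.quantile(img, 0.9)
    p = float(np.mean(img >= thresh))
    h, w = img.shape
    n_tests = (h - win + 1) * (w - win + 1)    # number of tested windows
    dets = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            k = int(np.sum(img[i:i + win, j:j + win] >= thresh))
            nfa = n_tests * binom_tail(win * win, k, p)
            if nfa < eps:                      # an "eps-meaningful" event
                dets.append((i, j, nfa))
    return dets

rng = np.random.default_rng(0)
img = rng.random((20, 20))
img[8:11, 8:11] = 2.0        # implant one bright 3x3 "particle"
dets = a_contrario_detect(img)
```

    With eps = 1, at most one false alarm is expected over all windows of a pure-noise image; this built-in control of the expected number of false detections is what removes the need for per-sequence parameter tuning.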

    Isogeometric Analysis and thermomechanical Mortar contact problems

    Thermomechanical Mortar contact algorithms and their application to NURBS-based Isogeometric Analysis are investigated in the context of nonlinear elasticity. Mortar methods are applied to both the mechanical and the thermal field in order to model frictional contact, the energy transfer between the surfaces, and frictional heating. A series of simplifications is considered so that a wide range of established numerical techniques for Mortar methods, such as segmentation, can be transferred to IGA without modification. The performance of the proposed framework is illustrated with representative numerical examples. (C) 2014 Elsevier B.V. All rights reserved.
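    Stripped of NURBS discretizations, segmentation, and the thermal field, the essence of a mortar-style contact formulation is that non-penetration is enforced weakly through Lagrange multipliers, yielding a saddle-point system. A deliberately minimal 1D sketch (two elastic bars and one active contact constraint; the stiffness, gap, and load values are invented for illustration and this is in no way the paper's IGA formulation):

```python
import numpy as np

# Bar A (stiffness kA) is fixed on the left and pushed right by force F;
# bar B (stiffness kB) is fixed on the right, across an initial gap g.
# Closed contact enforces u1 - u2 = g via a Lagrange multiplier lam
# (the contact force), giving the saddle-point system
#   [ kA  0   1 ] [u1 ]   [F]
#   [ 0   kB -1 ] [u2 ] = [0]
#   [ 1  -1   0 ] [lam]   [g]
kA, kB, g, F = 1.0, 1.0, 0.1, 1.0
K = np.array([[kA,  0.0,  1.0],
              [0.0, kB,  -1.0],
              [1.0, -1.0, 0.0]])
u1, u2, lam = np.linalg.solve(K, np.array([F, 0.0, g]))
assert lam > 0.0  # positive contact force: the constraint is indeed active
```

    In a real mortar method the single scalar constraint becomes an integral coupling between non-matching surface meshes, but the resulting algebraic structure is this same saddle-point form.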

    Topological visualization of tensor fields using a generalized Helmholtz decomposition

    Analysis and visualization of fluid flow datasets have become increasingly important with the development of computer graphics. Although many direct visualization methods have been applied to tensor fields, those methods can produce considerable visual clutter. The Helmholtz decomposition has been widely used to analyze and visualize vector fields, and it is also useful in the topological analysis of vector fields. However, there has been no previous work applying the Helmholtz decomposition to tensor fields. We present a method for computing the Helmholtz decomposition of tensor fields of arbitrary order and demonstrate its application. The Helmholtz decomposition splits a tensor field into divergence-free and curl-free parts. The curl-free part is irrotational, and it is useful for isolating the local maxima and minima of divergence (foci of sources and sinks) in the tensor field without interference from curl-based features. The divergence-free part is solenoidal, and it is useful for isolating the centers of vortices in the tensor field. Topological visualization using this decomposition can classify critical points of two-dimensional tensor fields and critical lines of 3D tensor fields. Unlike several other methods, this approach does not depend on computing eigenvectors, tensor invariants, or hyperstreamlines; instead, it can be computed by solving a sparse linear system of equations based on finite difference approximation operators. Ours is an indirect visualization method, unlike direct visualization, which can suffer from visual clutter. The topological analysis also generates a single separating contour that roughly partitions the tensor field into irrotational and solenoidal regions. We demonstrate the approach on second-order and fourth-order tensor fields. It provides a concise representation of the global structure of the field, together with intuitive and useful information about the structure of tensor fields. However, the method does not extract the exact locations of critical points and lines.
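    For readers unfamiliar with the decomposition itself, the order-1 (vector field) case is easy to state concretely: each Fourier mode is projected onto the wave vector (curl-free part) and onto its orthogonal complement (divergence-free part). The sketch below uses an FFT on a periodic grid for brevity; the paper's method instead solves a sparse finite-difference system and handles tensors of arbitrary order.

```python
import numpy as np

def helmholtz_2d(vx, vy):
    # Split a periodic 2D vector field into curl-free + divergence-free
    # parts by projecting each Fourier mode onto/off the wave vector.
    n = vx.shape[0]
    k = np.fft.fftfreq(n) * n                # integer wave numbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                           # avoid division by zero
    fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
    dot = (kx * fx + ky * fy) / k2
    cfx, cfy = dot * kx, dot * ky            # curl-free (longitudinal) part
    cfx[0, 0] = cfy[0, 0] = 0.0              # mean flow goes to div-free part
    dfx, dfy = fx - cfx, fy - cfy            # divergence-free (transverse) part
    ifft = lambda a: np.real(np.fft.ifft2(a))
    return (ifft(cfx), ifft(cfy)), (ifft(dfx), ifft(dfy))

# sanity check: a pure gradient field (a source) has no solenoidal part
n = 32
x = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(x, x, indexing="ij")
vx, vy = np.cos(X), np.cos(Y)     # = grad(sin x + sin y), hence curl-free
(cx, cy), (dx, dy) = helmholtz_2d(vx, vy)
```

    For the gradient field above, the divergence-free component vanishes to machine precision and the curl-free component reproduces the input, which is exactly the separation the topological visualization relies on.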

    Steklov Spectral Geometry for Extrinsic Shape Analysis

    We propose using the Dirichlet-to-Neumann operator as an extrinsic alternative to the Laplacian for spectral geometry processing and shape analysis. Intrinsic approaches, usually based on the Laplace-Beltrami operator, cannot capture the spatial embedding of a shape up to rigid motion, and many previous extrinsic methods lack theoretical justification. Instead, we consider the Steklov eigenvalue problem, computing the spectrum of the Dirichlet-to-Neumann operator of a surface bounding a volume. A remarkable property of this operator is that it completely encodes volumetric geometry. We use the boundary element method (BEM) to discretize the operator, accelerated by hierarchical numerical schemes and preconditioning; this pipeline allows us to solve eigenvalue and linear problems on large-scale meshes despite the density of the Dirichlet-to-Neumann discretization. We further demonstrate that our operators naturally fit into existing frameworks for geometry processing, making a shift from intrinsic to extrinsic geometry as simple as substituting the Laplace-Beltrami operator with the Dirichlet-to-Neumann operator. Comment: additional experiments added.
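    To see why the Dirichlet-to-Neumann operator is natural for spectral analysis, consider the one classical closed-form case: the unit disk, where the operator is diagonal in the boundary Fourier basis and the Steklov spectrum is 0, 1, 1, 2, 2, ... The sketch below assembles that operator numerically; it is an analytic construction for intuition only, not the paper's BEM pipeline for general volumetric shapes.

```python
import numpy as np

def dtn_disk(n, radius=1.0):
    # The harmonic extension of the boundary mode e^{ik theta} into the
    # disk is (r/R)^{|k|} e^{ik theta}, so its normal derivative on the
    # boundary is (|k|/R) e^{ik theta}: the Dirichlet-to-Neumann map is
    # diagonal in the Fourier basis with symbol |k|/R.
    k = np.fft.fftfreq(n) * n                  # integer frequencies
    F = np.fft.fft(np.eye(n), axis=0)          # DFT matrix
    Finv = np.fft.ifft(np.eye(n), axis=0)      # inverse DFT matrix
    return np.real(Finv @ np.diag(np.abs(k) / radius) @ F)

D = dtn_disk(64)
steklov = np.sort(np.linalg.eigvalsh((D + D.T) / 2))  # symmetrize fp noise
```

    The leading eigenvalues come out as 0, 1, 1, 2, 2, reproducing the classical disk spectrum; for general shapes no such closed form exists, which is where the BEM discretization of the paper takes over.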

    Doctor of Philosophy

    With modern computational resources rapidly advancing towards exascale, large-scale simulations useful for understanding natural and man-made phenomena are becoming increasingly accessible. As a result, the size and complexity of data representing such phenomena are also increasing, making the role of data analysis in propelling science even more integral. This dissertation presents research addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single "correct" reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, with serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of "correctness" of features for certain goals by revealing the phenomena of importance in the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature.
The residual numerical errors can violate consistency with the underlying theory by breaching fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency the most fundamental characteristic of computational analysis, one that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of the unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research into more general reference frames and more sophisticated domain discretizations.
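    A small taste of why combinatorial representations give consistency guarantees: the Poincaré index of a critical point can be computed purely from the cyclic sequence of vector directions around it, so the result is always an integer and cannot drift into a physically impossible value. The sketch below illustrates that general idea in 2D; it is not the dissertation's discrete theory.

```python
import numpy as np

def poincare_index(vectors):
    # Combinatorial Poincaré index of a closed loop of 2D vector samples:
    # sum the (wrapped) angle turns between consecutive samples and
    # divide by 2*pi. Index +1 -> source/sink/center, -1 -> saddle.
    ang = np.arctan2(vectors[:, 1], vectors[:, 0])
    turns = np.diff(np.concatenate([ang, ang[:1]]))
    turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return round(turns.sum() / (2 * np.pi))

# sample two toy fields on a small circle around the origin
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
loop = np.stack([np.cos(t), np.sin(t)], axis=1)
source = loop                                     # v = (x, y):  index +1
saddle = np.stack([loop[:, 0], -loop[:, 1]], 1)   # v = (x, -y): index -1
```

    Because the output is an integer by construction, the fundamental law that indices sum consistently over a region can be checked and enforced exactly, which is the flavor of guarantee the discrete theory provides.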

    Feature based estimation of myocardial motion from tagged MR images

    In the past few years we have witnessed an increase in mortality due to cancer relative to mortality due to cardiovascular diseases. In 2008, the Netherlands Statistics Agency reported that 33,900 people died of cancer against 33,100 deaths due to cardiovascular diseases, making cancer the number one cause of death in the Netherlands [33]. Even though the number of people affected by heart disease is continually rising, they "simply don't die of it", according to research director Prof. Mat Daemen of the research institute CARIM of the University of Maastricht [50]. The reason for this is early diagnosis, and the treatment of people with identified risk factors for diseases like ischemic heart disease, hypertrophic cardiomyopathy, thoracic aortic disease, pericardial (sac around the heart) disease, cardiac tumors, pulmonary artery disease, valvular disease, and congenital heart disease before and after surgical repair. Cardiac imaging plays a crucial role in early diagnosis, since it allows the accurate investigation of a large amount of imaging data in a small amount of time. Moreover, cardiac imaging reduces the costs of inpatient care, as recent studies have shown [77]. With this in mind, in this work we have provided several tools to help investigate cardiac motion. In chapters 2 and 3 we have explored a novel variational optic flow methodology based on multi-scale feature points to extract cardiac motion from tagged MR images. Compared to constant-brightness methods, this new approach exhibits several advantages. Although the intensity of critical points is also influenced by fading, critical points retain their characteristics even in the presence of intensity changes, such as in MR imaging. In an experiment in section 5.4 we have applied this optic flow approach directly to tagged MR images. A visual inspection confirmed that the extracted motion fields realistically depict the cardiac wall motion.
The method also exploits the advantages of the multiscale framework. Because the sparse velocity formulas 2.9, 3.7, 6.21, and 7.5 provide a number of equations equal to the number of unknowns, the method does not suffer from the aperture problem when retrieving the velocities associated with the critical points. In chapters 2 and 3 we have moreover introduced a smoothness component of the optic flow equation described by means of covariant derivatives, a novelty in the optic flow literature. Many variational optic flow methods have a smoothness component that penalizes deviations from global assumptions such as isotropic or anisotropic smoothness. In the proposed smoothness term, deviations from a predefined motion model are penalized instead. Moreover, the proposed optic flow equation has been decomposed into rotation-free and divergence-free components. This decomposition allows independent tuning of the two components during the vector field reconstruction. The experiments and the table of errors provided in section 3.8 show that the combination of the smoothness term, influenced by a predefined motion model, and the Helmholtz decomposition in the optic flow equation reduces the average angular error substantially (20%-25%) with respect to a similar technique that employs only standard derivatives in the smoothness term. In section 5.3 we extracted the motion field of a phantom for which we know the ground truth and compared the performance of this optic flow method with that of other optic flow methods well known in the literature, such as the Horn and Schunck approach [76], the Lucas and Kanade technique [111], and the tuple image multi-scale optic flow constraint equation of Van Assen et al. [163]. Tests showed that the proposed optic flow methodology provides the smallest average angular error (AAE = 3.84 degrees) and an L2 norm of 0.1.
In this work we employed the Helmholtz decomposition also to study cardiac behavior, since the vector field decomposition allows us to investigate cardiac contraction and cardiac rotation independently. In chapter 4 we carried out an analysis of the cardiac motion of ten volunteers and one patient, in which we estimated the kinetic energy of the different components. This decomposition is useful since it allows us to visualize and quantify the contribution of each single vector field component to the heart beat. Local measurements of the kinetic energy have also been used to detect areas of the cardiac walls with little movement. Experiments on a patient, comparing a late enhancement cardiac image with an illustration of the cardiac kinetic energy on a bull's eye plot, showed that a correspondence exists between an infarcted area and an area with very small kinetic energy. With the aim of extending the proposed optic flow equation to a 3D approach in the future, in chapter 6 we investigated the 3D winding number approach as a tool to locate critical points in volume images. We simplified the mathematics involved with respect to a previous work [150] and provided several examples and applications, such as cardiac motion estimation from 3-dimensional tagged images, and follicle and neuronal cell counting. Finally, in chapter 7 we continued our investigation of volume tagged MR images by retrieving the cardiac motion field using a simple 3-dimensional version of the proposed optic flow equation based on standard derivatives. We showed that the retrieved motion fields display the contracting and rotating behavior of the cardiac muscle. We moreover extracted the through-plane component, which provides a realistic illustration of the vector field and is missed by 2-dimensional approaches.
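    The AAE figures quoted above are, in the optic flow literature, conventionally computed following Barron et al.: the angle between the homogeneous vectors (u, v, 1) of the estimated and ground-truth flows, averaged over the field. A sketch of that metric (assuming the standard convention; the constant test field below is made up):

```python
import numpy as np

def average_angular_error(u_est, v_est, u_gt, v_gt):
    # Angle between the homogeneous flow vectors (u, v, 1), averaged
    # over the field and reported in degrees.
    num = u_est * u_gt + v_est * v_gt + 1.0
    den = np.sqrt((u_est**2 + v_est**2 + 1.0) * (u_gt**2 + v_gt**2 + 1.0))
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

u_gt = v_gt = np.ones((8, 8))   # made-up constant ground-truth flow
perfect = average_angular_error(u_gt, v_gt, u_gt, v_gt)    # 0.0 degrees
biased = average_angular_error(u_gt + 0.2, v_gt, u_gt, v_gt)
```

    The appended 1 in (u, v, 1) keeps the metric well defined at zero-velocity pixels, which is why this convention is preferred over the plain 2D angle.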

    Spectral, Combinatorial, and Probabilistic Methods in Analyzing and Visualizing Vector Fields and Their Associated Flows

    In this thesis, we introduce several tools, each coming from a different branch of mathematics, for analyzing real vector fields and their associated flows. Beginning with a discussion of generalized vector field decompositions, mainly derived from the classical Helmholtz-Hodge decomposition, we decompose a field, with respect to an arbitrary vector-valued linear differential operator, into a kernel component and a remainder; this allows us to construct decompositions of either toroidal flows, or flows obeying differential equations of second (or even fractional) order, plus a remainder. The algorithm is based on the fast Fourier transform, which guarantees rapid processing and an implementation that follows directly from the spectral treatment of differentiation. Moreover, we present two combinatorial methods to process 3D steady vector fields, both of which use graph algorithms to extract features from the underlying vector field. Combinatorial approaches are known to be less sensitive to noise than the extraction of individual trajectories. Both methods are extensions of an existing 2D technique to 3D fields. We observed that the first technique can generate overly coarse results, and we therefore present a second method that works with the same concepts but produces more detailed results. Finally, we discuss several possibilities for categorizing the invariant sets with respect to the flow. Existing methods for analyzing the separation of streamlines are often restricted to a finite time or a local area. Within this work, we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles.
We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular Finite-Time Lyapunov Exponents and discuss the discrepancies. Gauss' theorem, which relates the flow through a surface to the vector field inside the surface, is an important tool in flow visualization. We exploit the fact that the theorem can be further refined on polygonal cells, and construct a process that encodes the particle movement through the boundary facets of these cells using transition matrices. By pure power iteration of transition matrices, various topological features, such as separation and invariant sets, can be extracted without having to rely on classical techniques, e.g., interpolation, differentiation, and numerical streamline integration.
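    The transition-matrix idea is easy to demonstrate in one dimension: discretize the domain into cells, encode where the flow map sends each cell as a row-stochastic matrix, and power-iterate an initial particle distribution. A toy sketch (a 1D stand-in for the thesis's polygonal-cell construction; the two-basin map is invented for illustration):

```python
import numpy as np

# Encode a 1D "flow map" on a grid of n cells as a row-stochastic
# transition matrix, then power-iterate to expose the invariant sets.
n = 20
x = (np.arange(n) + 0.5) / n
step = np.where(x < 0.5, -1, 1)            # two basins: drift to 0 or n-1
target = np.clip(np.arange(n) + step, 0, n - 1)
P = np.zeros((n, n))
P[np.arange(n), target] = 1.0              # deterministic transitions

dist = np.full(n, 1.0 / n)                 # start from a uniform ensemble
for _ in range(n):                         # pure power iteration
    dist = dist @ P
```

    All mass ends up in the two absorbing cells, i.e., the invariant sets, and the basin boundary shows up as the split of mass between them; no interpolation or streamline integration is involved.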