
    Higher-Order Regularization in Computer Vision

    At the core of many computer vision models lies the minimization of an objective function consisting of a sum of functions with few arguments. The order of the objective function is defined as the highest number of arguments of any summand. To reduce ambiguity and noise in the solution, regularization terms are incorporated into the objective function, enforcing different properties of the solution. The most commonly used regularization is penalization of boundary length, which requires a second-order objective function. Most of this thesis is devoted to introducing higher-order regularization terms and presenting efficient minimization schemes. One topic of the thesis covers a reformulation of a large class of discrete functions into an equivalent form. The reformulation is shown, both in theory and in practical experiments, to be advantageous for higher-order regularization models based on curvature and second-order derivatives. Another topic is the parametric max-flow problem. An analysis is given, showing its inherent limitations for the large-scale problems that are common in computer vision. The thesis also introduces a segmentation approach for finding thin and elongated structures in 3D volumes. Using a line-graph formulation, it is shown how to efficiently regularize with respect to higher-order differential geometric properties such as curvature and torsion. Furthermore, an efficient optimization approach for a multi-region model is presented which, in addition to standard regularization, is able to enforce geometric constraints such as inclusion or exclusion of different regions. The final part of the thesis deals with dense stereo estimation. A new regularization model is introduced, penalizing the second-order derivatives of a depth or disparity map. Compared to previous second-order approaches to dense stereo estimation, the new regularization model is shown to be more easily optimized.
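
    A minimal numerical sketch of such a second-order penalty on a disparity map, using plain finite differences, is given below; the function name, the squared penalty and the equal weighting of the terms are illustrative assumptions rather than the thesis's actual formulation or optimization scheme.

        import numpy as np

        def second_order_penalty(disparity):
            # Sum of squared second differences: a discrete stand-in for penalizing
            # the second-order derivatives of a disparity map (illustrative only).
            d_xx = disparity[:, 2:] - 2 * disparity[:, 1:-1] + disparity[:, :-2]
            d_yy = disparity[2:, :] - 2 * disparity[1:-1, :] + disparity[:-2, :]
            d_xy = (disparity[1:, 1:] - disparity[1:, :-1]
                    - disparity[:-1, 1:] + disparity[:-1, :-1])
            return (d_xx ** 2).sum() + (d_yy ** 2).sum() + 2 * (d_xy ** 2).sum()

        # A planar (affine) disparity map is not penalized, a curved one is.
        x, y = np.meshgrid(np.arange(64), np.arange(64))
        print(second_order_penalty(0.5 * x + 0.2 * y))   # ~0 for a slanted plane
        print(second_order_penalty(0.01 * x * y))        # > 0 for a curved surface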

    Generalized fast marching method for computing highest threatening trajectories with curvature constraints and detection ambiguities in distance and radial speed

    Work presented at the 9th Conference on Curves and Surfaces, 2018, Arcachon. We present a recent numerical method devoted to computing curves that globally minimize an energy featuring both a data-driven term and a second-order curvature-penalizing term. Applications to image segmentation are discussed. We then describe in detail recent progress on radar network configuration, in which the optimal curves represent an opponent's trajectories.
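
    As a rough illustration of the kind of energy being minimized, the sketch below evaluates a data-driven term plus a curvature penalty on a discrete polyline; the discretization (turning angles for curvature, vertex sampling of the cost image) and the function name are assumptions, and it does not reproduce the paper's fast marching method, which searches globally over all admissible curves.

        import numpy as np

        def discrete_curve_energy(points, data_cost, lam=1.0):
            # Energy of a polyline of (x, y) vertices: data cost sampled at the
            # vertices, weighted by segment length, plus squared turning angles per
            # unit length as a curvature penalty (illustrative discretization only).
            pts = np.asarray(points, dtype=float)
            segs = np.diff(pts, axis=0)
            lengths = np.linalg.norm(segs, axis=1)
            idx = np.round(pts[:-1]).astype(int)
            data_term = float(np.sum(data_cost[idx[:, 1], idx[:, 0]] * lengths))
            t = segs / lengths[:, None]                      # unit tangents
            cos_a = np.clip(np.einsum('ij,ij->i', t[:-1], t[1:]), -1.0, 1.0)
            angles = np.arccos(cos_a)                        # turning angles
            ds = 0.5 * (lengths[:-1] + lengths[1:])
            curvature_term = float(np.sum((angles / ds) ** 2 * ds))
            return data_term + lam * curvature_term

        # A straight path pays no curvature penalty, a bent one does.
        cost = np.ones((100, 100))
        print(discrete_curve_energy([(10, 10), (30, 30), (50, 50)], cost))
        print(discrete_curve_energy([(10, 10), (30, 50), (50, 10)], cost))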

    Locally Adaptive Frames in the Roto-Translation Group and their Applications in Medical Imaging

    Locally adaptive differential frames (gauge frames) are a well-known effective tool in image analysis, used in differential invariants and PDE-flows. However, at complex structures such as crossings or junctions, these frames are not well-defined. Therefore, we generalize the notion of gauge frames on images to gauge frames on data representations $U:\mathbb{R}^{d} \rtimes S^{d-1} \to \mathbb{R}$ defined on the extended space of positions and orientations, which we relate to data on the roto-translation group $SE(d)$, $d=2,3$. This makes it possible to define multiple frames per position, one per orientation. We compute these frames via exponential curve fits in the extended data representations in $SE(d)$. These curve fits minimize first- or second-order variational problems, which are solved by spectral decomposition of, respectively, a structure tensor or Hessian of data on $SE(d)$. We include these gauge frames in differential invariants and crossing-preserving PDE-flows acting on the extended data representation $U$, and we show their advantage compared to the standard left-invariant frame on $SE(d)$. Applications include crossing-preserving filtering and improved segmentations of the vascular tree in retinal images, and new 3D extensions of coherence-enhancing diffusion via invertible orientation scores.
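
    For orientation, the sketch below computes the classical locally adaptive (gauge) frame of a flat 2D image from the eigenvectors of its structure tensor; this is the standard construction that the paper generalizes to $SE(d)$, not the roto-translation-group version itself, and the function name and smoothing scale are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def gauge_frame_2d(image, sigma=2.0):
            # Derivatives and smoothed structure tensor components.
            fx = sobel(image, axis=1, output=float)
            fy = sobel(image, axis=0, output=float)
            jxx = gaussian_filter(fx * fx, sigma)
            jxy = gaussian_filter(fx * fy, sigma)
            jyy = gaussian_filter(fy * fy, sigma)
            # Orientation of the dominant eigenvector of [[jxx, jxy], [jxy, jyy]];
            # in flat regions (tensor ~ 0) this defaults to the image axes.
            theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
            v1 = np.stack([np.cos(theta), np.sin(theta)], axis=-1)   # gradient-like direction
            v2 = np.stack([-np.sin(theta), np.cos(theta)], axis=-1)  # isophote direction
            lam1 = 0.5 * (jxx + jyy + np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2))
            return lam1, v1, v2

        # Hypothetical usage on a synthetic step edge.
        img = np.zeros((64, 64)); img[:, 32:] = 1.0
        lam1, v1, v2 = gauge_frame_2d(img)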

    A Fast Method to Segment Images with Additive Intensity Value

    Master's thesis (Master of Science).

    Missing Surface Estimation Based on Modified Tikhonov Regularization: Application for Destructed Dental Tissue

    Estimation of missing digital information is mostly addressed by 1-D or 2-D signal processing methods; however, this problem can also emerge in multi-dimensional data, including 3-D images. Examples of 3-D images with missing edge information are often found in dental micro-CT, where the natural contours of dental enamel and dentine are partially dissolved or lost to caries. In this paper, we present a novel sequential approach to estimate the missing surface of an object. First, an initial correct contour is determined, interactively or automatically, for the starting slice. This contour information defines the local search area and provides the overall estimation pattern for the edge candidates in the next slice. The search for edge candidates in the next slice is performed in the direction perpendicular to the obtained initial edge in order to find and label the corrupted edge candidates. Subsequently, the location information of both the initial and the nominated edge candidates is transformed and segregated into two independent signals (X-coordinates and Y-coordinates), and the problem is recast as an error-concealment problem. In the next step, the missing samples of these signals are estimated using a modified Tikhonov regularization model with two new terms. One term contributes to the denoising of the corrupted signal by defining an estimation model for a group of mildly destructed samples, and the other term contributes to the estimation of the missing samples with the highest similarity to the samples of the signals obtained from the previous slice. Finally, the reconstructed signals are transformed back to an edge-pixel representation. The estimated edges in each slice are used as initial edge information for the next slice, and this procedure is repeated slice by slice until the entire contour of the destructed surface is estimated. The visual as well as quantitative results (using both contour-based and area-based metrics) for seven image data sets of tooth samples with considerable destruction of the dentin-enamel junction demonstrate that the proposed method can accurately interpolate the shape and position of the missing surfaces in computed tomography images in both 2-D and 3-D (e.g., 14.87 ± 3.87 μm mean distance (MD) error for the proposed method versus 7.33 ± 0.27 μm MD error between human experts, and 1.25 ± ~0% error rate (ER) for the proposed method versus 0.64 ± ~
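
    As a point of reference for the regularization step, the sketch below fills missing samples of a 1-D coordinate signal with classical Tikhonov regularization using a second-difference smoothness term; it omits the paper's two additional terms (denoising of mildly destructed samples and similarity to the previous slice), and the function name and parameter values are illustrative assumptions.

        import numpy as np

        def tikhonov_fill(signal, observed_mask, lam=10.0):
            # Minimize ||M(x - y)||^2 + lam * ||D2 x||^2, where M keeps observed
            # samples and D2 is the second-difference operator; missing samples are
            # filled in by the smoothness prior (classical Tikhonov, illustrative only).
            y = np.asarray(signal, dtype=float)
            m = np.asarray(observed_mask, dtype=float)
            n = y.size
            d2 = np.zeros((n - 2, n))
            for i in range(n - 2):
                d2[i, i:i + 3] = [1.0, -2.0, 1.0]
            a = np.diag(m) + lam * d2.T @ d2
            return np.linalg.solve(a, m * y)

        # A quadratic contour coordinate with a gap of missing samples.
        t = np.linspace(0.0, 1.0, 50)
        truth = 30.0 * t ** 2
        mask = np.ones_like(t)
        mask[20:30] = 0.0
        estimate = tikhonov_fill(truth * mask, mask, lam=50.0)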

    Preoperative Systems for Computer Aided Diagnosis based on Image Registration: Applications to Breast Cancer and Atherosclerosis

    Computer Aided Diagnosis (CAD) systems assist clinicians, including radiologists and cardiologists, in detecting abnormalities and highlighting conspicuous regions of possible disease. A pre-operative CAD system provides a framework that accepts relevant technical and clinical parameters as input, analyzes them with a predefined method, and presents the prospective output. In this work we developed Computer Aided Diagnosis systems for biomedical image analysis in two applications: breast cancer and atherosclerosis. The aim of the first CAD application is to optimize the registration strategy specifically for Breast Dynamic Infrared Imaging and to make it user-independent. Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, introducing thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. We implemented and evaluated three 3D time-series registration methods: (1) linear affine, (2) non-linear B-spline, and (3) Demons, applied to 12 datasets of healthy breast thermal images. The results were evaluated through normalized mutual information, with average values of 0.70±0.03, 0.74±0.03 and 0.81±0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as through breast-boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis showed that the symmetric diffeomorphic Demons registration method performs best, with the best breast alignment and non-negative Jacobian values that guarantee image similarity and anatomical consistency of the transformation, because its homologous forces shorten the pixel geometric disparities across all frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, stabilizing the local temperature oscillation. The aim of the second CAD application is to assess the contribution of calcification to plaque vulnerability and wall rupture, and to find its maximum resistance before breaking, in image-based models of carotid artery stenting. The role of calcification inside the fibroatheroma during carotid artery stenting is controversial, and cardiologists face two major problems during placement: (i) "plaque protrusion" (elastic fibrous caps containing early calcifications that penetrate inside the stent); and (ii) "plaque vulnerability" (stiff plaques with advanced calcifications that break the arterial wall or stent). Finite Element Analysis was used to simulate balloon and stent expansion as a preoperative patient-specific virtual framework. A nonlinear static structural analysis was performed on 20 patients acquired using in vivo MDCT angiography. The Agatston calcium score was obtained for each patient and a subject-specific local Elastic Modulus (EM) was calculated. The in silico results showed that, by imposing average ultimate external loads of 1.1 MPa on the balloon and 2.3 MPa on the stent, average ultimate stresses of 55.7±41.2 kPa and 171±41.2 kPa are obtained on the calcifications.
The study reveals that, during stent expansion, a significant positive correlation exists between the EM of calcification and the ultimate stress (R=0.85, p<0.0001) as well as the Plaque Wall Stress (PWS) (R=0.92, p<0.0001), whereas the Ca score showed insignificant associations with ultimate stress (R=0.44, p=0.057) and PWS (R=0.38, p=0.103), suggesting a minor impact of the Ca score on plaque rupture. These average data are in good agreement with results obtained by other research groups, and we believe this approach enriches the arsenal of tools available for pre-operative prediction of the carotid artery stenting procedure in the presence of calcified plaques.
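
    For the registration evaluation, the sketch below computes a normalized mutual information score between two images; the abstract does not state which normalization was used, so this sketch assumes the symmetric variant 2·I(A;B)/(H(A)+H(B)), which lies in [0, 1].

        import numpy as np

        def normalized_mutual_information(img_a, img_b, bins=64):
            # Symmetric normalization 2*I(A;B) / (H(A) + H(B)), in [0, 1]
            # (the exact variant used in the study is an assumption here).
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_ab = hist / hist.sum()
            p_a = p_ab.sum(axis=1)
            p_b = p_ab.sum(axis=0)
            h_ab = -np.sum(p_ab[p_ab > 0] * np.log(p_ab[p_ab > 0]))
            h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
            h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
            return 2.0 * (h_a + h_b - h_ab) / (h_a + h_b)

        # Identical frames score 1.0; unrelated noise scores near 0.
        rng = np.random.default_rng(0)
        frame = rng.random((128, 128))
        print(normalized_mutual_information(frame, frame))
        print(normalized_mutual_information(frame, rng.random((128, 128))))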

    Variational methods and its applications to computer vision

    Many computer vision applications such as image segmentation can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves non-convex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard to solve. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate appropriate regularizations into the mathematical model, which require complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it is easily extensible to graphs and can be successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite images (e.g. streets, rivers). In particular, we show results and performance figures for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
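
    As a baseline for comparison, the sketch below runs classical total-variation (ROF) denoising by explicit gradient descent, i.e. the kind of standard regularizer that the thesis argues over-smooths thin, low-contrast curvilinear structures; it is not the proposed curvilinear-structure-preserving model, and the function name, step size and iteration count are illustrative assumptions.

        import numpy as np

        def tv_denoise(image, lam=0.1, step=0.1, iters=200, eps=1e-6):
            # Explicit gradient descent on E(u) = 0.5*||u - f||^2 + lam*TV(u),
            # with a smoothed gradient magnitude; a standard baseline regularizer,
            # not the curvilinear-structure-preserving model of the thesis.
            f = np.asarray(image, dtype=float)
            u = f.copy()
            for _ in range(iters):
                ux = np.diff(u, axis=1, append=u[:, -1:])
                uy = np.diff(u, axis=0, append=u[-1:, :])
                mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
                px, py = ux / mag, uy / mag
                div = (np.diff(px, axis=1, prepend=px[:, :1])
                       + np.diff(py, axis=0, prepend=py[:1, :]))
                u -= step * ((u - f) - lam * div)
            return u

        # A noisy synthetic "crack": a thin bright line in Gaussian noise.
        rng = np.random.default_rng(1)
        noisy = np.zeros((64, 64))
        noisy[32, :] = 1.0
        noisy += 0.3 * rng.standard_normal(noisy.shape)
        smoothed = tv_denoise(noisy, lam=0.2)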