87 research outputs found

    Regularizers for Vector-Valued Data and Labeling Problems in Image Processing

    No full text
A review of recent developments in total variation-based regularizers is given, with an emphasis on vector-valued data. These regularizers have proven useful for restoring or enhancing data with multiple channels, and find particular use in relaxation techniques for labeling problems on continuous domains. The possible regularizers and their properties are considered in a unified framework.
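As a pointer to the kind of object surveyed here, one common channel-coupled regularizer for vector-valued data is the Frobenius-type vectorial total variation; this is a standard representative, not necessarily the specific formulation singled out by the paper:

```latex
% Vectorial total variation with Frobenius coupling of the channels,
% for u : \Omega \to \mathbb{R}^d (standard example of the reviewed class).
\mathrm{TV}_F(u) \;=\; \int_{\Omega} \Big( \sum_{i=1}^{d} \lvert \nabla u_i(x) \rvert^{2} \Big)^{1/2} \, dx
```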

    Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications

    No full text
Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles, where the particles interact between time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach that fuses these two techniques for motion capture has become known as the annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
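A minimal sketch of one time step with annealing layers is given below, assuming a generic log-likelihood function and Gaussian diffusion between layers; the names `log_lik`, `betas`, and `sigma` are illustrative placeholders, not the paper's parametrization:

```python
import numpy as np

def annealed_particle_filter_step(particles, log_lik, betas, sigma, rng):
    """One time step of an annealed particle filter (illustrative sketch).

    particles : (N, d) array of current particle states
    log_lik   : maps an (N, d) array to N log-likelihood values
    betas     : increasing annealing exponents, e.g. [0.25, 0.5, 1.0]
    sigma     : standard deviation of the Gaussian diffusion between layers
    """
    n = particles.shape[0]
    for beta in betas:
        # Weight particles with the tempered (annealed) likelihood.
        logw = beta * log_lik(particles)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample (interaction step) and diffuse to keep diversity.
        idx = rng.choice(n, size=n, p=w)
        particles = particles[idx] + sigma * rng.standard_normal(particles.shape)
    return particles

# Usage: track a 1-D state with a synthetic observation model.
rng = np.random.default_rng(0)
obs = 2.0
log_lik = lambda x: -0.5 * ((x[:, 0] - obs) ** 2)
parts = rng.standard_normal((200, 1))
parts = annealed_particle_filter_step(parts, log_lik, [0.25, 0.5, 1.0], 0.1, rng)
print(parts.mean())
```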

    Image labeling and grouping by minimizing linear functionals over cones

    Full text link
We consider energy minimization problems related to image labeling, partitioning, and grouping, which typically show up at mid-level stages of computer vision systems. A common feature of these problems is their intrinsic combinatorial complexity from an optimization point of view. Rather than trying to compute the global minimum - a goal we consider elusive in these cases - we wish to design optimization approaches which exhibit two relevant properties: first, in each application a solution with a guaranteed degree of suboptimality can be computed; second, the computations are based on clearly defined algorithms which do not comprise any (hidden) tuning parameters. In this paper, we focus on the second property and introduce a novel and general optimization technique to the field of computer vision which amounts to computing a suboptimal solution by just solving a convex optimization problem. As representative examples, we consider two binary quadratic energy functionals related to image labeling and perceptual grouping. Both problems can be considered as instances of a general quadratic functional in binary variables, which is embedded into a higher-dimensional space such that suboptimal solutions can be computed as minima of linear functionals over cones in that space (semidefinite programs). Extensive numerical results reveal that, on average, suboptimal solutions can be computed which yield a gap below 5% with respect to the global optimum in cases where it is known.
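The "linear functional over a cone" structure can be seen in the textbook semidefinite relaxation of a binary quadratic functional; the form below is the standard lifting given for illustration, not necessarily the paper's exact construction:

```latex
% Binary quadratic problem and its semidefinite (cone) relaxation:
% the lifted variable X stands in for xx^T, and a suboptimal binary
% labeling is recovered by rounding a factor of the optimal X.
\min_{x \in \{-1,1\}^n} x^{\top} Q x
\quad\longrightarrow\quad
\min_{X \succeq 0,\; \mathrm{diag}(X) = \mathbf{1}} \langle Q, X \rangle
```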

    Fast parallel algorithms for a broad class of nonlinear variational diffusion approaches

    Get PDF
Variational segmentation and nonlinear diffusion approaches have been very active research areas in the fields of image processing and computer vision in recent years. In the present paper, we review recent advances in the development of efficient numerical algorithms for these approaches. The performance of parallel implementations of these algorithms on general-purpose hardware is assessed. A mathematically clear connection between variational models and nonlinear diffusion filters is presented that allows one approach to be interpreted as an approximation of the other, and vice versa. Numerical results confirm that, depending on the parametrization, this approximation can be made quite accurate. Our results provide a perspective for uniform implementations of both nonlinear variational models and diffusion filters on parallel architectures.
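A minimal, serial sketch of one explicit Perona-Malik-type nonlinear diffusion step is shown below as a representative of the filter class discussed; it is not the parallel implementation evaluated in the paper, and the step size and contrast parameter are illustrative:

```python
import numpy as np

def nonlinear_diffusion_step(u, tau=0.2, lam=4.0):
    """One explicit update of Perona-Malik diffusion u_t = div(g(|grad u|) grad u)."""
    # Neighbor differences with replicated (Neumann) boundaries.
    un = np.roll(u, -1, axis=0); un[-1] = u[-1]
    us = np.roll(u,  1, axis=0); us[0]  = u[0]
    ue = np.roll(u, -1, axis=1); ue[:, -1] = u[:, -1]
    uw = np.roll(u,  1, axis=1); uw[:, 0]  = u[:, 0]
    dN, dS, dE, dW = un - u, us - u, ue - u, uw - u
    g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)   # edge-stopping diffusivity
    return u + tau * (g(np.abs(dN)) * dN + g(np.abs(dS)) * dS
                      + g(np.abs(dE)) * dE + g(np.abs(dW)) * dW)

# Usage: smooth a noisy step image while preserving the edge.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * np.random.default_rng(1).standard_normal(img.shape)
for _ in range(20):
    img = nonlinear_diffusion_step(img)
```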

    Analysis of optical flow models in the framework of calculus of variations

    Get PDF
In image sequence analysis, variational optical flow computations require the solution of a parameter-dependent optimization problem with a data term and a regularizer. In this paper we study existence and uniqueness of the optimizers. Our studies rely on quasiconvex functionals on the spaces $W^{1,p}(\Omega, \mathbb{R}^d)$ with $p > 1$, $BV(\Omega, \mathbb{R}^d)$, and $BD(\Omega)$. The methods that are covered by our results include several existing techniques. Experiments are presented that illustrate the behavior of these approaches.
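As a concrete instance of the data-term-plus-regularizer structure analyzed here, the classical Horn-Schunck functional (a quadratic, and hence quasiconvex, special case) reads:

```latex
% Horn-Schunck optical flow energy for the flow field (u, v):
% quadratic data term on the linearized brightness constancy,
% plus quadratic regularizer with weight \alpha > 0.
E(u, v) \;=\; \int_{\Omega} \big( I_x u + I_y v + I_t \big)^{2} \, dx
\;+\; \alpha \int_{\Omega} \big( \lvert \nabla u \rvert^{2} + \lvert \nabla v \rvert^{2} \big) \, dx
```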

    CT Image Segmentation Using FEM with Optimized Boundary Condition

    Get PDF
The authors propose a CT image segmentation method using structural analysis that is useful for objects with structural dynamic characteristics. The motivation for our research comes from the study of genetic activity. In order to reveal the roles of genes, it is necessary to create mutant mice and measure differences among them by scanning their skeletons with an X-ray CT scanner. The CT image needs to be manually segmented into pieces of the bones. It is very time-consuming to manually segment many mutant mouse models in order to reveal the roles of genes, so it is desirable to make this segmentation procedure automatic. Although numerous papers in the past have proposed segmentation techniques, no general segmentation method for the skeletons of living creatures has been established. Against this background, the authors propose a segmentation method based on the concept of a destruction analogy. To realize this concept, structural analysis is performed using the finite element method (FEM), as structurally weak areas can be expected to break under conditions of stress. The contribution of the method is its novelty, as no studies have so far used structural analysis for image segmentation. The method's implementation involves three steps. First, finite elements are created directly from the pixels of a CT image, and candidate areas where segmentation is thought to be appropriate are selected. In the second step, the destruction analogy is used to choose the single candidate with the highest strain as the segmentation target; the boundary conditions for the FEM are set automatically. Then the destruction analogy is carried out by reassigning high-strain pixels to the background, and this process is iterated until the object decomposes into two parts. Finally, CT image segmentation is demonstrated using various types of CT imagery.
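A minimal sketch of the iterate-until-split loop follows; `compute_strain` is a hypothetical placeholder standing in for the FEM structural analysis, and the boundary-condition setup described in the paper is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def destruction_analogy_split(mask, compute_strain, removal_fraction=0.01):
    """Iteratively turn the highest-strain object pixels into background
    until the object decomposes into two connected components.

    mask           : boolean 2-D array, True for object (bone) pixels
    compute_strain : placeholder for the FEM analysis; maps a mask to a
                     per-pixel strain image (hypothetical stand-in)
    """
    mask = mask.copy()
    while mask.any():
        _, num_parts = ndimage.label(mask)
        if num_parts >= 2:            # object decomposed into two pieces: stop
            break
        strain = np.where(mask, compute_strain(mask), -np.inf)
        # Reassign the top fraction of high-strain pixels to the background.
        k = max(1, int(removal_fraction * mask.sum()))
        idx = np.argpartition(strain.ravel(), -k)[-k:]
        flat = mask.ravel()           # view on mask, so writes modify it
        flat[idx] = False
    return mask
```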

    A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems

    Get PDF
Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRF). This study provided valuable insights into choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved have changed significantly. Specifically, the models today often include higher-order interactions, flexible connectivity structures, large label spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
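For reference, the discrete energies compared in such benchmarks are typically of the factor-graph form below (pairwise terms shown; the study also covers higher-order interactions):

```latex
% Discrete labeling energy over variables x_i taking values in a finite label set L,
% with unary terms \theta_i and pairwise terms \theta_{ij} over a graph (V, E).
E(x) \;=\; \sum_{i \in V} \theta_i(x_i) \;+\; \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j),
\qquad x_i \in L
```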

    Gender differences in the use of cardiovascular interventions in HIV-positive persons; the D:A:D Study

    Get PDF
    Peer reviewed