109 research outputs found

    Simultaneous Segmentation and Filtering Via Reduced Graph Cuts

    Recently, optimization with graph cuts has become very attractive but generally remains limited to small-scale problems due to the large memory requirement of graphs, even when restricted to binary variables. Unlike previous heuristics, which generally fail to fully capture details, a band-based method was proposed for reducing these graphs in image segmentation. This method provides small graphs while preserving thin structures, but it does not offer low memory usage when the amount of regularization is large. This is typically the case when images are corrupted by impulsive noise. In this paper, we overcome this limitation by embedding a new parameter in the method that both further reduces the graphs and filters the segmentation. This parameter avoids any post-processing step, appears to be generally less sensitive to noise variations, and offers good robustness against noise. We also provide an empirical way to tune this parameter automatically and illustrate its behavior for segmenting grayscale and color images.
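The binary graph-cut machinery the abstract above builds on can be sketched on a toy 1D signal. This is a minimal illustration, not the paper's reduced-graph method: t-links carry the unary data costs, n-links carry a Potts smoothness weight, and the minimum s-t cut gives the globally optimal binary labeling. All names and parameters here are illustrative.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; returns (flow value, set of s-side nodes)."""
    flow = 0.0
    while True:
        parent = {s: None}                     # BFS tree over the residual graph
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in list(cap[u].items()):
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                    # no augmenting path left:
            return flow, set(parent)           # reachable nodes form the s-side of the min cut
        b, v = float('inf'), t                 # bottleneck capacity along the path
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:           # push flow, update residual capacities
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b

def segment(signal, mu_bg, mu_fg, lam):
    """Binary segmentation of a 1D signal by a minimum s-t cut."""
    cap = defaultdict(lambda: defaultdict(float))
    n = len(signal)
    for p, v in enumerate(signal):
        cap['s'][p] = abs(v - mu_fg)           # paid if pixel p is labelled foreground
        cap[p]['t'] = abs(v - mu_bg)           # paid if pixel p is labelled background
    for p in range(n - 1):                     # Potts smoothness between neighbours
        cap[p][p + 1] = lam
        cap[p + 1][p] = lam
    _, s_side = max_flow(cap, 's', 't')
    return [0 if p in s_side else 1 for p in range(n)]
```

With `segment([0.0, 0.1, 0.9, 1.0], 0.0, 1.0, 0.2)` the two dark pixels stay background and the two bright ones become foreground; the per-pixel nodes are exactly the memory cost that graph-reduction methods such as the one above try to shrink.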

    2D Phase Unwrapping via Graph Cuts

    Phase imaging technologies such as interferometric synthetic aperture radar (InSAR), magnetic resonance imaging (MRI), or optical interferometry are nowadays widespread and increasingly used. The so-called phase unwrapping, which consists in the inference of the absolute phase from the modulo-2π phase, is a critical step in many of their processing chains, yet still one of their most challenging problems. We introduce an energy minimization based approach to 2D phase unwrapping. In this approach we address the problem by adopting a Bayesian point of view and a Markov random field (MRF) to model the phase. The maximum a posteriori estimation of the absolute phase gives rise to an integer optimization problem, for which we introduce a family of efficient algorithms based on existing graph cuts techniques. We term our approach and algorithms PUMA, for Phase Unwrapping MAx flow. As long as the prior potential of the MRF is convex, PUMA guarantees an exact global solution. In particular, it solves exactly all the minimum Lp norm (p ≥ 1) phase unwrapping problems, unifying in that sense a set of existing independent algorithms. For non-convex potentials we introduce a version of PUMA that, while yielding only approximate solutions, gives very useful phase unwrapping results. The main characteristic of the introduced solutions is the ability to blindly preserve discontinuities. Extending the previous versions of PUMA, we tackle denoising by exploiting a multi-precision idea, which allows us to use the same rationale both for phase unwrapping and denoising. Finally, the last presented version of PUMA uses a frequency diversity concept to unwrap phase images having large phase rates. A representative set of experiments illustrates the performance of PUMA.
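The core problem PUMA solves can be seen in one dimension with the classic Itoh method: sum the wrapped first differences of the modulo-2π phase. This is only an illustration of what "unwrapping" means, not PUMA itself (which solves the 2D problem via graph cuts and handles discontinuities that break Itoh's assumption); the function names are ours.

```python
import math

def wrap(phi):
    """Map a phase value to the principal interval [-pi, pi)."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

def unwrap_1d(wrapped):
    """Itoh's method: the absolute phase is recovered by integrating the
    wrapped first differences, valid when every true phase difference
    between neighbours has magnitude below pi."""
    out = [wrapped[0]]
    for k in range(1, len(wrapped)):
        out.append(out[-1] + wrap(wrapped[k] - wrapped[k - 1]))
    return out
```

For a smooth ramp the wrapped signal jumps by 2π whenever it crosses ±π, and `unwrap_1d` removes those jumps exactly; it is precisely when neighbouring differences exceed π (large phase rates, noise) that energy-minimization formulations like PUMA become necessary.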

    A markovian approach to unsupervised change detection with multiresolution and multimodality SAR data

    In the framework of synthetic aperture radar (SAR) systems, current satellite missions make it possible to acquire images at very high and multiple spatial resolutions with short revisit times. This scenario conveys a remarkable potential in applications to, for instance, environmental monitoring and natural disaster recovery. In this context, data fusion and change detection methodologies play major roles. This paper proposes an unsupervised change detection algorithm for the challenging case of multimodal SAR data collected by sensors operating at multiple spatial resolutions. The method is based on Markovian probabilistic graphical models, graph cuts, linear mixtures, generalized Gaussian distributions, Gram-Charlier approximations, maximum likelihood and minimum mean squared error estimation. It benefits from the SAR images acquired at multiple spatial resolutions and with possibly different modalities on the considered acquisition times to generate an output change map at the finest observed resolution. This is accomplished by modeling the statistics of the data at the various spatial scales through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images that are defined on the pixel grid at the finest resolution and would be collected if all the sensors could work at that resolution. A Markov random field framework is adopted to address the detection problem by defining an appropriate multimodal energy function that is minimized using graph cuts.
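A much simpler baseline for SAR change detection, and a common starting point before Markovian regularization of the kind described above, is the pixelwise log-ratio operator: the backscatter ratio of the two acquisitions is nearly 1 where nothing changed, so its log magnitude highlights change. This sketch is only that baseline with a hard threshold, not the paper's multiresolution method; the threshold value is an illustrative parameter.

```python
import math

def log_ratio_change_map(img1, img2, thresh):
    """Classic log-ratio operator for two co-registered SAR intensity
    images (lists of rows of positive values): mark a pixel as changed
    when |log(I2/I1)| exceeds the threshold."""
    return [[1 if abs(math.log(b / a)) > thresh else 0
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(img1, img2)]
```

The log of the ratio (rather than a difference) is standard for SAR because speckle noise is multiplicative; an MRF energy as in the paper would then smooth this raw map spatially instead of thresholding each pixel independently.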

    Mass & secondary structure propensity of amino acids explain their mutability and evolutionary replacements

    Why is an amino acid replacement in a protein accepted during evolution? The answer given by bioinformatics relies on the frequency of change of each amino acid by another one and the propensity of each to remain unchanged. We propose that these replacement rules are recoverable from the secondary structural trends of amino acids. A distance measure between high-resolution Ramachandran distributions reveals that structurally similar residues coincide with those found in substitution matrices such as BLOSUM: Asn → Asp, Phe → Tyr, Lys → Arg, Gln → Glu, Ile → Val, Met → Leu; with Ala, Cys, His, Gly, Ser, Pro, and Thr as structurally idiosyncratic residues. We also found a high average correlation (mean R = 0.85) between thirty amino acid mutability scales and the mutational inertia (I_X), which measures the energetic cost weighted by the number of observations at the most probable amino acid conformation. These results indicate that amino acid substitutions follow two optimally efficient principles: (a) amino acid interchangeability privileges their secondary structural similarity, and (b) the mutability of an amino acid depends directly on its biosynthetic energy cost and inversely on its frequency. These two principles are the underlying rules governing the observed amino acid substitutions. © 2017 The Author(s)
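The key ingredient above is a distance between Ramachandran distributions, i.e. between histograms over binned (φ, ψ) dihedral angles. As an illustration only (the paper's specific metric may differ), the Hellinger distance is one standard choice for comparing two discrete distributions:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions,
    e.g. normalized histograms over (phi, psi) Ramachandran bins.
    Ranges from 0 (identical) to 1 (disjoint support)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))
```

Two residues whose binned Ramachandran distributions give a small distance under such a metric would, in the paper's framework, be expected to appear as favored exchange pairs in substitution matrices.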

    Measuring Uncertainty in Graph Cut Solutions

    In recent years, graph cuts have become a popular tool for performing inference in Markov and conditional random fields. In this context, the question arises as to whether it might be possible to compute a measure of uncertainty associated with graph cut solutions. In this paper we answer this question by showing how the min-marginals associated with the label assignments of a random field can be efficiently computed using a new algorithm based on dynamic graph cuts. The min-marginal energies obtained by our proposed algorithm are exact, as opposed to the ones obtained from other inference algorithms such as loopy belief propagation and generalized belief propagation. The paper also shows how min-marginals can be used for parameter learning in conditional random fields.
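The min-marginal of a node-label pair (i, l) is the minimum energy over all labelings that fix node i to label l; the gap between a node's two min-marginals is the uncertainty measure the abstract refers to. The paper computes these efficiently with dynamic graph cuts; the sketch below only illustrates the definition by exhaustive enumeration on a tiny binary chain MRF with Potts pairwise terms (all parameter names are ours).

```python
from itertools import product

def min_marginals(unary, lam):
    """unary[i][l]: cost of giving label l in {0, 1} to node i of a chain;
    each pair of disagreeing neighbours pays lam (Potts model).
    Returns mm with mm[i][l] = min energy subject to x_i = l."""
    n = len(unary)

    def energy(x):
        e = sum(unary[i][x[i]] for i in range(n))
        e += lam * sum(x[i] != x[i + 1] for i in range(n - 1))
        return e

    return [[min(energy(x) for x in product((0, 1), repeat=n) if x[i] == l)
             for l in (0, 1)]
            for i in range(n)]
```

Note that `min(mm[i][0], mm[i][1])` equals the global minimum energy for every i, and a node whose two min-marginals are nearly equal is one whose label the MAP solution is least certain about.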

    Color Separation for Background Subtraction

    Background subtraction is a vital step in many computer vision systems. In background subtraction, one is given two (or more) frames of a video sequence taken with a still camera. Due to the stationarity of the camera, any color change in the scene is mainly due to the presence of moving objects. The goal of background subtraction is to separate the moving objects (also called the foreground) from the stationary background. Many background subtraction approaches have been proposed over the years. They are usually composed of two distinct stages, background modeling and foreground detection. Most standard background subtraction techniques focus on background modeling. In this thesis, we focus on improving foreground detection performance. We formulate background subtraction as a pixel labeling problem, where the goal is to assign each image pixel either a foreground or a background label. We solve the pixel labeling problem using a principled energy minimization framework. We design an energy function composed of three terms: the data, smoothness, and color separation terms. The data term is based on motion information between image frames. The smoothness term encourages the foreground and background regions to have spatially coherent boundaries. These two terms have been used for background subtraction before. The main contribution of this thesis is the introduction of a new color separation term into the energy function for background subtraction. This term models the fact that the foreground and background regions tend to have different colors; introducing it encourages the foreground and background regions not to share the same colors. The color separation term can help to correct mistakes made by the data term when the motion information is not entirely reliable. We model the color separation term with the L1 distance, using the technique developed by Tang et al. Color clustering is used to efficiently model the color space. Our energy function can be globally and efficiently optimized with graph cuts, which is a very effective method for solving binary energy minimization problems arising in computer vision. To prove the effectiveness of including the color separation term in the energy function for background subtraction, we conduct experiments on standard datasets. Our model depends on color clustering and background modeling, and there are many possible ways to perform both. We evaluate several different combinations of popular color clustering and background modeling approaches. We find that incorporating spatial and motion information as part of the color clustering process can further improve the results. The best performance of our approach is 97%, compared to 90% for the approach without color separation.
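The three-term energy can be made concrete on a toy labeling. In the L1 formulation of Tang et al., maximizing the L1 distance between the foreground and background color histograms reduces (up to a constant) to charging each color bin the smaller of its foreground and background pixel counts, so shared colors are penalized. The sketch below only evaluates such an energy for a given labeling; the actual thesis optimizes it globally with graph cuts, and all names and weights here are illustrative.

```python
def bg_energy(labels, data_cost, colors, lam_smooth, lam_color, edges):
    """labels[p] in {0: background, 1: foreground};
    data_cost[p][l]: motion-based cost of label l at pixel p;
    colors[p]: color-cluster id of pixel p;
    edges: neighbouring pixel pairs for the smoothness term."""
    # data term: how well each label fits the per-pixel motion evidence
    e = sum(data_cost[p][labels[p]] for p in range(len(labels)))
    # smoothness term: penalize label changes across neighbouring pixels
    e += lam_smooth * sum(labels[p] != labels[q] for p, q in edges)
    # L1 color-separation term: each color bin pays the smaller of its
    # foreground / background counts, discouraging shared colors
    bins = {}
    for p, c in enumerate(colors):
        fg, bg = bins.get(c, (0, 0))
        bins[c] = (fg + labels[p], bg + 1 - labels[p])
    e += lam_color * sum(min(fg, bg) for fg, bg in bins.values())
    return e
```

In a 4-pixel example where the two red pixels are moving and the two blue pixels are static, the "clean" labeling pays only one smoothness boundary, while mislabeling one red pixel additionally pays both a data penalty and a color-separation penalty, which is exactly how the color term corrects unreliable motion evidence.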

    Methods for multiloop calculations and Higgs boson production at the LHC

    The main topics of this thesis are Higgs boson production and the program package TopoID. We calculated results for all collinear counterterms up to N3LO. For a particular class of triple-real integrals we obtained results with full dependence on x. TopoID is designed to be a process-independent tool for topology identification, FORM code generation, and finding non-trivial relations among integrals that remain after applying a reduction algorithm.

    Shape from inconsistent silhouette: Reconstruction of objects in the presence of segmentation and camera calibration error

    Silhouettes are useful features for reconstructing the shape of an object when the object is textureless or the shape classes of objects are unknown. In this dissertation, we explore the problem of reconstructing the shape of challenging objects from silhouettes under real-world conditions such as the presence of silhouette and camera calibration error. This problem is called the Shape from Inconsistent Silhouettes problem. A pseudo-Boolean cost function is formalized for this problem, which penalizes differences between the reconstruction images and the silhouette images, and the Shape from Inconsistent Silhouettes problem is cast as a pseudo-Boolean minimization problem. We propose a memory- and time-efficient method to find a local minimum solution to the optimization problem, including heuristics that take into account the geometric nature of the problem. Our methods are demonstrated on a variety of challenging objects including humans and large, thin objects. We also compare our methods to the state of the art by generating reconstructions of synthetic objects with induced error. We also propose a method for correcting camera calibration error given silhouettes with segmentation error. Unlike other existing methods, our method allows camera calibration error to be corrected without camera placement constraints and allows for silhouette segmentation error. This is accomplished by a modified Iterative Closest Point algorithm which minimizes the difference between an initial reconstruction and the input silhouettes. We characterize the degree of error that can be corrected with synthetic datasets with increasing error, and demonstrate the ability of the camera calibration correction method to improve reconstruction quality in several challenging real-world datasets.
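A pseudo-Boolean cost of the general kind described above takes binary voxel occupancies as its variables and counts pixels where the rendered silhouette disagrees with the observed one. The sketch below evaluates such a disagreement cost for one candidate occupancy; it is a simplified illustration under our own data layout, not the dissertation's cost function or its minimization heuristics.

```python
def silhouette_cost(occ, proj, sils):
    """occ: dict voxel -> 0/1 occupancy (the pseudo-Boolean variables);
    proj[v]: list of (view, pixel) pairs that voxel v projects onto;
    sils[view][pixel]: observed binary silhouette values.
    A pixel renders as foreground if any occupied voxel projects to it;
    the cost counts pixels where rendering and observation disagree."""
    rendered = {view: set() for view in sils}
    for v, x in occ.items():
        if x:
            for view, pix in proj[v]:
                rendered[view].add(pix)
    cost = 0
    for view, sil in sils.items():
        for pix, s in sil.items():
            cost += int((pix in rendered[view]) != bool(s))
    return cost
```

Because calibration error shifts the (view, pixel) projections and segmentation error corrupts the observed silhouettes, a perfect zero-cost occupancy generally does not exist, which is why the inconsistent-silhouette setting calls for local minimization of this cost rather than classic visual-hull intersection.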