
    Hybrid Approaches for MRF Optimization: Combination of Stochastic and Deterministic Methods

    Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2014. Advisor: ģ“ź²½ė¬“.

    Markov Random Field (MRF) models are of fundamental importance in computer vision. Many vision problems, including stereo matching, segmentation, denoising, and inpainting, have been successfully formulated as MRF optimization, and numerous algorithms have been developed to solve them effectively. Although many of these algorithms produce good results on relatively easy problems, they remain unsatisfactory on more difficult MRF problems such as non-submodular energy functions, strongly coupled MRFs, and high-order clique potentials. This dissertation proposes several optimization methods built on one main idea: combining stochastic and deterministic optimization. Stochastic methods encourage exploration of the solution space, while deterministic methods enable more efficient exploitation; combining the two yields better solutions. Two stochastic methodologies are exploited as frameworks for this combination: Markov chain Monte Carlo (MCMC) and stochastic approximation. Within the MCMC framework, population-based MCMC (Pop-MCMC), MCMC with general deterministic algorithms (MCMC-GD), and fusion-move-driven MCMC (MCMC-F) are proposed. Although MCMC provides an elegant framework whose global convergence is provable, its convergence rate is slow. To overcome this, a population-based framework and combination with deterministic methods are used: exchanging information between samples enables global moves, which in turn leads to a faster mixing rate. From the optimization viewpoint, this means a lower energy state can be reached rapidly. The second methodology is stochastic approximation, in which the objective function for optimization is approximated in a stochastic way.
    To apply this approach to MRF optimization, a graph approximation scheme is proposed for approximating the energy function. This scheme alleviates the problems of non-submodularity and partial labeling. The stochastic approximation framework is combined with graph cuts, a highly efficient algorithm for easy MRF optimization problems; this combination yields fusion with graph approximation-based proposals (GA-fusion). Extensive experiments support that the proposed algorithms are effective across different classes of energy functions. They are applied to many computer vision applications, including stereo matching, photo montage, inpainting, image deconvolution, and texture restoration, and are further analyzed on synthetic MRF problems while varying both the difficulty of the problems and the parameters of each algorithm.

    Contents:
    1 Introduction
      1.1 Markov random field
        1.1.1 MRF and Gibbs distribution
        1.1.2 MAP estimation and energy minimization
        1.1.3 MRF formulation for computer vision problems
      1.2 Optimizing energy function
        1.2.1 Markov chain Monte Carlo
        1.2.2 Stochastic approximation
      1.3 Combination of stochastic and deterministic methods
      1.4 Outline of dissertation
    2 Population-based MCMC
      2.1 Introduction
      2.2 Related Works
        2.2.1 Swendsen-Wang Cuts
        2.2.2 Population-based MCMC
      2.3 Proposed Algorithm
      2.4 Experiments
        2.4.1 Segment-based stereo matching
        2.4.2 Parameter analysis
      2.5 Summary
    3 MCMC Combined with General Deterministic Methods
      3.1 Introduction
      3.2 Related works
      3.3 Proposed algorithm
        3.3.1 Population-based sampling framework for MCMC-GD
        3.3.2 Kernel design
      3.4 Experiments
        3.4.1 Analysis on synthetic MRF problems
        3.4.2 Results on real problems
        3.4.3 Alternative approach: parallel anchor generation
      3.5 Summary
    4 Fusion Move Driven MCMC
      4.1 Introduction
      4.2 Proposed algorithm
        4.2.1 Sampling-based optimization
        4.2.2 MCMC combined with fusion move
      4.3 Experiments
      4.4 Summary
    5 Fusion with Graph Approximation
      5.1 Introduction
      5.2 Related works
        5.2.1 Graph cuts-based move-making algorithm
        5.2.2 Proposals for fusion approach
      5.3 Proposed algorithm
        5.3.1 Stochastic approximation
        5.3.2 Graph approximation
        5.3.3 Overall algorithm
        5.3.4 Characteristics of approximated function
      5.4 Experiments
        5.4.1 Image deconvolution
        5.4.2 Binary texture restoration
        5.4.3 Analysis on synthetic problems
      5.5 Summary
    6 Conclusion
      6.1 Summary and contribution of the dissertation
      6.2 Future works
        6.2.1 MCMC without detailed balance
        6.2.2 Stochastic approximation for higher-order MRF model
    Bibliography
    Abstract (in Korean)
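    The explore-then-exploit idea in this abstract can be illustrated with a minimal sketch (not the dissertation's code): Metropolis sampling with cooling explores the label space of a toy binary pairwise MRF, then a deterministic ICM (iterated conditional modes) pass greedily refines the sample. The function names and parameters here are illustrative assumptions, not the author's API.

    ```python
    import numpy as np

    def local_delta(x, unary, lam, i, j):
        """Energy change from flipping binary label at pixel (i, j).

        unary[i, j, l] is the cost of assigning label l to pixel (i, j);
        the pairwise term penalizes disagreeing 4-neighbours with weight lam.
        """
        old, new = x[i, j], 1 - x[i, j]
        d = unary[i, j, new] - unary[i, j, old]
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < x.shape[0] and 0 <= nj < x.shape[1]:
                d += lam * (int(new != x[ni, nj]) - int(old != x[ni, nj]))
        return d

    def anneal_then_refine(unary, lam=1.0, sweeps=20, T0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, unary.shape[:2])
        # Stochastic phase: Metropolis sweeps with a cooling temperature
        # accept some uphill flips, exploring the solution space.
        for s in range(sweeps):
            T = T0 * (0.9 ** s)
            for i in range(x.shape[0]):
                for j in range(x.shape[1]):
                    d = local_delta(x, unary, lam, i, j)
                    if d < 0 or rng.random() < np.exp(-d / T):
                        x[i, j] = 1 - x[i, j]
        # Deterministic phase (ICM): greedy flips until no single-pixel
        # move lowers the energy, exploiting the basin the sampler found.
        improved = True
        while improved:
            improved = False
            for i in range(x.shape[0]):
                for j in range(x.shape[1]):
                    if local_delta(x, unary, lam, i, j) < 0:
                        x[i, j] = 1 - x[i, j]
                        improved = True
        return x
    ```

    The dissertation's actual contributions replace these toy components with population-based MCMC, fusion moves, and graph cuts, but the division of labour is the same: the stochastic phase supplies diverse candidates, the deterministic phase drives each to a low-energy state.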

    On the evolutionary optimisation of many conflicting objectives

    This inquiry explores the effectiveness of a class of modern evolutionary algorithms, represented by Non-dominated Sorting Genetic Algorithm (NSGA) components, for solving optimisation tasks with many conflicting objectives. Optimiser behaviour is assessed for a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal trade-off surface. Performance sweet-spots for both variation operators are observed to contract as the number of objectives is increased. Classical settings for recombination are shown to be suitable for small numbers of objectives but correspond to very poor performance for higher numbers of objectives, even when large population sizes are used. Explanations for this behaviour are offered via the concepts of dominance resistance and active diversity promotion.
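    The dominance-resistance effect mentioned above can be demonstrated in a few lines (an illustrative sketch, not the paper's code): as the number of objectives grows, almost every random point becomes non-dominated, so Pareto ranking loses its selective pressure.

    ```python
    import numpy as np

    def dominates(a, b):
        # a dominates b (minimisation): no worse in every objective,
        # strictly better in at least one.
        return np.all(a <= b) and np.any(a < b)

    def nondominated_fraction(n_points, n_obj, seed=0):
        """Fraction of uniform random points dominated by no other point."""
        rng = np.random.default_rng(seed)
        pts = rng.random((n_points, n_obj))
        count = 0
        for i in range(n_points):
            if not any(dominates(pts[j], pts[i])
                       for j in range(n_points) if j != i):
                count += 1
        return count / n_points
    ```

    With 200 random points, the non-dominated fraction is only a few percent for two objectives but approaches one for ten, which is why dominance-based selection alone struggles on many-objective problems.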

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed (either explicitly or implicitly) to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
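    The two-stage scheme described in the abstract (sample a subspace capturing the matrix's action, then factorize the compressed matrix deterministically) can be sketched in a few numpy lines. This is a minimal illustration in the style of the surveyed framework; the oversampling parameter p and the function name are assumptions for this sketch.

    ```python
    import numpy as np

    def randomized_svd(A, k, p=10, seed=0):
        """Approximate rank-k SVD of A via randomized range finding."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Stage A: sample the range of A with a Gaussian test matrix
        # (k target columns plus p oversampling columns), orthonormalize.
        Y = A @ rng.standard_normal((n, k + p))
        Q, _ = np.linalg.qr(Y)
        # Stage B: compress A to the captured subspace and take a small,
        # cheap deterministic SVD of the (k+p) x n reduced matrix.
        B = Q.T @ A
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k]
    ```

    For a matrix of exact rank k the reconstruction is essentially exact; for general matrices the oversampling p controls how closely the error tracks the optimal truncated SVD.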