
    A Study on New Models, Algorithms, and Interpretations for Deblurring in Dynamic Environments

    Thesis (Ph.D.) -- Graduate School of Seoul National University: Dept. of Electrical and Computer Engineering, August 2016. Advisor: Kyoung Mu Lee. Blurring artifacts are the most common flaws in photographs. To remove these artifacts, many deblurring methods that restore sharp images from blurry ones have been studied extensively in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scene is static, so much work remains to be done. In particular, these conventional methods fail on blurry images captured in dynamic environments, whose spatially varying blurs arise from sources such as camera shake (including out-of-plane motion), moving objects, and depth variation. The deblurring problem is therefore substantially harder for dynamic scenes. This dissertation addresses the deblurring problem for general dynamic scenes and introduces new solutions that remove spatially varying blurs, unlike conventional methods built on the static-scene assumption. Three kinds of dynamic scene deblurring methods are proposed to achieve this goal, based on (1) segmentation, (2) a sharp exemplar, and (3) kernel-parametrization. The proposed approaches progress from segment-wise to pixel-wise, ultimately handling pixel-wise varying general blurs. First, the segmentation-based deblurring method jointly estimates the latent image, multiple different kernels, and the associated segments. Thanks to this joint approach, the segmentation-based method achieves an accurate blur kernel within each segment, removes segment-wise varying blurs, and reduces the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which utilizes a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, which cannot handle small or texture-less segments. Lastly, the deblurring method using kernel-parametrization approximates the locally varying kernel as linear using motion flows; it is thus generally applicable to removing pixel-wise varying blurs, and it estimates the latent image and the motion flow at the same time.
    With the proposed methods, significantly improved deblurring quality is achieved, and intensive experimental evaluations demonstrate the superiority of the proposed methods on dynamic scenes where state-of-the-art methods fail to deblur.

    Contents:
    Chapter 1 Introduction
    Chapter 2 Image Deblurring with Segmentation
      2.1 Introduction and Related Work
      2.2 Segmentation-based Dynamic Scene Deblurring Model
        2.2.1 Adaptive blur model selection
        2.2.2 Regularization
      2.3 Optimization
        2.3.1 Sharp image restoration
        2.3.2 Weight estimation
        2.3.3 Kernel estimation
        2.3.4 Overall procedure
      2.4 Experiments
      2.5 Summary
    Chapter 3 Image Deblurring with Exemplar
      3.1 Introduction and Related Work
      3.2 Method Overview
      3.3 Stage I: Exemplar Acquisition
        3.3.1 Sharp image acquisition and preprocessing
        3.3.2 Exemplar from blur-aware optical flow estimation
      3.4 Stage II: Exemplar-based Deblurring
        3.4.1 Exemplar-based latent image restoration
        3.4.2 Motion-aware segmentation
        3.4.3 Robust kernel estimation
        3.4.4 Unified energy model and optimization
      3.5 Stage III: Post-processing and Refinement
      3.6 Experiments
      3.7 Summary
    Chapter 4 Image Deblurring with Kernel-Parametrization
      4.1 Introduction and Related Work
      4.2 Preliminary
      4.3 Proposed Method
        4.3.1 Image-statistics-guided motion
        4.3.2 Adaptive variational deblurring model
      4.4 Optimization
        4.4.1 Motion estimation
        4.4.2 Latent image restoration
        4.4.3 Kernel re-initialization
      4.5 Experiments
      4.6 Summary
    Chapter 5 Video Deblurring with Kernel-Parametrization
      5.1 Introduction and Related Work
      5.2 Generalized Video Deblurring
        5.2.1 A new data model based on kernel-parametrization
        5.2.2 A new optical flow constraint and temporal regularization
        5.2.3 Spatial regularization
      5.3 Optimization Framework
        5.3.1 Sharp video restoration
        5.3.2 Optical flows estimation
        5.3.3 Defocus blur map estimation
      5.4 Implementation Details
        5.4.1 Initialization and duty cycle estimation
        5.4.2 Occlusion detection and refinement
      5.5 Motion Blur Dataset
        5.5.1 Dataset generation
      5.6 Experiments
      5.7 Summary
    Chapter 6 Conclusion
    Bibliography
    Abstract (in Korean)
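
    A rough sketch of the kernel-parametrization idea of Chapters 4-5: each pixel's blur kernel is approximated as a line segment along a per-pixel motion flow (u, v). The Python below is an illustration written for this listing, not the dissertation's code; it implements only the forward blur model that such methods invert while jointly estimating the flow and the latent image.

        # Pixel-wise linear blur parameterized by a motion-flow field (illustrative).
        import numpy as np

        def linear_blur(image, flow, n_samples=15):
            """Blur `image` (H x W) with per-pixel linear kernels.

            flow[y, x] = (u, v): the kernel at (y, x) is a line segment
            from -(u, v)/2 to +(u, v)/2, i.e., blur along the motion path.
            """
            h, w = image.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
            acc = np.zeros_like(image, dtype=np.float64)
            for t in np.linspace(-0.5, 0.5, n_samples):
                # Sample along the motion path (nearest neighbor for brevity).
                sx = np.clip(np.rint(xs + t * flow[..., 0]), 0, w - 1).astype(int)
                sy = np.clip(np.rint(ys + t * flow[..., 1]), 0, h - 1).astype(int)
                acc += image[sy, sx]
            return acc / n_samples

        # Usage: a horizontal blur whose length grows across the image,
        # i.e., a spatially varying blur no single global kernel captures.
        img = np.random.rand(64, 64)
        flow = np.zeros((64, 64, 2))
        flow[..., 0] = np.linspace(0.0, 9.0, 64)[None, :]
        blurred = linear_blur(img, flow)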

    Data-Driven Image Restoration

    Every day many images are taken by digital cameras, and people demand visually accurate and pleasing results. Noise and blur degrade images captured by modern cameras, and high-level vision tasks (such as segmentation, recognition, and tracking) require high-quality images. Therefore, image restoration, specifically image deblurring and image denoising, is a critical preprocessing step. A fundamental problem in image deblurring is to reliably recover the distinct spatial frequencies that have been suppressed by the blur kernel. Existing image deblurring techniques often rely on generic image priors that only help recover part of the frequency spectrum, such as the frequencies near the high end. To this end, we pose the following specific questions: (i) Does class-specific information offer an advantage over existing generic priors for image quality restoration? (ii) If a class-specific prior exists, how should it be encoded into a deblurring framework to recover attenuated image frequencies? Throughout this work, we devise a class-specific prior based on band-pass filter responses and incorporate it into a deblurring strategy. Specifically, we show that the subspace of band-pass filtered images and their intensity distributions serve as useful priors for recovering image frequencies. Next, we present a novel image denoising algorithm that uses an external, category-specific image database. In contrast to existing noisy-image restoration algorithms, our method selects clean image “support patches” similar to the noisy patch from an external database. We employ a content-adaptive distribution model for each patch, deriving the parameters of the distribution from the support patches. Our objective function is composed of a Gaussian fidelity term that imposes category-specific information and a low-rank term that encourages similarity between the noisy and support patches in a robust manner. Finally, we propose to learn a fully-convolutional network model that consists of a Chain of Identity Mapping Modules (CIMM) for image denoising. The CIMM structure possesses two distinctive features that are important for the noise-removal task. First, each residual unit employs identity mappings as the skip connections and receives pre-activated input, preserving the gradient magnitude propagated in both the forward and backward directions. Second, by utilizing dilated kernels for the convolution layers in the residual branch, each neuron in the last convolution layer of each module observes the full receptive field of the first layer.
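
    A loose PyTorch sketch of the CIMM module design described above: pre-activated residual units whose skip connections are pure identities and whose branches use dilated convolutions. The channel count, depth, dilation, and residual-prediction tail are illustrative assumptions, not the paper's configuration.

        # Identity-mapping residual unit with dilated convolutions (illustrative).
        import torch
        import torch.nn as nn

        class PreActDilatedUnit(nn.Module):
            def __init__(self, channels=64, dilation=2):
                super().__init__()
                # Pre-activation: ReLU precedes each convolution; the skip is a
                # pure identity, so gradient magnitudes pass through unchanged.
                self.branch = nn.Sequential(
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
                )

            def forward(self, x):
                return x + self.branch(x)  # identity skip connection

        class DenoiserSketch(nn.Module):
            def __init__(self, channels=64, n_units=4):
                super().__init__()
                self.head = nn.Conv2d(1, channels, 3, padding=1)
                self.units = nn.Sequential(*(PreActDilatedUnit(channels) for _ in range(n_units)))
                self.tail = nn.Conv2d(channels, 1, 3, padding=1)

            def forward(self, noisy):
                # Predict the noise and subtract it (residual learning).
                return noisy - self.tail(self.units(self.head(noisy)))

        x = torch.randn(1, 1, 64, 64)       # one noisy grayscale patch
        print(DenoiserSketch()(x).shape)    # torch.Size([1, 1, 64, 64])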

    Competent Program Evolution, Doctoral Dissertation, December 2006

    Heuristic optimization methods are adaptive when they sample problem solutions based on knowledge of the search space gathered from past sampling. Recently, competent evolutionary optimization methods have been developed that adapt via probabilistic modeling of the search space. However, their effectiveness requires the existence of a compact problem decomposition in terms of prespecified solution parameters. How can we use these techniques to effectively and reliably solve program learning problems, given that program spaces will rarely have compact decompositions? One method is to manually build a problem-specific representation that is more tractable than the general space. But can this process be automated? My thesis is that the properties of programs and program spaces can be leveraged as inductive bias to reduce the burden of manual representation-building, leading to competent program evolution. The central contributions of this dissertation are a synthesis of the requirements for competent program evolution and the design of a procedure, meta-optimizing semantic evolutionary search (MOSES), that meets these requirements. In support of my thesis, experimental results are provided to analyze and verify the effectiveness of MOSES, demonstrating scalability and real-world applicability.

    Physics of Brain Folding

    The human brain is characterized by its folded structure, being the most folded brain among all primates. The process by which these folds emerge, called gyrogenesis, is still not fully understood. The brain is divided into an outer region, called gray matter, which grows at a faster rate than the inner region, called white matter. It is hypothesized that this imbalance in growth -- and the mechanical stress it generates -- drives gyrogenesis, which is the focus of this thesis. Finite element simulations are performed in which the brain is modeled as a non-linear elastic solid and growth is introduced via a multiplicative decomposition. A small section of the brain, represented by a rectangular slab, is analyzed. This slab is divided into a thin, hard upper layer mimicking the gray matter and a soft substrate mimicking the white matter. The top layer is then grown tangentially, while the underlying substrate does not grow. JuFold, the software developed to perform these simulations, is introduced and its design is explained. An overview of its capabilities and examples of simulation possibilities are shown. Additionally, one application of JuFold in the realm of materials science, which led to a patent, showcases its flexibility. Simulations are first performed by minimizing the elastic energy, corresponding to the slow-growth regime. Systems with homogeneous cortices are studied, in which growth initially compresses and then buckles the cortical region, generating wavy patterns with wavelength proportional to cortical thickness. After buckling, the sulcal regions (the valleys of the system) are thinner than the gyral regions (the hills). Introducing thickness inhomogeneities along the cortex leads to new, localized configurations, which depend strongly not only on the thickness of the region but also on its gradient. Furthermore, cortical landmarks appear sequentially, consistent with the hierarchical folding observed during gestation. A linear stability theory is developed based on thin-plate theory and is compared with homogeneous and inhomogeneous systems. Next, we turn to more physically stringent dynamic simulations. For slow growth rates and time-constant thickness, the results obtained through energy minimization are recovered, justifying previous literature. For faster growth, an overshoot of the wavenumber and a broad wavenumber spectrum are observed immediately after buckling. After a relaxation period, in which the average wavenumber decreases and the wavenumber spectrum narrows, the system stabilizes into a finite spectrum whose average wavelength is smaller than that expected from energy-minimization arguments. Cortical inhomogeneities are further explored in this new regime. Systems with inhomogeneous cortical thickness are revisited, with effects similar to the homogeneous cortex (i.e., results are consistent between the slow-growth and quasistatic regimes, and overshoot is observed in the fast-growth regimes). Systems with inhomogeneous cortical growth are simulated, with this new type of inhomogeneity inducing fissuration and localized folding. The interplay between these two inhomogeneities is studied, and their interaction is shown to be nonlinear, with each inhomogeneity type inhibiting the folding effects of the other. That is, the folding profile of each individual region emerges as a result of the local inhomogeneity, and the system does not display an intermediate behavior. Finally, these results are compared with an extended linear stability theory. Taken together, our simulations and analytical theory expose new phenomena predicted by the buckling hypothesis for folding and reveal a series of new avenues that could give rise to important cortical features in the mammalian brain, especially those related to higher-order folding.
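
    For intuition about the "wavelength proportional to cortical thickness" finding, the snippet below evaluates the classical buckling estimate for a stiff film on a compliant substrate, lambda = 2*pi*h*(mu_f / (3*mu_s))^(1/3). This textbook small-strain formula and the parameter values are stand-ins for illustration, not the thesis's extended linear stability theory.

        # Classical wrinkling wavelength of a stiff layer (gray matter) on a
        # soft substrate (white matter); illustrates lambda proportional to h.
        import math

        def wrinkle_wavelength(h, mu_film, mu_substrate):
            """Critical wavelength of the first buckling mode (linear estimate)."""
            return 2.0 * math.pi * h * (mu_film / (3.0 * mu_substrate)) ** (1.0 / 3.0)

        for h_mm in (1.0, 2.0, 3.0):  # cortical thickness in mm (made-up values)
            lam = wrinkle_wavelength(h_mm, mu_film=3.0, mu_substrate=1.0)
            print(f"h = {h_mm:.1f} mm  ->  lambda = {lam:.2f} mm")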

    Optimizing Information Gathering for Environmental Monitoring Applications

    The goal of environmental monitoring is to collect information from the environment and to generate an accurate model for a specific phenomenon of interest. Environmental monitoring applications can be divided into two macro-areas that have different strategies for acquiring data from the environment. On one hand, fixed sensors deployed in the environment allow constant monitoring and a steady flow of information from a predetermined set of locations in space. On the other hand, mobile platforms allow the sensing locations to be chosen adaptively and rapidly, based on need. For some applications (e.g., water monitoring) this can significantly reduce the costs associated with monitoring compared with classical analysis by human operators. Both cases, however, share a common problem: the data collection process must respect limited resources, and the key problem is to choose where to perform observations (measurements) so as to most effectively acquire information from the environment and decrease the uncertainty about the analyzed phenomena. We can generalize this concept under the name of information gathering. In general, maximizing the information that we can obtain from the environment is an NP-hard problem, so optimizing the selection of the sampling locations is crucial in this context. For example, for mobile sensors, reducing uncertainty about a physical process requires computing sensing trajectories constrained by the limited resources available, such as the battery lifetime of the platform or the computational power available on board; this problem is usually referred to as Informative Path Planning (IPP). In the other case, observation with a network of fixed sensors requires deciding beforehand the specific locations where the sensors have to be deployed; this selection of a limited set of informative locations is usually performed by solving a combinatorial optimization problem that models the information gathering process. This thesis focuses on the scenarios above. Specifically, we investigate diverse problems and propose innovative algorithms and heuristics for optimizing information gathering in environmental monitoring applications, for deployments of both mobile and fixed sensors. Moreover, we also investigate the possibility of using a quantum computation approach in the context of information-gathering optimization.
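
    A common mental model for such selection heuristics (not necessarily the algorithms proposed in the thesis) is greedy maximization: when the information-gain objective is monotone submodular, as mutual information often is under Gaussian-process models, greedily adding the location with the best marginal gain is within a factor (1 - 1/e) of optimal. A toy sketch with a stand-in gain function:

        # Greedy sensor placement under a toy diminishing-returns objective.
        import itertools
        import math

        def marginal_gain(selected, candidate, positions):
            """Stand-in score: value decays as the candidate nears chosen sites."""
            if not selected:
                return 1.0
            d_min = min(math.dist(positions[candidate], positions[s]) for s in selected)
            return 1.0 - math.exp(-d_min)

        def greedy_placement(positions, k):
            selected, remaining = [], set(range(len(positions)))
            for _ in range(k):
                best = max(remaining, key=lambda c: marginal_gain(selected, c, positions))
                selected.append(best)
                remaining.remove(best)
            return selected

        grid = [(x, y) for x, y in itertools.product(range(5), repeat=2)]
        print(greedy_placement(grid, k=4))  # 4 well-spread sensing locations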

    Decentralized algorithm of dynamic task allocation for a swarm of homogeneous robots

    Current trends in the robotics field have led to the development of large-scale swarm robot systems, which are deployed for complex missions. The robots in these systems must communicate and interact with each other and with their environment to process complex tasks. A major problem for this trend is a poor task-planning mechanism, which includes both task decomposition and task allocation. Task allocation means distributing and scheduling a set of tasks to be accomplished by a group of robots so as to minimize cost while satisfying operational constraints. The task allocation mechanism must be run by each robot in the swarm whenever it senses a change in the environment, to make sure the robot is assigned to the most appropriate task; if not, the robot should reassign itself to its nearest task. The main contribution of this thesis is to maximize the overall efficiency of the system by minimizing the total time needed to solve the dynamic task allocation problem. Near-optimal allocation schemes are found using a novel hybrid decentralized algorithm for dynamic task allocation in a swarm of homogeneous robots, where the number of tasks exceeds the number of robots in the system. This hybrid approach combines the Simulated Annealing (SA) optimization technique with the Discrete Particle Swarm Optimization (DPSO) technique. Another major contribution of this thesis is the formulation of the dynamic task allocation equations for homogeneous swarm robotics using integer linear programming, introducing the cost function and constraints for the given problem. The DPSO and SA algorithms are then developed to accomplish the task in minimal time. Simulation is first implemented for only two test cases in MATLAB. Simulation results show that PSO exhibits smaller and more stable convergence characteristics, while the SA technique yields better-quality solutions. After developing the hybrid algorithm, which combines SA with PSO, the simulation instances are extended to fifteen more test cases with different swarm dimensions to ensure the robustness and scalability of the proposed algorithm over the traditional PSO and SA optimization techniques. Based on the simulation results, the hybrid DPSO/SA approach proves more efficient, in both small and large swarm sizes, than traditional algorithms such as Particle Swarm Optimization and Simulated Annealing. The simulation results also demonstrate that the proposed approach can dislodge a state from a local minimum and guide it to the global minimum. Thus, the proposed hybrid DPSO/SA algorithm combines the high solution quality of SA with the fast convergence of PSO. A parameter-selection process for the hybrid algorithm is also proposed, in an attempt to enhance the algorithm's efficiency, because heuristic optimization techniques are very sensitive to parameter changes. In addition, verification is performed to ensure the effectiveness of the proposed algorithm by comparing it with the results of an exact solver, the Hungarian algorithm, in terms of computational time, number of iterations, and quality of solution. This comparison shows that the proposed algorithm gives superior performance in almost all swarm sizes, with both stable and small execution times. However, it also shows that the cost values of the proposed hybrid algorithm (the distance traveled by the robots to perform the tasks) are larger than those of the Hungarian algorithm, while the execution time of the hybrid algorithm is much better. Finally, the proposed algorithm is implemented and extensively tested in a real experiment using a swarm of 4 robots; the robots used in the real experiment are Elisa-III robots.
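
    A hedged sketch of the simulated-annealing half of the hybrid (the DPSO component and the ILP formulation are omitted): minimize total robot-to-task travel distance, occasionally accepting uphill moves so the search can leave local minima. The move, cooling schedule, and parameters below are illustrative, not the thesis's tuned values.

        # Simulated annealing for assigning each robot one task (more tasks
        # than robots, as in the thesis setting); cost = total travel distance.
        import math
        import random

        def total_distance(assignment, robots, tasks):
            return sum(math.dist(robots[r], tasks[t]) for r, t in enumerate(assignment))

        def anneal(robots, tasks, t0=10.0, cooling=0.995, steps=5000, seed=0):
            rng = random.Random(seed)
            current = rng.sample(range(len(tasks)), len(robots))
            best, temp = current[:], t0
            for _ in range(steps):
                cand = current[:]
                # Move: reassign one robot to a random currently unused task.
                unused = sorted(set(range(len(tasks))) - set(cand))
                cand[rng.randrange(len(robots))] = rng.choice(unused)
                delta = total_distance(cand, robots, tasks) - total_distance(current, robots, tasks)
                # Always accept improvements; accept uphill moves with
                # Boltzmann probability, which lets SA escape local minima.
                if delta < 0 or rng.random() < math.exp(-delta / temp):
                    current = cand
                    if total_distance(current, robots, tasks) < total_distance(best, robots, tasks):
                        best = current[:]
                temp *= cooling
            return best

        robots = [(0, 0), (5, 5), (10, 0)]
        tasks = [(1, 1), (9, 1), (4, 6), (2, 8), (7, 7)]
        print(anneal(robots, tasks))  # task index chosen for each robot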

    Proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering

    These are the online proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering (WITMSE), which was held in the Trippenhuis, Amsterdam, in August 2012.

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching.

    The first problem is the automatic classification of teeth into incisors, canines, premolars, and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four teeth classes, and then assign each tooth the class that achieves the least energy discrepancy between the novel tooth and its approximation. We exploit teeth-neighborhood rules in validating tooth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% tooth-labeling accuracy on a large test dataset of bitewing films.

    Because dental radiographic films capture projections of distinct teeth, and often multiple views of each tooth, in the second problem we look for a scheme that exploits teeth multiplicity to achieve more reliable match decisions when we compare the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of teeth multiplicity to improve teeth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%.

    In the third problem we study the performance limits of dental identification due to the capabilities of the features. We consider two types of features used in dental identification, namely teeth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
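
    A minimal sketch of the eigenspace step in the first problem: fit one PCA subspace per tooth class and assign a query tooth to the class with the least reconstruction energy. The data, image size, and subspace dimension below are synthetic placeholders.

        # Per-class PCA subspaces; classify by least reconstruction energy.
        import numpy as np

        def fit_subspace(samples, k=8):
            """samples: (n, d) flattened tooth images of one class."""
            mean = samples.mean(axis=0)
            # Rows of vt are the leading eigenvectors of the class subspace.
            _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
            return mean, vt[:k]

        def reconstruction_error(x, mean, basis):
            recon = mean + basis.T @ (basis @ (x - mean))
            return float(np.sum((x - recon) ** 2))  # energy discrepancy

        def classify(x, subspaces):
            errs = {c: reconstruction_error(x, m, b) for c, (m, b) in subspaces.items()}
            return min(errs, key=errs.get)

        rng = np.random.default_rng(0)
        classes = ["incisor", "canine", "premolar", "molar"]
        train = {c: rng.normal(loc=i, size=(40, 256)) for i, c in enumerate(classes)}
        subspaces = {c: fit_subspace(train[c]) for c in classes}
        query = rng.normal(loc=2, size=256)   # nearest to the "premolar" class
        print(classify(query, subspaces))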

    Using optimization models to obtain solutions in classification and clustering techniques

    Advisor: Washington Alves de Oliveira. Dissertation (Master's) -- Universidade Estadual de Campinas, Faculdade de Ciências Aplicadas. This dissertation studies techniques for handling large-scale datasets in order to extract representative information through mathematical programming. The structural patterns of the data provide pieces of information that can be used to classify and cluster them through the optimal solution of specific optimization problems. The techniques used can be confronted with machine learning approaches to supply new numerical possibilities of resolution. Computational tests conducted on two case studies with real data (practical experiments) validate this research. The analyses are carried out on the well-known database for the identification of breast cancer tumors, which have either a malignant or a benign diagnosis, and also on a bovine animal database containing physical and breed characteristics of each animal but with no previously known pattern. A binary classification based on a goal programming formulation is used for the first case study. In the study conducted on the characteristics of the bovine animals, the interest is to identify patterns among the different animals by grouping them from the solutions of an integer linear optimization model. The computational results are studied with a set of descriptive statistical procedures to validate the proposed study. Master's in Production and Manufacturing Engineering, Operational Research and Process Management; 019/2012; Funcam
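
    A small, hedged illustration of binary classification via goal programming, in a generic linear-program form that may differ from the dissertation's model: choose a hyperplane so that each sample meets its separation goal, minimizing the summed deviations by which goals are missed.

        # Goal programming as an LP: minimize sum(d_i) subject to
        # y_i * (w . x_i + b) >= 1 - d_i  and  d_i >= 0.
        import numpy as np
        from scipy.optimize import linprog

        def goal_programming_classifier(X, y):
            n, p = X.shape
            # Decision vector z = [w (p entries), b, d (n entries)].
            c = np.concatenate([np.zeros(p + 1), np.ones(n)])
            # Rewrite each goal as -y_i*(w.x_i + b) - d_i <= -1 for linprog.
            A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.eye(n)])
            b_ub = -np.ones(n)
            bounds = [(None, None)] * (p + 1) + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            return res.x[:p], res.x[p]   # (w, b)

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
        y = np.array([-1] * 20 + [1] * 20)
        w, b = goal_programming_classifier(X, y)
        print(np.mean(np.sign(X @ w + b) == y))   # training accuracy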