
    Self-improving Algorithms for Coordinate-wise Maxima

    Full text link
    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have $n$ (unknown) independent distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_n$ of planar points. An input point set $(p_1, p_2, \ldots, p_n)$ is generated by taking an independent sample $p_i$ from each $\mathcal{D}_i$, so the input distribution $\mathcal{D}$ is the product $\prod_i \mathcal{D}_i$. A self-improving algorithm repeatedly gets input sets from the distribution $\mathcal{D}$ (which is a priori unknown) and tries to optimize its running time for $\mathcal{D}$. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution $\mathcal{D}$. Let $\mathrm{OPT}_{\mathcal{D}}$ denote the expected depth of an optimal linear comparison tree computing the maxima for distribution $\mathcal{D}$. Our algorithm eventually has an expected running time of $O(\mathrm{OPT}_{\mathcal{D}} + n)$, even though it did not know $\mathcal{D}$ to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution $\mathcal{D}$. Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures).
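
    The coordinate-wise maxima above are the points of the input set that are not dominated in both coordinates by another point. As a point of reference only (a minimal sketch, not the paper's self-improving algorithm), the classical O(n log n) sweep looks like this:

```python
def planar_maxima(points):
    """Coordinate-wise (Pareto) maxima of a planar point set:
    points not dominated in both x and y by another point."""
    # Sweep from right to left; a point is maximal iff its y exceeds
    # the largest y seen so far among points with larger x.
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    maxima, best_y = [], float("-inf")
    for x, y in pts:
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima

# Example: the maximal "staircase" of a small point set.
print(planar_maxima([(1, 3), (2, 2), (3, 1), (0, 4), (2.5, 0.5)]))
# [(3, 1), (2, 2), (1, 3), (0, 4)]
```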

    Self-improving Algorithms for Convex Hulls

    Full text link

    Fast Computation of Output-Sensitive Maxima in a Word RAM

    Full text link
    In this paper, we study the problem of computing the maxima of a set of n points in three dimensions with integer coordinates, and show that in a word RAM the maxima can be found in O(n log log_{n/h} n) deterministic time, where h is the output size. For h = n^{1−α} this is O(n log(1/α)). This improves the previous O(n log log h) time algorithm and can be considered surprising, since it gives a linear time algorithm when α > 0 is a constant, which is faster than the current best deterministic and randomized integer sorting algorithms. We observe that improving this running time is most likely difficult, since it requires breaking a number of important barriers, even if randomization is allowed. Additionally, we show that the same deterministic running time can be achieved for performing n point location queries in an arrangement of size h. Finally, our maxima result can be extended to higher dimensions by paying a log_{n/h} n factor penalty per dimension. This has further interesting consequences: for example, it preserves the linear running time when h ≤ n^{1−α} for a constant α > 0, and thus it shows that for a variety of input distributions the maxima can be computed in linear expected time without knowing the distribution.
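
    For contrast with the word-RAM bound above, a naive reference implementation of 3D maxima (a hedged sketch, not the paper's algorithm) sorts by one coordinate and tests dominance against the maxima found so far, which is O(nh) in the worst case:

```python
def maxima_3d(points):
    """Naive 3D maxima: a point is maximal iff no other point
    dominates it in all three coordinates."""
    # Process by decreasing x, so only already-seen points can dominate.
    pts = sorted(points, key=lambda p: -p[0])
    staircase, maxima = [], []   # staircase holds (y, z) of maxima seen so far
    for x, y, z in pts:
        if not any(py >= y and pz >= z for py, pz in staircase):
            maxima.append((x, y, z))
            staircase.append((y, z))
    return maxima
```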

    Automated Morphology Analysis of Nanoparticles

    Get PDF
    The functional properties of nanoparticles highly depend on the surface morphology of the particles, so precise measurements of a particle's morphology enable reliable characterization of the nanoparticle's properties. Obtaining the measurements requires image analysis of electron microscopic pictures of nanoparticles. Today's labor-intensive image analysis of electron micrographs of nanoparticles is a significant bottleneck for efficient material characterization. The objective of this dissertation is to develop automated morphology analysis methods. Morphology analysis comprises three tasks: separating individual particles from an agglomerate of overlapping nano-objects (image segmentation); inferring a particle's missing contours (shape inference); and ultimately, classifying the particles by shape based on their complete contours (shape classification). Two approaches are proposed in this dissertation: the divide-and-conquer approach and the convex shape analysis approach. The divide-and-conquer approach solves each task separately, taking less than one minute to complete the required analysis, even for the largest-sized micrograph. However, its ability to separate overlapping particles is limited: it can split only touching particles. The convex shape analysis approach solves shape inference and classification simultaneously for better accuracy, but it requires more computation time, ten minutes for the largest electron micrograph. However, with this small sacrifice of time efficiency, the second approach achieves separation far superior to that of the divide-and-conquer approach, and it handles the chain-linked structure of particle overlaps well. The capabilities of the two proposed methods cannot be substituted by generic image processing and bio-imaging methods. This is due to the unique features of electron microscopic pictures of nanoparticles, including special particle overlap structures and the large number of particles to be processed. Applying the proposed methods to real electron microscopic pictures showed that they were more capable of extracting the morphology information than the state-of-the-art methods. When nanoparticles do not have many overlaps, the divide-and-conquer approach performed adequately. When nanoparticles have many overlaps, forming chain-linked clusters, the convex shape analysis approach performed much better than the state-of-the-art alternatives in bio-imaging. The author believes that the capabilities of the proposed methods expedite the morphology characterization process of nanoparticles. The author further conjectures that, owing to their technical generality, the proposed methods could be a competent alternative to current methods for analyzing general overlapping convex-shaped objects other than nanoparticles.
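
    As a rough illustration of the kind of per-particle convex-shape measurements involved (a sketch with assumed names and thresholds, not the dissertation's method), OpenCV contour descriptors such as circularity and solidity can flag overlapping, non-convex clusters:

```python
import cv2
import numpy as np

def shape_descriptors(binary_mask, min_area=10):
    """Per-contour circularity and solidity for a binary particle mask.
    Assumes OpenCV >= 4, where findContours returns two values."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    descriptors = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        hull_area = cv2.contourArea(cv2.convexHull(c))
        circularity = 4 * np.pi * area / perimeter ** 2   # 1.0 for a perfect disc
        solidity = area / hull_area                       # < 1 hints at overlaps
        descriptors.append({"circularity": circularity, "solidity": solidity})
    return descriptors
```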

    Convex hulls of random walks

    Get PDF
    We study the convex hulls of random walks, establishing both law-of-large-numbers and weak-convergence statements for the perimeter length, diameter and shape of the hull. It should come as no surprise that the case where the random walk has drift and the zero-drift case behave differently. We make use of several different methods to gain a better insight into each case. Classical results such as Cauchy’s surface area formula, the law of large numbers and the central limit theorem give some preliminary law-of-large-numbers results. Considering the convergence of the random walk and then using the continuous mapping theorem leads to intuitive results in the case with drift, where, under the appropriate scaling, non-zero deterministic limits exist. In the zero-drift case the random limiting process, Brownian motion, provides insight into the behaviour of such a walk. We add to the literature in this area by establishing tighter bounds on the expected diameter of planar Brownian motion. The Brownian motion process is also useful for proving that the convex hull of the zero-drift random walk has no limiting shape. In the case with drift, a martingale difference method was used by Wade and Xu to prove a central limit theorem for the perimeter length. We use this framework to establish similar results for the diameter of the convex hull. Time-space processes give degenerate results here, so we use some geometric properties to further what is known about the variance of the functionals in this case and to prove a weak convergence statement for the diameter. During the study of the geometric properties, we show that only finitely often is there a single face in the convex minorant (or concave majorant) of such a walk.
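
    The quantities studied, the perimeter length and diameter of the hull with and without drift, are easy to explore numerically. A small Monte Carlo sketch (not from the thesis; assumes NumPy and SciPy):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def hull_perimeter_and_diameter(n_steps, drift):
    """Convex hull statistics of an n-step planar random walk."""
    steps = rng.standard_normal((n_steps, 2)) + drift
    walk = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
    hull = ConvexHull(walk)
    verts = walk[hull.vertices]
    perimeter = hull.area        # for 2D hulls, .area is the perimeter
    diameter = max(np.linalg.norm(p - q) for p in verts for q in verts)
    return perimeter, diameter

# With drift both quantities grow linearly in n; with zero drift like sqrt(n).
for drift in (np.array([0.0, 0.0]), np.array([0.5, 0.0])):
    print(drift, hull_perimeter_and_diameter(10_000, drift))
```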

    Global Optimisation for Energy System

    Get PDF
    The goal of global optimisation is to find globally optimal solutions, avoiding local optima and other stationary points. The aim of this thesis is to provide more efficient global optimisation tools for energy systems planning and operation. Due to the ongoing increase in the complexity and decentralisation of power systems, the use of advanced mathematical techniques that produce reliable solutions becomes necessary. The task of developing such methods is complicated by the fact that most energy-related problems are nonconvex, owing to the nonlinear Alternating Current Power Flow equations and the existence of discrete elements. In some cases, the computational challenges arising from the presence of non-convexities can be tackled by relaxing the definition of convexity and identifying classes of problems that can be solved to global optimality by polynomial-time algorithms. One such property is known as invexity and is defined by every stationary point of a problem being a global optimum. This thesis investigates how the relation between the objective function and the structure of the feasible set is connected to invexity, and presents necessary conditions for invexity in the general case, as well as necessary and sufficient conditions for problems with two degrees of freedom. However, nonconvex problems often do not possess any provable convenient properties, and specialised methods are necessary for providing global optimality guarantees. A widely used technique is solving convex relaxations in order to find a bound on the optimal solution. Semidefinite Programming relaxations can provide good-quality bounds, but they suffer from a lack of scalability. We tackle this issue by proposing an algorithm that combines decomposition and linearisation approaches. In addition to continuous non-convexities, many problems in energy systems model discrete decisions and are expressed as mixed-integer nonlinear programs (MINLPs). The formulation of a MINLP is of significant importance since it affects the quality of dual bounds. In this thesis we investigate algebraic characterisations of on/off constraints and develop a strengthened version of the Quadratic Convex relaxation of the Optimal Transmission Switching problem. All presented methods were implemented in the mathematical modelling and optimisation frameworks PowerTools and Gravity.
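
    As a toy illustration of the "solve a convex relaxation to bound the optimum" idea mentioned above (not the thesis's decomposition/linearisation algorithm; assumes CVXPY with its bundled conic solver), here is a semidefinite relaxation of a small nonconvex quadratic problem:

```python
import cvxpy as cp
import numpy as np

# Nonconvex problem: minimise x^T Q x subject to x_i^2 = 1 (on/off-like choices).
# Lift to X = x x^T, keep X positive semidefinite with unit diagonal,
# and drop the rank-one requirement to obtain a convex relaxation.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4))
Q = (Q + Q.T) / 2

X = cp.Variable((4, 4), PSD=True)
problem = cp.Problem(cp.Minimize(cp.trace(Q @ X)), [cp.diag(X) == 1])
problem.solve()

# The relaxation's optimal value is a lower bound on the nonconvex optimum.
print("SDP lower bound:", problem.value)
```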

    An attention model and its application in man-made scene interpretation

    No full text
    The ultimate aim of research into computer vision is designing a system which interprets its surrounding environment in a similar way to how a human does effortlessly. However, the state of technology is far from achieving such a goal. In this thesis, different components of a computer vision system designed for the task of interpreting man-made scenes, in particular images of buildings, are described. The flow of information in the proposed system is bottom-up, i.e., the image is first segmented into its meaningful components and subsequently the regions are labelled using a contextual classifier. Starting from simple observations concerning the human vision system and the gestalt laws of human perception, like the law of “good (simple) shape” and “perceptual grouping”, a blob detector is developed that identifies components in a 2D image. These components are convex regions of interest, with interest being defined as significant gradient magnitude content. An eye-tracking experiment is conducted, which shows that the regions identified by the blob detector correlate significantly with the regions which drive the attention of viewers. Having identified these blobs, it is postulated that a blob represents an object, linguistically identified with its own semantic name. In other words, a blob may contain a window, a door or a chimney in a building. These regions are used to identify and segment higher-order structures in a building, like the facade and window arrays, and also environmental regions like sky and ground. Because of inconsistency in the unary features of buildings, a contextual learning algorithm is used to classify the segmented regions. A model which learns spatial and topological relationships between different objects from a set of hand-labelled data is used. This model utilises this information in an MRF to achieve consistent labellings of new scenes.
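
    A minimal sketch of detecting regions with "significant gradient magnitude content" (thresholds and sizes here are assumptions, not the thesis's detector), using SciPy:

```python
import numpy as np
from scipy import ndimage

def gradient_blobs(image, rel_threshold=0.2, min_size=50):
    """Label connected regions of significant gradient magnitude."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)
    mask = magnitude > rel_threshold * magnitude.max()
    labels, n = ndimage.label(mask)
    # Discard connected components smaller than min_size pixels.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)
    blobs, n_blobs = ndimage.label(keep)
    return blobs, n_blobs
```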

    Colour morphological sieves for scale-space image processing

    Get PDF

    Unifying a Geometric Framework of Evolutionary Algorithms and Elementary Landscapes Theory

    Get PDF
    Evolutionary algorithms (EAs) are randomised general-purpose strategies, inspired by natural evolution, often used for finding (near) optimal solutions to problems in combinatorial optimisation. Over the last 50 years, many theoretical approaches in evolutionary computation have been developed to analyse the performance of EAs, design EAs or measure problem difficulty via fitness landscape analysis. An open challenge is to formally explain why a general class of EAs performs better, or worse, than others on a class of combinatorial problems across representations. However, the lack of a general unified theory of EAs and fitness landscapes, across problems and representations, makes it harder to characterise pairs of general classes of EAs and combinatorial problems for which good performance can be provably guaranteed. This thesis explores a unification between a geometric framework of EAs and elementary landscapes theory, tied to neither a specific representation nor a specific problem, with complementary strengths in the analysis of population-based EAs and combinatorial landscapes. This unification is organised around three essential aspects: the search space structure induced by crossovers, the search behaviour of population-based EAs, and the structure of fitness landscapes. First, this thesis builds a crossover classification to systematically compare crossovers in the geometric framework and elementary landscapes theory, revealing a shared general subclass of crossovers: geometric recombination P-structures, which covers well-known crossovers. The crossover classification is then extended to a general framework for axiomatically analysing the population behaviour induced by crossover classes on associated EAs. This shows that the shared general class of all EAs using geometric recombination P-structures, but no mutation, always performs the same abstract form of convex evolutionary search. Finally, this thesis characterises a class of globally convex combinatorial landscapes shared by the geometric framework and elementary landscapes theory: abstract convex elementary landscapes. It is formally explained why geometric recombination P-structure EAs can be expected to outperform random search on abstract convex elementary landscapes related to low-order graph Laplacian eigenvalues. Altogether, this thesis paves a way towards a general unified theory of EAs and combinatorial fitness landscapes.
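
    A tiny illustration of the geometric view of crossover (an informal sketch, not the thesis's formal framework): under the Hamming metric, every offspring of uniform crossover lies on a shortest path between its parents, so its distances to the two parents sum exactly to the parents' distance:

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def uniform_crossover(p1, p2):
    """Uniform crossover: each offspring bit is copied from one of the parents."""
    return [random.choice(pair) for pair in zip(p1, p2)]

random.seed(0)
p1 = [random.randint(0, 1) for _ in range(20)]
p2 = [random.randint(0, 1) for _ in range(20)]
child = uniform_crossover(p1, p2)

# Geometric property: the offspring lies in the metric segment between parents.
assert hamming(p1, child) + hamming(child, p2) == hamming(p1, p2)
```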

    3D Reconstruction using Active Illumination

    Get PDF
    In this thesis we present a pipeline for 3D model acquisition. Generating 3D models of real-world objects is an important task in computer vision, with many applications such as 3D design, archaeology, entertainment, and virtual or augmented reality. The contribution of this thesis is threefold: we propose a calibration procedure for the cameras, we describe an approach for capturing and processing photometric normals using gradient illuminations in the hardware set-up, and finally we present a multi-view photometric stereo 3D reconstruction method. In order to obtain accurate results using multi-view and photometric stereo reconstruction, the cameras are calibrated geometrically and photometrically. For acquiring data, a light stage is used; this is a hardware set-up that allows the illumination to be controlled during acquisition. The procedure used to generate appropriate illuminations and to process the acquired data to obtain accurate photometric normals is described. The core of the pipeline is a multi-view photometric stereo reconstruction method. In this method, we first generate a sparse reconstruction using the acquired images and computed normals. In the second step, the information from the normal maps is used to obtain a dense reconstruction of an object’s surface. Finally, the reconstructed surface is filtered to remove artifacts introduced by the dense reconstruction step.
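
    A compact sketch of the textbook Lambertian photometric stereo step (the least-squares version, not the thesis's full calibrated gradient-illumination pipeline), recovering per-pixel normals and albedo from images taken under known light directions:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classic Lambertian photometric stereo.
    intensities: (k, n) pixel intensities observed under k lights
    light_dirs:  (k, 3) unit light directions
    Returns per-pixel unit normals (n, 3) and albedo (n,)."""
    # Solve light_dirs @ g = intensities in the least-squares sense,
    # where g = albedo * normal for each pixel.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    g = g.T                                   # shape (n, 3)
    albedo = np.linalg.norm(g, axis=1)
    normals = g / np.maximum(albedo[:, None], 1e-12)
    return normals, albedo
```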