39 research outputs found

    Convex Hulls, Triangulations, and Voronoi Diagrams of Planar Point Sets on the Congested Clique

    Full text link
    We consider geometric problems on planar n²-point sets in the congested clique model. Initially, each node in the n-clique network holds a batch of n distinct points in the Euclidean plane given by O(log n)-bit coordinates. In each round, each node can send a distinct O(log n)-bit message to each other node in the clique and perform unlimited local computations. We show that the convex hull of the input n²-point set can be constructed in O(min{h, log n}) rounds, where h is the size of the hull, on the congested clique. We also show that a triangulation of the input n²-point set can be constructed in O(log² n) rounds on the congested clique. Finally, we demonstrate that the Voronoi diagram of n² points with O(log n)-bit coordinates drawn uniformly at random from a unit square can be computed within the square with high probability in O(1) rounds on the congested clique. Comment: 17 pages, 7 figures
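    As a point of reference for the local computation each node performs on its own batch of points, the following is a standard convex hull routine (Andrew's monotone chain); it is an illustrative sketch, not the paper's congested-clique protocol, whose contribution is the coordination of such local steps across rounds.

```python
# Illustrative local step: Andrew's monotone chain convex hull, O(n log n).
# This is standard textbook code, not the paper's distributed algorithm.

def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower chain left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper chain right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# the interior point (1, 1) is discarded
```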

    Study of optimal shapes for lightweight material design

    Get PDF
    This final-degree project investigates the connections between the geometrical and the physical domains in order to design optimal 2D microstructures with desired properties. The link between the two domains is established by means of the crystallographic point groups, which relate the topology of the minimum-volume unit and its symmetries to the elastic tensor. Two pre-processing variables therefore play a determining role on the way to the optimal topology: the shape of the mesh and the symmetries of the material distribution inside it. For this reason, this study implements a shape generator and unit-cell meshing algorithm and uses a topology optimization code to distribute the material geometrically inside the unit cells so as to obtain the desired elastic tensor (resolution of the inverse problem) while minimizing the amount of material used. To obtain the desired material properties, the study evaluates the capacity of the topology optimizer to generate the geometric symmetries in the microstructure that guarantee the physical symmetries required by the design target tensor. The study therefore comprises a theoretical review of topology optimization, crystallography, and geometric and tensor symmetries; the development of the structure and operation of the mesh generator code; and a practical study of the optimizer's capacity to obtain the tensors designed with the selected lattice topologies. The essential organizational concepts and the main differences between the programming paradigm used in the meshing algorithm, namely object-oriented programming, and modular or functional programming are also reviewed
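    The role of the point-group symmetry constraint can be illustrated with a toy sketch (my construction, not the thesis code): a material density distribution on a square unit cell is made invariant under the 4-fold rotation group C4 by averaging it over the group's rotations.

```python
# Hypothetical sketch: enforcing a crystallographic point-group symmetry on
# a square unit-cell material distribution by averaging the density over the
# rotations of the group (here the 4-fold rotation group C4). The grid size
# and density values are illustrative.

def rotate90(cell):
    """Rotate a square grid 90 degrees counter-clockwise."""
    n = len(cell)
    return [[cell[j][n - 1 - i] for j in range(n)] for i in range(n)]

def symmetrize_c4(cell):
    """Average a density grid over the four rotations of C4."""
    n = len(cell)
    acc = [[0.0] * n for _ in range(n)]
    rot = cell
    for _ in range(4):                 # identity, 90, 180, 270 degrees
        for i in range(n):
            for j in range(n):
                acc[i][j] += rot[i][j] / 4.0
        rot = rotate90(rot)
    return acc

cell = [[1.0, 0.0], [0.0, 0.0]]        # asymmetric material distribution
sym = symmetrize_c4(cell)              # now invariant under 90-degree rotation
```

    The symmetrized distribution is guaranteed to be C4-invariant, which is the geometric counterpart of the physical symmetry the target elastic tensor must exhibit.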

    Stabilization and Imaging of Cohesionless Soil Specimens

    Get PDF
    This dissertation describes the development of a procedure for obtaining high-quality, optical-grade sand coupons from frozen sand specimens of Ottawa 20/30 sand for image processing and analysis to quantify soil structure, along with a methodology for quantifying the microstructure from the images. A technique for thawing and stabilizing frozen core samples was developed using optical-grade Buehler® Epo-Tek® epoxy resin, a modified triaxial cell, a vacuum/reservoir chamber, a desiccator, and a moisture gauge. Uniform epoxy resin impregnation required proper drying of the soil specimen, application of appropriate confining pressure and vacuum levels, and epoxy mixing, de-airing, and curing. The resulting stabilized sand specimen was sectioned into 10 mm thick coupons that were planed, ground, and polished with progressively finer diamond abrasive grit levels using the modified Allied HTP Inc. polishing method so that the soil structure could be accurately quantified using images obtained with an optical microscopy technique. Illumination via Bright Field Microscopy was used to capture the images for subsequent image processing and sand microstructure analysis. The quality of the resulting images and the validity of the subsequent image morphology analysis hinged largely on a polishing and grinding technique that resulted in a flat, scratch-free, reflective coupon surface characterized by minimal microstructure relief and good contrast between the sand particles and the surrounding epoxy resin. Subsequent image processing involved conversion of the color images first to grayscale images and then to binary images with the use of contrast and image adjustments, removal of noise and image artifacts, image filtering, and image segmentation. Mathematical morphology algorithms were used on the resulting binary images to further enhance image quality.
    The binary images were then used to calculate soil structure parameters that included particle roundness and sphericity, particle orientation variability represented by rose diagrams, statistics on the local void ratio variability as a function of the sample size, and the local void ratio distribution histograms using Oda's method and the Voronoi tessellation method, including the skewness, kurtosis, and entropy of a gamma cumulative probability distribution fit to the local void ratio distribution. Dissertation/Thesis, M.S. Civil Engineering 201
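    The grayscale-to-binary-to-morphology stage of such a pipeline can be sketched on a toy intensity grid; this is a minimal pure-Python illustration with an assumed fixed threshold, not the dissertation's processing chain, which would operate on full microscope images with library tooling.

```python
# Minimal sketch of the binarization + morphology stage on a toy 2-D grid.
# The threshold (50) and intensities are assumed for illustration; real
# coupon images would be processed with an image library and a threshold
# estimated from the histogram.

def binarize(img, thresh):
    """1 where intensity exceeds the threshold (sand), else 0 (epoxy)."""
    return [[1 if v > thresh else 0 for v in row] for row in img]

def erode(img):
    """Keep a pixel only if it and its 4-neighbours are all set."""
    h, w = len(img), len(img[0])
    def get(i, j):
        return img[i][j] if 0 <= i < h and 0 <= j < w else 0
    return [[1 if all(get(i + di, j + dj)
                      for di, dj in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
             else 0
             for j in range(w)] for i in range(h)]

gray = [
    [10, 10, 10, 10, 10],
    [10, 90, 90, 90, 10],
    [10, 90, 90, 90, 10],
    [10, 90, 90, 90, 80],   # the lone 80 is a noise speck on the boundary
    [10, 10, 10, 10, 10],
]
binary = binarize(gray, 50)
cleaned = erode(binary)      # erosion removes the speck and thins the grain
```

    A full morphological opening would follow the erosion with a dilation to restore the grain's size; only the interior of the grain survives erosion here.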

    Subknots in ideal knots, random knots, and knotted proteins.

    Get PDF
    We introduce disk matrices which encode the knotting of all subchains in circular knot configurations. The disk matrices allow us to dissect circular knots into their subknots, i.e. knot types formed by subchains of the global knot. The identification of subknots is based on the study of linear chains in which a knot type is associated to the chain by means of a spatially robust closure protocol. We characterize the sets of observed subknot types in global knots taking energy-minimized shapes such as KnotPlot configurations and ideal geometric configurations. We compare the sets of observed subknots to knot types obtained by changing crossings in the classical prime knot diagrams. Building upon this analysis, we study the sets of subknots in random configurations of corresponding knot types. In many of the knot types we analyzed, the sets of subknots from the ideal geometric configurations are found in each of the hundreds of random configurations of the same global knot type. We also compare the sets of subknots observed in open protein knots with the subknots observed in the ideal configurations of the corresponding knot type. This comparison enables us to explain the specific dispositions of subknots in the analyzed protein knots

    Quantization and clustering on Riemannian manifolds with an application to air traffic analysis

    Get PDF
    The goal of quantization is to find the best approximation of a probability distribution by a discrete measure with finite support. When dealing with empirical distributions, this boils down to finding the best summary of the data by a smaller number of points, and automatically yields a k-means-type clustering. In this paper, we introduce Competitive Learning Riemannian Quantization (CLRQ), an online quantization algorithm that applies when the data does not belong to a vector space but rather to a Riemannian manifold. It can be seen as a density approximation procedure as well as a clustering method. Compared to many clustering algorithms, it requires few distance computations, which is particularly advantageous computationally in the manifold setting. We prove its convergence and show simulated examples on the sphere and the hyperbolic plane. We also provide an application to real data by using CLRQ to create summaries of images of covariance matrices estimated from air traffic images. These summaries are representative of the air traffic complexity and yield clusterings of the airspaces into zones that are homogeneous with respect to that criterion. They can then be compared using discrete optimal transport and be further used as inputs of a machine learning algorithm or as indexes in a traffic database
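    The competitive-learning step can be sketched on the unit sphere: for each incoming sample, the nearest centroid wins and moves toward the sample along the connecting geodesic with a decaying step size. This is a hedged sketch of the general scheme under assumed names and step-size schedule, not the authors' implementation.

```python
import math

# Sketch of online competitive-learning quantization on the unit sphere S².
# The step-size schedule gamma0 / (k + 1) and all names are assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def geodesic_step(a, b, t):
    """Point a fraction t along the great circle from a toward b (slerp)."""
    theta = math.acos(max(-1.0, min(1.0, dot(a, b))))
    if theta < 1e-12:
        return a
    s = math.sin(theta)
    return [(math.sin((1 - t) * theta) * ax + math.sin(t * theta) * bx) / s
            for ax, bx in zip(a, b)]

def clrq(samples, centroids, gamma0=0.5):
    for k, x in enumerate(samples):
        # competition: the closest centroid (largest dot product) wins
        win = max(range(len(centroids)), key=lambda i: dot(centroids[i], x))
        # learning: move the winner toward x along the geodesic
        centroids[win] = normalize(
            geodesic_step(centroids[win], x, gamma0 / (k + 1)))
    return centroids
```

    Replacing the sphere's exponential/logarithm maps (here realized as slerp) with those of another manifold, such as the hyperbolic plane, gives the same algorithm in that geometry.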

    Energy Efficient Algorithms in Low-Energy Wireless Sensor Networks

    Full text link
    Wireless sensor networks (WSNs) consist of small autonomous processors spatially distributed, typically with the goal of gathering physical data about the environment such as temperature, air pressure, and sound. WSNs have a wide range of applications including military use, health care monitoring, and environmental sensing. Because sensors are typically battery powered, algorithms for sensor network models should seek to minimize not only runtime but also energy utilization. Specifically, to maximize network lifetime, algorithms must minimize the energy usage of the sensors that use the most energy in the network. In extremely dense networks it may be inefficient for sensors to communicate with all neighboring sensors on a consistent basis, especially in mobile wireless sensor networks (MWSNs) where the topology of the network is constantly changing. Sensors conserve energy by going into a low-energy sleep state, and in our algorithms sensors will be asleep for the vast majority of the total runtime. Algorithms under these conditions face additional challenges because of the increased difficulty of coordinating between sensors. Because of the spatial nature of sensor networks, geometry problems are often of particular interest. For example, to detect outliers, data is often compared with the nearest neighboring sensors. In this dissertation we provide algorithmic techniques designed for divide-and-conquer solutions to computational geometry problems. We provide a technique for coordinating divide-and-conquer algorithms in a single-hop setting called breadth first recursion. We use this technique to sort data and to find the convex hull. Although most WSNs are multi-hop networks, locally very dense, expansive networks resemble single-hop networks. Thus we use algorithms for single-hop networks as building blocks for multi-hop algorithms with α-consolidation algorithms.
    We then provide α-consolidation algorithms for all-points k-nearest neighbors, the coverage boundary, and the Voronoi diagram. We also analyze the WSN problem of propagating data to a high-energy base station. Clustering approaches, such as low-energy adaptive clustering hierarchy (LEACH) and its multi-hop variant (MR-LEACH), are extremely popular for data propagation. The energy balanced protocol (EBP) is a clustering approach like MR-LEACH where clusters pass data towards the base station but also, with some probability, send data long distances directly to the base station. We analytically and empirically show that EBP is close to optimal, while approaches that do not use long hops, like MR-LEACH, are only close to optimal if sending messages long distances is prohibitively expensive. PHD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153370/1/timlewis_1.pd
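    The trade-off EBP exploits can be seen in a deliberately simplified toy model (my construction, not the dissertation's analysis): clusters sit on a line at unit hop spacing, transmitting distance d costs d**alpha, and each cluster sends directly to the base station with probability p instead of relaying. Long hops are individually expensive but relieve the innermost cluster, which otherwise relays everyone's traffic.

```python
# Toy energy model for the EBP-style long-hop trade-off. All parameters
# (linear topology, unit hop spacing, cost d**alpha) are assumptions made
# for illustration only.

def bottleneck_load(n_clusters, p):
    """Expected packets transmitted per round by the innermost cluster,
    which relays every packet that is not sent directly to the base."""
    relayed = (n_clusters - 1) * (1 - p)   # traffic from the other clusters
    return relayed + 1.0                   # plus its own packet

def long_hop_cost(n_clusters, p, alpha=2.0):
    """Expected direct-transmission energy paid by the farthest cluster."""
    return p * n_clusters ** alpha + (1 - p) * 1.0

# raising p lowers the bottleneck relay load but raises the expected cost
# of the occasional long direct transmission; an intermediate p balances them
print(bottleneck_load(10, 0.0), bottleneck_load(10, 0.3))
print(long_hop_cost(10, 0.0), long_hop_cost(10, 0.3))
```

    When alpha is large, long hops become prohibitively expensive and p near zero (the MR-LEACH-like regime) is preferable, matching the dissertation's qualitative conclusion.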

    Adaptive global optimization algorithms

    Get PDF
    Global optimization is concerned with finding the minimum value of a function where many local minima may exist. The development of a global optimization algorithm may involve using information about the target function (e.g., differentiability) and functions based on statistical models to improve on the worst-case time complexity and expected error of similar deterministic algorithms. Recent algorithms are investigated, new ones are proposed, and their performance is analyzed. Minimum, maximum, and average case error bounds for the algorithms presented are derived. Software architecture implemented with MATLAB and Java is presented and experimental results for the algorithms are displayed. The graphical capabilities and function-rich MATLAB environment are combined with the object-oriented features of Java, hosted on the computer system described in this paper, to provide a fast, powerful environment for generating experimental results. In order to do this, matlabcontrol, a third-party set of procedures that allows a Java program to call MATLAB functions to access a function such as voronoi() or to provide graphical results, is used. Additionally, the Java implementation can be called from, and return values to, the MATLAB environment. The data can then be used as input to MATLAB's graphing or other functions. The software test environment provides algorithm performance information, such as whether more iterations or replications of a proposed algorithm would be expected to yield a better result. It is anticipated that the functionality provided by the framework would be used for initial development and analysis and subsequently removed and replaced with optimized (in the computer-efficiency sense) functions for deployment
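    A classic example of an adaptive global optimization algorithm that uses information about the target function is the Piyavskii-Shubert method: given a Lipschitz bound L, each sampled interval carries a lower bound on the minimum, and the algorithm repeatedly refines the most promising interval. The sketch below is a generic illustration of that family, not the thesis software.

```python
# Piyavskii-Shubert-style adaptive minimization of a 1-D function with a
# known Lipschitz bound L. Generic textbook sketch; parameter names and the
# iteration budget are assumptions.

def piyavskii_min(f, a, b, L, iters=40):
    xs = [a, b]
    for _ in range(iters):
        xs.sort()
        best = None
        for x0, x1 in zip(xs, xs[1:]):
            # the two cones f(x0) - L(x - x0) and f(x1) - L(x1 - x) meet at:
            m = (f(x0) + f(x1)) / 2 - L * (x1 - x0) / 2    # lower bound value
            xm = (x0 + x1) / 2 + (f(x0) - f(x1)) / (2 * L)  # its location
            if best is None or m < best[0]:
                best = (m, xm)
        xs.append(best[1])     # refine the interval with the lowest bound
    return min(f(x) for x in xs)
```

    Because the lower bounds are valid whenever L really bounds the slope, sampling always concentrates near the global minimizer rather than a local one, which is the adaptive behavior the thesis studies in its statistical-model variants.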

    Iris Indexing and Ear Classification

    Get PDF
    To identify an individual using a biometric system, the input biometric data typically has to be compared against that of each and every identity in the existing database during the matching stage. The response time of the system increases with the number of enrolled individuals (i.e., database size), which is not acceptable in real-time monitoring or when working on large-scale data. This thesis addresses the problem of reducing the number of database candidates to be considered during matching in the context of iris and ear recognition. In the case of iris, an indexing mechanism based on the Burrows-Wheeler Transform (BWT) is proposed. Experiments on the CASIA version 3 iris database show a significant reduction in both search time and search space, suggesting the potential of this scheme for indexing iris databases. The ear classification scheme proposed in the thesis is based on parameterizing the shape of the ear and assigning it to one of four classes: round, rectangle, oval, and triangle. Experiments on the MAGNA database suggest the potential of this scheme for classifying ear databases
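    The transform at the core of the proposed iris indexing scheme is easy to state (how the thesis encodes iris codes as strings is not reproduced here): the BWT is the last column of the sorted cyclic rotations of the input, which groups similar contexts together and so lends itself to compact, searchable indexes.

```python
# Standard Burrows-Wheeler Transform via sorted cyclic rotations. This is
# the textbook construction, not the thesis's iris-specific pipeline.

def bwt(s, sentinel="$"):
    """Return the BWT of s, using a unique terminator character."""
    s = s + sentinel                    # sentinel marks the original ending
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(bwt("banana"))  # 'annb$aa'
```

    Production indexes build the transform from a suffix array rather than materializing all rotations, but the output is identical.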