
    A new proof of the graph removal lemma

    Let H be a fixed graph with h vertices. The graph removal lemma states that every graph on n vertices with o(n^h) copies of H can be made H-free by removing o(n^2) edges. We give a new proof which avoids Szemerédi's regularity lemma and gives a better bound. This approach also works to give improved bounds for the directed and multicolored analogues of the graph removal lemma. This answers questions of Alon and Gowers. Comment: 17 pages
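
    For reference, the standard quantified form of the lemma (the abstract above states only the asymptotic version) can be written in LaTeX as:

        % Graph removal lemma, standard epsilon-delta statement
        For every graph $H$ on $h$ vertices and every $\varepsilon > 0$ there
        exists $\delta = \delta(H, \varepsilon) > 0$ such that every graph $G$
        on $n$ vertices containing at most $\delta n^{h}$ copies of $H$ can be
        made $H$-free by deleting at most $\varepsilon n^{2}$ edges.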

    Multimapper: Data Density Sensitive Topological Visualization

    Mapper is an algorithm that summarizes the topological information contained in a dataset and provides an insightful visualization. It takes as input a point cloud, which is possibly high-dimensional, a filter function on it, and an open cover of the range of the function. It returns the nerve simplicial complex of the pullback of the cover. Mapper can be considered a discrete approximation of the topological construct called the Reeb space, as analysed in the 1-dimensional case by [Carriere et al., 2018]. Despite its success in obtaining insights in various fields, such as in [Kamruzzaman et al., 2016], Mapper is an ad hoc technique requiring extensive parameter tuning. There is also no measure to quantify the goodness of the resulting visualization, which often deviates from the Reeb space in practice. In this paper, we introduce a new cover selection scheme for data that reduces the obscuration of topological information at both the computation and visualization steps. To achieve this, we replace global scale selection of the cover with a scale selection scheme sensitive to the local density of data points. We also propose a method to detect some deviations of Mapper from the Reeb space via computation of persistence features on the Mapper graph. Comment: Accepted at ICDM
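
    To make the pipeline concrete, here is a minimal Mapper sketch in Python (an illustration of the standard algorithm, not the authors' Multimapper code): the filter is assumed to be the first coordinate, the cover is a set of uniformly overlapping intervals, and clustering within each preimage is single-linkage at a fixed scale via union-find.

        import numpy as np

        def cluster(points, idx, eps):
            # Single-linkage at scale eps via union-find over the indices in idx.
            parent = {i: i for i in idx}
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            for a in idx:
                for b in idx:
                    if a < b and np.linalg.norm(points[a] - points[b]) <= eps:
                        parent[find(a)] = find(b)
            groups = {}
            for i in idx:
                groups.setdefault(find(i), set()).add(i)
            return list(groups.values())

        def mapper(points, filt, n_intervals=5, overlap=0.3, eps=0.5):
            # Nodes are clusters in each interval's preimage; edges join
            # clusters that share points (the nerve of the pulled-back cover).
            lo, hi = filt.min(), filt.max()
            width = (hi - lo) / n_intervals
            nodes = []
            for k in range(n_intervals):
                a, b = lo + (k - overlap) * width, lo + (k + 1 + overlap) * width
                idx = [i for i in range(len(points)) if a <= filt[i] <= b]
                nodes.extend(cluster(points, idx, eps))
            edges = [(i, j) for i in range(len(nodes))
                     for j in range(i + 1, len(nodes)) if nodes[i] & nodes[j]]
            return nodes, edges

        # Noisy circle: Mapper with the x-coordinate filter should return a cycle.
        rng = np.random.default_rng(0)
        theta = rng.uniform(0, 2 * np.pi, 200)
        pts = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (200, 2))
        nodes, edges = mapper(pts, pts[:, 0])
        print(len(nodes), "nodes,", len(edges), "edges")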

    Experimental study of the compression behavior of mask image projection based stereolithography manufactured parts

    The article presents the results of a series of compression tests on samples manufactured by means of the mask image projection based stereolithography (MIP-SL) additive manufacturing technique. Recent studies demonstrate the orthotropic nature of MIP-SL materials. The authors have initiated a research effort to predict the degree of anisotropy from the manufacturing parameters of MIP-SL parts. The article focuses mainly on the development of the experimental compression tests of the first stage of this research. Special attention is paid to the four methods used to obtain the stress-strain curve of the material: strain gages, 2D Digital Image Correlation, extensometer measurements, and crosshead displacement measurements. The article shows the advantages and limitations of each method. Finally, the anisotropic behaviour is verified and a testing procedure is set out to obtain the constitutive parameters of the tested MIP-SL materials.
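
    As a concrete illustration of the simplest of the four methods, the sketch below computes an engineering stress-strain curve from crosshead displacement data (my own example; the specimen geometry and the synthetic load record are assumptions, not the paper's data):

        import numpy as np

        L0 = 20.0    # initial specimen height, mm (assumed)
        A0 = 100.0   # initial cross-sectional area, mm^2 (assumed 10 mm x 10 mm)

        displacement = np.linspace(0.0, 1.0, 50)           # crosshead travel, mm
        force = 15000.0 * (1 - np.exp(-4 * displacement))  # synthetic load, N

        strain = displacement / L0    # engineering strain (dimensionless)
        stress = force / A0           # engineering stress, N/mm^2 = MPa

        # Apparent modulus from the initial quasi-linear region (strain <= 1%).
        mask = strain <= 0.01
        E = np.polyfit(strain[mask], stress[mask], 1)[0]
        print(f"apparent compressive modulus ~ {E:.0f} MPa")

    In practice the crosshead signal also includes machine compliance, which is one reason such measurements are compared against gage, DIC, and extensometer data.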

    Fragmentation with a Steady Source

    We investigate fragmentation processes with a steady input of fragments. We find that the size distribution approaches a stationary form which exhibits a power-law divergence in the small-size limit, P(x) ~ x^{-3}. This algebraic behavior is robust, as it is independent of the details of the input as well as the spatial dimension. The full time-dependent behavior is obtained analytically for arbitrary inputs and is found to exhibit a universal scaling behavior. Comment: 4 pages
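
    A toy Monte Carlo sketch of such a process (my own illustration, not the paper's analysis): unit-size fragments are injected at a steady rate and each fragment of size x is cut at a uniformly random point with rate proportional to x. Log-binned counts of the evolving population can then be inspected against the predicted x^{-3} tail; the rates and run length below are arbitrary choices.

        import random

        random.seed(1)
        dt, t_max = 0.02, 50.0
        inject_every = int(1 / dt)      # one unit fragment per unit time
        fragments = []
        for step in range(int(t_max / dt)):
            if step % inject_every == 0:
                fragments.append(1.0)   # steady source
            new = []
            for x in fragments:
                if random.random() < x * dt:   # split rate proportional to size
                    cut = random.random() * x
                    new.extend((cut, x - cut))
                else:
                    new.append(x)
            fragments = new

        # Log-binned size density; a slope near -3 on log-log axes is the signature.
        edges = [10 ** (-0.5 * k) for k in range(10, -1, -1)]  # 1e-5 ... 1
        for lo, hi in zip(edges, edges[1:]):
            n = sum(lo <= x < hi for x in fragments)
            if n:
                print(f"x ~ {lo:.1e}: density {n / (hi - lo):.3g}")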

    Growth and migration of solids in evolving protostellar disks I: Methods and Analytical tests

    This series of papers investigates the early stages of planet formation by modeling the evolution of the gas and solid content of protostellar disks from the early T Tauri phase until complete dispersal of the gas. In this first paper, I present a new set of simplified equations modeling the growth and migration of various species of grains in a gaseous protostellar disk evolving as a result of the combined effects of viscous accretion and photo-evaporation from the central star. Using the assumption that the grain size distribution function always maintains a power-law structure approximating the average outcome of the exact coagulation/shattering equation, the model focuses on the calculation of the growth rate of the largest grains only. The coupled evolution equations for the maximum grain size, the surface density of the gas, and the surface density of solids are then presented and solved self-consistently using a standard 1+1 dimensional formalism. I show that the global evolution of solids is controlled by a leaky reservoir of small grains at large radii, and propose an empirically derived evolution equation for the total mass of solids, which can be used to estimate the total heavy-element retention efficiency in the planet formation paradigm. Consistency with observations of the total mass of solids in the Minimum Solar Nebula, augmented with the mass of the Oort cloud, sets a strong upper limit on the initial grain size distribution, as well as on the turbulent parameter α_t. Detailed comparisons with SED observations are presented in a follow-up paper. Comment: Submitted to ApJ. 23 pages and 13 figures
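
    The power-law assumption can be made concrete; the slope and notation below are illustrative choices on my part (an MRN-like exponent), not necessarily those of the paper:

        % Assumed power-law grain size distribution, normalized by the surface
        % density of solids \Sigma_d (illustrative, MRN-like slope p ~ 3.5):
        n(a)\,\mathrm{d}a = C\,a^{-p}\,\mathrm{d}a, \qquad a_{\min} \le a \le a_{\max},
        \qquad
        \Sigma_d = \int_{a_{\min}}^{a_{\max}} \tfrac{4\pi}{3}\,\rho_s\,a^{3}\,n(a)\,\mathrm{d}a
                 \propto C\,a_{\max}^{\,4-p} \quad (p < 4).

    For any slope p < 4 the mass integral is dominated by the largest grains, which is consistent with the model's focus on the growth rate of the largest grains only.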

    Sketched MinDist

    We consider sketch vectors of geometric objects $J$ through the MinDist function $v_i(J) = \inf_{p \in J} \|p - q_i\|$ for $q_i \in Q$ from a point set $Q$. Collecting the vector of these sketch values induces a simple, effective, and powerful distance: the Euclidean distance between these sketched vectors. This paper shows how large this set $Q$ needs to be under a variety of shapes and scenarios. For hyperplanes we provide a direct connection to the sensitivity sampling framework, so relative error can be preserved in $d$ dimensions using $|Q| = O(d/\varepsilon^2)$. However, for other shapes, we show we need to enforce a minimum distance parameter $\rho$ and a domain size $L$. For $d = 2$ the sample size $|Q|$ can then be $\tilde{O}((L/\rho) \cdot 1/\varepsilon^2)$. For objects (e.g., trajectories) with at most $k$ pieces, this can provide stronger "for all" approximations with $\tilde{O}((L/\rho) \cdot k^3/\varepsilon^2)$ points. Moreover, with similar size bounds and restrictions, such trajectories can be reconstructed exactly using only these sketch vectors.
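
    A small sketch of the construction in Python (my own illustration, not the paper's code), assuming the objects are planar polylines and Q is drawn uniformly from a bounded domain; the normalization of the final distance is a choice on my part:

        import numpy as np

        def point_segment_dist(q, a, b):
            # Distance from point q to the segment with endpoints a, b.
            ab, aq = b - a, q - a
            t = np.clip(np.dot(aq, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(q - (a + t * ab))

        def sketch(polyline, Q):
            # v_i(J) = inf_{p in J} ||p - q_i|| for each landmark q_i in Q.
            return np.array([min(point_segment_dist(q, polyline[j], polyline[j + 1])
                                 for j in range(len(polyline) - 1)) for q in Q])

        rng = np.random.default_rng(7)
        Q = rng.uniform(0, 10, size=(200, 2))   # landmark set Q in a 10 x 10 domain

        traj1 = np.array([[1, 1], [4, 5], [8, 2]], dtype=float)
        traj2 = np.array([[1, 1.5], [4, 5.5], [8, 2.5]], dtype=float)  # nearby path

        v1, v2 = sketch(traj1, Q), sketch(traj2, Q)
        # Root-mean-square difference of the sketch vectors (normalization is a choice).
        print("sketched distance:", np.linalg.norm(v1 - v2) / np.sqrt(len(Q)))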