
    Querying Geometric Figures Using a Controlled Language, Ontological Graphs and Dependency Lattices

    Dynamic geometry systems (DGS) have become basic tools in many areas of geometry, for example in education. Geometry Automated Theorem Provers (GATP) are an active area of research and are considered basic tools both for future enhanced educational software and for a next generation of mechanized mathematics assistants. Recently emerged Web repositories of geometric knowledge, such as TGTP and Intergeo, are an attempt to make the already vast body of geometric knowledge widely available. Given the large amount of geometric information already available, we face the need for a query mechanism for descriptions of geometric constructions. In this paper we discuss two approaches for describing geometric figures (declarative and procedural) and present algorithms for querying geometric figures in declaratively and procedurally described corpora, using either a DGS or a dedicated controlled natural language for queries.
    Comment: 14 pages, 5 figures, accepted at CICM 201
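
    As a loose illustration of the two description styles contrasted in this abstract (not the TGTP/Intergeo formats themselves), the sketch below shows a declarative description, which states objects and the relations that must hold between them, next to a procedural one, which lists construction steps in order; all names are hypothetical.

```python
# Hypothetical illustration of declarative vs. procedural descriptions of the
# same figure: points A, B, C and the midpoint M of segment BC.

# Declarative style: unordered objects plus the relations they must satisfy.
declarative = {
    "objects": ["point A", "point B", "point C", "point M", "segment BC"],
    "relations": [
        ("is_segment_of", "BC", ("B", "C")),
        ("is_midpoint_of", "M", "BC"),
    ],
}

# Procedural style: an ordered list of construction commands, as a DGS would
# record them; each step may depend only on previously constructed objects.
procedural = [
    ("free_point", "A"),
    ("free_point", "B"),
    ("free_point", "C"),
    ("segment", "BC", ("B", "C")),
    ("midpoint", "M", "BC"),
]

# A query mechanism over a declarative corpus can match on the relation set,
# whereas one over a procedural corpus must also account for the different
# construction orders that can produce the same figure.
```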

    Exact Computation of a Manifold Metric, via Lipschitz Embeddings and Shortest Paths on a Graph

    Data-sensitive metrics adapt distances locally based on the density of data points, with the goal of aligning distances with some notion of similarity. In this paper, we give the first exact algorithm for computing a data-sensitive metric called the nearest neighbor metric. In fact, we prove the surprising result that a previously published 3-approximation is an exact algorithm. The nearest neighbor metric can be viewed as a special case of a density-based distance used in machine learning, or it can be seen as an example of a manifold metric. Previous computational research on such metrics despaired of computing exact distances on account of the apparent difficulty of minimizing over all continuous paths between a pair of points. We leverage the exact computation of the nearest neighbor metric to compute sparse spanners and persistent homology. We also explore the behavior of the metric built from point sets drawn from an underlying distribution, and consider the more general case of inputs that are finite collections of path-connected compact sets. The main results connect several classical theories such as the conformal change of Riemannian metrics, the theory of positive definite functions of Schoenberg, and the screw function theory of Schoenberg and von Neumann. We develop novel proof techniques based on the combination of screw functions and Lipschitz extensions that may be of independent interest.
    Comment: 15 pages
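
    As context for the shortest-paths-on-a-graph approach named in the title, the sketch below computes one simple density-sensitive graph metric used in this literature, an edge-squared-style metric, in which edges of a k-NN graph on the samples are weighted by their squared Euclidean length and distances are graph shortest paths. This is only an illustrative relative of the nearest neighbor metric under assumed parameters, not the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def edge_squared_distances(points, k=8):
    """All-pairs shortest-path distances on a k-NN graph whose edges are
    weighted by squared Euclidean length (a density-sensitive graph metric:
    long hops through empty regions are penalized quadratically)."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)      # neighbors, including self
    rows, cols, vals = [], [], []
    for i in range(n):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):  # skip self at column 0
            rows.append(i)
            cols.append(j)
            vals.append(d * d)                       # squared edge length
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(graph, directed=False)

if __name__ == "__main__":
    pts = np.random.rand(200, 2)
    D = edge_squared_distances(pts)
    print(D.shape)   # (200, 200) matrix of pairwise graph distances
```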

    Locating Emergency Facilities Using the Weighted k-median Problem: A Graph-metaheuristic Approach

    An efficient approach is presented for finding optimal facility locations in conjunction with the k-median method. First, the region under investigation is meshed and an incidence graph is constructed to obtain the connectivity properties of the meshes. Then shortest route trees (SRTs) are rooted at the nodes of the generated graph. Subsequently, in order to divide the nodes of the graph, i.e. the studied region, into k optimal subregions, the k-median approach is utilized. The weights of the nodes encode risk factors such as population, seismic and topographic conditions, so that facilities are located in the high-risk zones where they provide the most benefit. To find the optimal facility locations, a recently developed metaheuristic algorithm called Colliding Bodies Optimization (CBO) is used. The performance of the proposed method is investigated through different alternatives for minimizing the cost of the weighted k-median problem. As a case study, the Mazandaran province in Iran is considered and the above graph-metaheuristic approach is utilized for locating the facilities.
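
    For concreteness, a minimal sketch of the weighted k-median objective such a search operates on is given below: given node weights (risk factors) and shortest-route distances on a mesh graph, the cost of a candidate set of k facility nodes is the weight-scaled distance from every node to its nearest facility. The CBO metaheuristic itself is not reproduced; a plain random search stands in for it, and all names and parameters are illustrative.

```python
import random
import networkx as nx

def weighted_k_median_cost(dist, node_weight, facilities):
    """Sum over all nodes of (risk weight) x (distance to nearest facility)."""
    return sum(w * min(dist[v][f] for f in facilities)
               for v, w in node_weight.items())

def random_search(G, node_weight, k, iters=2000, seed=0):
    """Stand-in for a metaheuristic such as CBO: repeatedly sample k facility
    nodes and keep the cheapest set found."""
    rng = random.Random(seed)
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    nodes = list(G.nodes)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = rng.sample(nodes, k)
        cost = weighted_k_median_cost(dist, node_weight, cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

if __name__ == "__main__":
    # Toy mesh of the region: a 6x6 grid graph with unit edge lengths and
    # random "risk" weights on the nodes (population, seismicity, ...).
    G = nx.grid_2d_graph(6, 6)
    nx.set_edge_attributes(G, 1.0, "weight")
    weights = {v: random.random() for v in G.nodes}
    print(random_search(G, weights, k=3))
```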

    Relax, no need to round: integrality of clustering formulations

    We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: $k$-means and $k$-median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are $k$ clusters in $\mathbb{R}^m$ and data from each cluster consists of $n$ points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these $k$ clusters as the optimal integral solution? For the $k$-median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation $\epsilon > 0$ between the balls. In other words, the pairwise center separation is $\Delta > 2 + \epsilon$. Under the same distributional model, the $k$-means LP relaxation fails to recover such clusters at separation as large as $\Delta = 4$. Yet, if we enforce PSD constraints on the $k$-means LP, we get exact cluster recovery at center separation $\Delta > 2\sqrt{2}(1 + \sqrt{1/m})$. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the $k$-means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, $k$-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.
    Comment: 30 pages, ITCS 201
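
    For reference, the standard $k$-median LP relaxation studied in this line of work has the following shape, with $z_{pq}$ the fractional assignment of point $p$ to candidate center $q$ and $d(p,q)$ the pairwise distances; the variable names and normalization below follow one common convention rather than quoting the paper.

```latex
\[
\begin{aligned}
\min_{z,\,y}\;\; & \sum_{p,q} d(p,q)\, z_{pq} \\
\text{s.t.}\;\; & \sum_{q} z_{pq} = 1 && \text{for every point } p \text{ (each point fully assigned)}\\
& z_{pq} \le y_q && \text{for all } p,q \text{ (assign only to opened centers)}\\
& \sum_{q} y_q = k && \text{(exactly } k \text{ centers opened)}\\
& z_{pq} \ge 0,\; y_q \ge 0.
\end{aligned}
\]
```

    Exact recovery at a given separation means that the optimal solution of this relaxation is already integral, so the true clusters are returned without any rounding step; the PSD-strengthened $k$-means relaxation mentioned above plays the analogous role for $k$-means.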

    Weighted Key Player Problem for Social Network Analysis

    Social network analysis is a tool set whose uses range from measuring the impact of marketing campaigns to disrupting clandestine terrorist organizations. Social network analysis tools are primarily focused on the structure of relationships between actors in the network. However, characteristics of the actors, such as importance or status, are generally an output of social network analysis rather than an input. Characteristics of actors can come from a number of sources, including information gathering, subject matter experts, or social network analysis itself. Further, the strengths of relationships between actors in social networks are often assumed to be all equal, whereas in reality relationships range from strong, familial-like ties to weak, casual ones. The research developed in this thesis uses actor characteristics, relationship strength and location theory to identify key individuals in a social network who are strategically located to influence, intercept, strengthen or disrupt data flow between a set of actors. In this technique, actor characteristics and relationship strength are used as inputs into the analysis, and the output is a set of actors which satisfies the desired objective and the constraints of the given problem. This extends the tool set of social network analysis to the targeting of actors based on actor characteristics, relationship strength and network structure.
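
    A minimal sketch of the kind of location-theoretic selection described above, under assumed inputs: actor importance as node weights, relationship strength as edge weights (stronger tie = shorter effective distance), and a greedy k-median-style choice of key players who are closest, in the weighted sense, to the important actors. The greedy rule and the strength-to-distance conversion are illustrative assumptions, not the thesis's formulation.

```python
import networkx as nx

def key_players(G, importance, k):
    """Greedily pick k actors minimizing the importance-weighted distance
    from every actor to its nearest chosen key player. Edge attribute
    'strength' in (0, 1] is converted to a distance (strong tie = close)."""
    for u, v, s in G.edges(data="strength", default=1.0):
        G[u][v]["dist"] = 1.0 / s
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="dist"))

    chosen = []
    while len(chosen) < k:
        def cost(c):
            cand = chosen + [c]
            return sum(importance[v] * min(dist[v][f] for f in cand)
                       for v in G.nodes)
        chosen.append(min((n for n in G.nodes if n not in chosen), key=cost))
    return chosen

if __name__ == "__main__":
    G = nx.karate_club_graph()                     # stand-in social network
    nx.set_edge_attributes(G, 0.5, "strength")     # uniform tie strength
    importance = {v: 1.0 + G.degree(v) for v in G.nodes}  # toy actor weights
    print(key_players(G, importance, k=3))
```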