
    Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently

    In this paper we define the notion of a probabilistic neighborhood in spatial data: Let a set $P$ of $n$ points in $\mathbb{R}^d$, a query point $q \in \mathbb{R}^d$, a distance metric $\operatorname{dist}$, and a monotonically decreasing function $f : \mathbb{R}^+ \rightarrow [0,1]$ be given. Then a point $p \in P$ belongs to the probabilistic neighborhood $N(q, f)$ of $q$ with respect to $f$ with probability $f(\operatorname{dist}(p,q))$. We envision applications in facility location, sensor networks, and other scenarios where a connection between two entities becomes less likely with increasing distance. A straightforward query algorithm would determine a probabilistic neighborhood in $\Theta(n \cdot d)$ time by probing each point in $P$. To answer the query in sublinear time for the planar case, we augment a quadtree suitably and design a corresponding query algorithm. Our theoretical analysis shows that -- for certain distributions of planar $P$ -- our algorithm answers a query in $O((|N(q,f)| + \sqrt{n})\log n)$ time with high probability (whp). This matches up to a logarithmic factor the cost induced by quadtree-based algorithms for deterministic queries and is asymptotically faster than the straightforward approach whenever $|N(q,f)| \in o(n / \log n)$. As practical proofs of concept we use two applications, one in the Euclidean and one in the hyperbolic plane. In particular, our results yield the first generator for random hyperbolic graphs with arbitrary temperatures in subquadratic time. Moreover, our experimental data show the usefulness of our algorithm even if the point distribution is unknown or not uniform: the running time savings over the pairwise probing approach constitute at least one order of magnitude already for a modest number of points and queries. Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-44543-4_3
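    For illustration, a minimal sketch of the straightforward probing baseline described in the abstract, assuming Euclidean distance and an exponential decay as one admissible choice of f (both assumptions are mine; the paper's quadtree-based sublinear query is not reproduced here):

        import math
        import random

        def probabilistic_neighborhood(points, q, f):
            """Straightforward Theta(n*d) query: probe every point p in P and
            include it with probability f(dist(p, q))."""
            neighborhood = []
            for p in points:
                d = math.dist(p, q)          # Euclidean distance (Python 3.8+)
                if random.random() < f(d):   # accept p with probability f(d)
                    neighborhood.append(p)
            return neighborhood

        # Example: f decays exponentially with distance, a monotonically
        # decreasing function from R+ to [0, 1].
        P = [(random.random(), random.random()) for _ in range(10_000)]
        q = (0.5, 0.5)
        N = probabilistic_neighborhood(P, q, lambda d: math.exp(-10 * d))
        print(len(N), "points in the probabilistic neighborhood")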

    Data Management and Mining in Astrophysical Databases

    We analyse the issues involved in the management and mining of astrophysical data. The traditional approach to data management in the astrophysical field is not able to keep up with the increasing size of the data gathered by modern detectors. An essential role in astrophysical research will be played by automatic tools for information extraction from large datasets, i.e. data mining techniques such as clustering and classification algorithms. This calls for an approach to data management based on data warehousing, emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Clustering and classification techniques on large datasets pose additional requirements: computational and memory scalability with respect to the data size, and interpretability and objectivity of clustering or classification results. In this study we address some possible solutions. Comment: 10 pages, LaTeX
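    As a small illustration of what a multidimensional access method buys over a full table scan, a sketch using a k-d tree over a synthetic two-attribute catalog (the k-d tree and the made-up catalog are my choices for the example, not the methods evaluated in the paper):

        import numpy as np
        from scipy.spatial import cKDTree

        # Synthetic "catalog": one million sources with two spatial attributes.
        rng = np.random.default_rng(0)
        catalog = rng.uniform(0.0, 360.0, size=(1_000_000, 2))

        # Build the index once; region queries then avoid scanning the whole table.
        tree = cKDTree(catalog)
        center = np.array([180.0, 45.0])
        idx = tree.query_ball_point(center, r=1.0)  # all sources within radius 1.0
        print(len(idx), "sources in the queried region")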

    A Benchmark and analysis of spatial data structures for physical simulations

    Collision detection is an issue in physical simulations; without it, simulations are inaccurate. Unfortunately, effective collision detection can require a significant amount of computational power. To reduce the number of computations and make the problem more tractable, computer scientists have used data structures to partition the system. This removes the need for every single particle to check for possible collisions with every other particle in the system; however, generic data structures typically do not work as well as specialized data structures, so this has led to the creation of multiple spatial data structures. Some spatial data structures and algorithms were customized and created to optimize memory usage, while others were made to increase speed. This project seeks to compare spatial data structures in systems with uniformly and non-uniformly distributed particles, while varying the number of particles and the filling factor. The results of this project should provide useful information to those doing general collisional simulations, such as physicists and engineers.
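    A minimal sketch of one such spatial partition, a uniform grid used as a broad phase so that only particles in the same or neighboring cells are tested (the grid, cell size, and particle radius here are illustrative assumptions, not the structures benchmarked in the project):

        import math
        import random
        from collections import defaultdict
        from itertools import combinations

        def broad_phase_pairs(particles, radius, cell_size):
            """Uniform-grid partition: candidate pairs come only from the same or
            neighboring cells instead of all n*(n-1)/2 pairs."""
            grid = defaultdict(list)
            for i, (x, y) in enumerate(particles):
                grid[(int(x // cell_size), int(y // cell_size))].append(i)

            pairs = set()
            for (cx, cy) in grid:
                candidates = []
                for dx in (-1, 0, 1):            # this cell and its 8 neighbors
                    for dy in (-1, 0, 1):
                        candidates.extend(grid.get((cx + dx, cy + dy), []))
                for i, j in combinations(sorted(set(candidates)), 2):
                    if math.dist(particles[i], particles[j]) < 2 * radius:
                        pairs.add((i, j))
            return pairs

        particles = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(2_000)]
        print(len(broad_phase_pairs(particles, radius=0.5, cell_size=1.0)), "overlapping pairs")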

    The OTree: multidimensional indexing with efficient data sampling for HPC

    Spatial big data is considered an essential trend in future scientific and business applications. Indeed, research instruments, medical devices, and social networks generate hundreds of petabytes of spatial data per year. However, many authors have pointed out that the lack of specialized frameworks for multidimensional Big Data is limiting possible applications and precluding many scientific breakthroughs. Paramount in achieving High-Performance Data Analytics is to optimize and reduce the I/O operations required to analyze large data sets. To do so, we need to organize and index the data according to its multidimensional attributes. At the same time, to enable fast and interactive exploratory analysis, it is vital to generate approximate representations of large datasets efficiently. In this paper, we propose the Outlook Tree (or OTree), a novel Multidimensional Indexing with efficient data Sampling (MIS) algorithm. The OTree enables exploratory analysis of large multidimensional datasets with arbitrary precision, a vital missing feature in current distributed data management solutions. Our algorithm reduces the indexing overhead and achieves high performance even for write-intensive HPC applications. Indeed, we use the OTree to store the scientific results of a study on the efficiency of drug inhalers. Then we compare the OTree implementation on Apache Cassandra, named Qbeast, with PostgreSQL and plain storage. Lastly, we demonstrate that our proposal delivers better performance and scalability.
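    The OTree's own indexing and sampling scheme is not detailed in the abstract; the generic sketch below only illustrates the idea of approximate, progressively refined analysis from samples (pre-shuffling so that any prefix is a uniform sample is my simplification, not the OTree algorithm):

        import numpy as np

        rng = np.random.default_rng(42)
        data = rng.normal(loc=5.0, scale=2.0, size=10_000_000)

        # Shuffle once so that any prefix is a uniform random sample; an index that
        # interleaves sampling with partitioning can serve such prefixes directly.
        shuffled = rng.permutation(data)

        for fraction in (0.001, 0.01, 0.1, 1.0):
            prefix = shuffled[: int(len(shuffled) * fraction)]
            print(f"fraction={fraction:>5}: estimated mean = {prefix.mean():.4f}")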

    Using simulations to predict the genetic connectivity of the Mojave desert tortoise

    The Mojave desert tortoise is a threatened species that is facing habitat fragmentation from human development. Understanding the impact of fragmentation on this species is critical for developing appropriate conservation actions, but the effects of habitat fragmentation are often delayed, making it difficult to assess the impacts of recent landscape change. One tool often used to predict the impacts of fragmentation is the agent-based model, which simulates the behavior and life history of individual “agents”. Agent-based models allow researchers to investigate the impacts of habitat fragmentation under many scenarios, which is useful for guiding conservation actions. However, because agent-based models are computationally intense, they are often limited to small spatial extents and low numbers of agents; while performing these simulations at large scales could lead to important insights, it is often infeasible. In this dissertation, I use a computationally efficient agent-based model to assess the impact of anthropogenic development on the range-wide genetic connectivity of the Mojave desert tortoise. In Chapter 1, I describe the quadtree R package, which implements the region quadtree data structure in C++ and makes it available to the R programming environment; using this data structure increases the speed of the agent-based model. In Chapter 2, I calibrate and validate an agent-based model for predicting desert tortoise genetic connectivity. In Chapter 3, I use the model to make range-wide projections of the influence of anthropogenic development on desert tortoise genetic connectivity.
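    To illustrate the data structure named in Chapter 1, a toy region quadtree over a square raster, written here in Python rather than as the C++/R interface of the quadtree package (the 64x64 habitat raster is a made-up example):

        import numpy as np

        class QuadtreeNode:
            """Toy region quadtree: a square block becomes a leaf if all of its
            cells share one value, otherwise it is split into four quadrants."""
            def __init__(self, raster, x, y, size):
                self.x, self.y, self.size = x, y, size
                block = raster[y:y + size, x:x + size]
                if size == 1 or np.all(block == block[0, 0]):
                    self.value, self.children = block[0, 0], None
                else:
                    half = size // 2
                    self.value = None
                    self.children = [QuadtreeNode(raster, x + dx, y + dy, half)
                                     for dy in (0, half) for dx in (0, half)]

            def query(self, px, py):
                """Value stored at cell (column px, row py)."""
                if self.children is None:
                    return self.value
                half = self.size // 2
                child = 2 * ((py - self.y) >= half) + ((px - self.x) >= half)
                return self.children[child].query(px, py)

        # Example: a 64x64 raster (power-of-two side) with one "developed" block.
        raster = np.zeros((64, 64), dtype=int)
        raster[8:24, 8:24] = 1
        tree = QuadtreeNode(raster, 0, 0, 64)
        print(tree.query(10, 10), tree.query(40, 40))  # -> 1 0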

    A distributed Quadtree Dictionary approach to multi-resolution visualization of scattered neutron data

    Grid computing is described as dependable, seamless, pervasive access to resources and services, whereas mobile computing allows the movement of people from place to place while staying connected to resources at each location. Mobile grid computing is a new computing paradigm which joins these two technologies by enabling access to the collection of resources within a user's virtual organization while still maintaining the freedom of mobile computing through a service paradigm. A major problem in virtual organizations is needs mismatch, in which one resource requests a service from another resource that is unable to fulfill it, since virtual organizations are necessarily heterogeneous collections of resources. In this dissertation we propose a solution to the needs mismatch problem in the case of high energy physics data. Specifically, we propose a Quadtree Dictionary (QTD) algorithm to provide lossless, multi-resolution compression of datasets and enable their visualization on devices of all capabilities. As a prototype application, we extend the Integrated Spectral Analysis Workbench (ISAW), developed at the Intense Pulsed Neutron Source Division of the Argonne National Laboratory, into a mobile grid application, Mobile ISAW. In this dissertation we compare our QTD algorithm with several existing compression techniques on ISAW's Single-Crystal Diffractometer (SCD) datasets. We then extend our QTD algorithm to a distributed setting and examine its effectiveness on the next generation of SCD datasets. In both a serial and distributed setting, our QTD algorithm performs no worse than existing techniques such as the square wavelet transform in terms of energy conservation, while providing worst-case savings of 8:1.
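    The QTD algorithm itself is not spelled out in the abstract; the sketch below only illustrates the general idea of a quadrant-based multi-resolution hierarchy over a 2-D detector image, where coarser levels aggregate 2x2 blocks and the finest level is the original data (the Poisson counts image is synthetic):

        import numpy as np

        def resolution_levels(image):
            """Build coarser and coarser views of a 2-D counts image by summing
            2x2 quadrants; level 0 is the original data, so nothing is lost."""
            levels = [image.astype(np.int64)]
            while levels[-1].shape[0] > 1:
                h, w = levels[-1].shape
                levels.append(levels[-1].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3)))
            return levels

        rng = np.random.default_rng(1)
        detector = rng.poisson(3.0, size=(256, 256))      # synthetic counts image
        for lvl, img in enumerate(resolution_levels(detector)):
            print(f"level {lvl}: shape {img.shape}, total counts {img.sum()}")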

    Efficient Generating And Processing Of Large-Scale Unstructured Meshes

    Unstructured meshes are used in a variety of disciplines to represent simulations and experimental data. Scientists who want to increase the accuracy of simulations by increasing resolution must also increase the size of the resulting dataset. However, generating and processing extremely large unstructured meshes remains a barrier. Researchers have published many parallel Delaunay triangulation (DT) algorithms, often focusing on partitioning the initial mesh domain so that each rectangular partition can be triangulated in parallel. However, the key problems with this method are how to merge all triangulated partitions into a single domain-wide mesh and the significant cost of communicating the sub-region borders. We devised a novel algorithm, Triangulation of Independent Partitions in Parallel (TIPP), to deal with very large DT problems without requiring inter-processor communication while still guaranteeing the Delaunay criteria. The core of the algorithm is to find a set of independent partitions such that the circumcircles of triangles in one partition do not enclose any vertex in other partitions. For this reason, this set of independent partitions can be triangulated in parallel without affecting each other. The result of mesh generation is a large unstructured mesh, consisting of vertex index and vertex coordinate files, which introduces a new challenge: locality. Partitioning unstructured meshes to improve locality is a key part of our own approach. Elements that were widely scattered in the original dataset are grouped together, speeding data access. To further improve unstructured mesh partitioning, we also describe our new approach, Direct Load, which mitigates the challenges of unstructured meshes by maximizing the proportion of useful data retrieved during each read from disk, which in turn reduces the total number of read operations, boosting performance.
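    A small sketch of the independence criterion at the heart of TIPP, using SciPy's Delaunay triangulation: a partition is independent of another point set if none of its triangles' circumcircles encloses a point from that set (the two well-separated random clusters are illustrative; the dissertation's method for actually finding such partitions is not reproduced):

        import numpy as np
        from scipy.spatial import Delaunay

        def circumcircle(a, b, c):
            """Circumcenter and circumradius of the planar triangle (a, b, c)."""
            ax, ay = a
            bx, by = b
            cx, cy = c
            d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
            ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
                  + (cx**2 + cy**2) * (ay - by)) / d
            uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
                  + (cx**2 + cy**2) * (bx - ax)) / d
            center = np.array([ux, uy])
            return center, np.linalg.norm(center - np.asarray(a))

        def is_independent(partition, other_points):
            """True if no circumcircle of the partition's Delaunay triangles
            encloses a vertex from other_points."""
            tri = Delaunay(partition)
            for simplex in tri.simplices:
                center, r = circumcircle(*partition[simplex])
                if np.any(np.linalg.norm(other_points - center, axis=1) < r):
                    return False
            return True

        rng = np.random.default_rng(7)
        left = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(200, 2))
        right = rng.uniform([5.0, 0.0], [6.0, 1.0], size=(200, 2))  # far apart
        print(is_independent(left, right), is_independent(right, left))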