
    Equivalence Classes and Conditional Hardness in Massively Parallel Computations

    The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole range of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
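To make the conditional-hardness discussion concrete, here is a minimal sequential sketch (not an MPC algorithm) of the one cycle vs. two cycles promise problem: every vertex has degree two, so the input is a single cycle exactly when a walk from any start vertex visits all n vertices. The adjacency encoding and function names are illustrative only.

```python
# Minimal illustration of the "one cycle vs. two cycles" promise problem
# (a sequential sketch, not an MPC algorithm): every vertex has degree 2,
# so the input is one cycle iff a walk from any start vertex visits all n vertices.

def is_one_cycle(adj):
    """adj: dict mapping each vertex to its two neighbours (promise: union of cycles)."""
    start = next(iter(adj))
    prev, cur, visited = None, start, 1
    while True:
        a, b = adj[cur]
        nxt = b if a == prev else a      # keep walking away from where we came from
        if nxt == start:
            return visited == len(adj)   # closed the cycle: did we see every vertex?
        prev, cur, visited = cur, nxt, visited + 1

one_cycle  = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (2, 0)}                         # C_4
two_cycles = {0: (1, 2), 1: (0, 2), 2: (0, 1), 3: (4, 5), 4: (3, 5), 5: (3, 4)}   # C_3 + C_3
print(is_one_cycle(one_cycle), is_one_cycle(two_cycles))                           # True False
```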

    Predicting the metabolic energy costs of bipedalism using evolutionary robotics

    To understand the evolution of bipedalism among the hominoids in an ecological context we need to be able to estimate the energetic cost of locomotion in fossil forms. Ideally such an estimate would be based entirely on morphology since, except for the rare instances where footprints are preserved, this is the only primary source of evidence available. In this paper we use evolutionary robotics techniques (genetic algorithms, pattern generators and mechanical modeling) to produce a biomimetic simulation of bipedalism based on human body dimensions. The mechanical simulation is a seven-segment, two-dimensional model with motive force provided by tension generators representing the major muscle groups acting around the lower-limb joints. Metabolic energy costs are calculated from the muscle model, and bipedal gait is generated using a finite-state pattern generator whose parameters are produced using a genetic algorithm with locomotor economy (maximum distance for a fixed energy cost) as the fitness criterion. The model is validated by comparing the values it generates with those for modern humans. The result (maximum efficiency of 200 J m⁻¹) is within 15% of the experimentally derived value, which is very encouraging and suggests that this is a useful analytic technique for investigating the locomotor behaviour of fossil forms. Initial work suggests that in the future this technique could be used to estimate other locomotor parameters such as top speed. In addition, the animations produced by this technique are qualitatively very convincing, which suggests that this may also be a useful technique for visualizing bipedal locomotion.
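As a rough illustration of the optimization loop described above, the following sketch runs a toy genetic algorithm whose fitness is locomotor economy (distance covered for a fixed energy budget). The function simulate_walk is a hypothetical stand-in, not the paper's muscle and mechanics model.

```python
import random

# Toy genetic algorithm with "locomotor economy" fitness (distance per fixed
# energy budget), a stand-in for the paper's GA + pattern-generator setup.
# simulate_walk below is a hypothetical placeholder, not the mechanical model.

def simulate_walk(params, energy_budget=100.0):
    """Hypothetical stand-in: pretend distance depends smoothly on the parameters."""
    return energy_budget / (1.0 + sum((p - 0.5) ** 2 for p in params))

def evolve(n_params=8, pop_size=30, generations=50, mut_sigma=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_walk, reverse=True)          # economy = fitness
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mut_sigma) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=simulate_walk)

best = evolve()
print("best economy:", simulate_walk(best))
```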

    Gaussian Processes with Context-Supported Priors for Active Object Localization

    We devise an algorithm using a Bayesian optimization framework in conjunction with contextual visual data for the efficient localization of objects in still images. Recent research has demonstrated substantial progress in object localization and related tasks for computer vision. However, many current state-of-the-art object localization procedures still suffer from inaccuracy and inefficiency, in addition to failing to provide a principled and interpretable system amenable to high-level vision tasks. We address these issues with the current research. Our method encompasses an active search procedure that uses contextual data to generate initial bounding-box proposals for a target object. We train a convolutional neural network to approximate an offset distance from the target object. Next, we use a Gaussian Process to model this offset response signal over the search space of the target. We then employ a Bayesian active search for accurate localization of the target. In experiments, we compare our approach to a state-of-the-art bounding-box regression method for a challenging pedestrian localization task. Our method exhibits a substantial improvement over this baseline regression method.
    Comment: 10 pages, 4 figures
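The following sketch illustrates the general Gaussian-Process-guided active search idea under simplifying assumptions: the CNN offset predictor is replaced by a synthetic distance function, scikit-learn's GaussianProcessRegressor stands in for the GP model, and a lower-confidence-bound rule picks the next probe. It is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of GP-guided active localization: the CNN "offset" predictor is replaced
# by a synthetic function (distance to a hidden target), and the next probe is the
# candidate minimizing a lower confidence bound of the GP posterior.

rng = np.random.default_rng(0)
target = np.array([0.7, 0.3])                          # hidden object centre (toy)
def offset_response(x):                                # stand-in for the CNN output
    return np.linalg.norm(x - target) + 0.01 * rng.normal()

candidates = rng.uniform(0, 1, size=(500, 2))          # bounding-box centre proposals
X = list(rng.uniform(0, 1, size=(5, 2)))               # a few initial probes
y = [offset_response(x) for x in X]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for _ in range(20):                                    # active search loop
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmin(mu - 1.0 * sigma)]      # lower confidence bound
    X.append(nxt)
    y.append(offset_response(nxt))

print("best probe:", X[int(np.argmin(y))], "true target:", target)
```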

    The cosmological simulation code GADGET-2

    We discuss the cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). Our implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the 'tree' method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different timesteps. Individual and adaptive short-range timesteps may also be employed. The domain decomposition used in the parallelisation algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. We present the algorithms used by the code and discuss their accuracy and performance using a number of test problems. GADGET-2 is publicly released to the research community.
    Comment: submitted to MNRAS, 31 pages, 20 figures (reduced resolution), code available at http://www.mpa-garching.mpg.de/gadge
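A minimal sketch of the split-timestep idea (long-range kicks on a coarse step, short-range kicks subcycled inside a kick-drift-kick leapfrog), assuming toy placeholder force functions; it only illustrates the scheme, not GADGET-2's actual integrator.

```python
import numpy as np

# Illustration (not GADGET-2 code) of the split-timestep idea: long-range ("PM")
# kicks applied on a coarse step, short-range ("tree") kicks subcycled inside it,
# in a kick-drift-kick leapfrog. Force functions here are hypothetical placeholders.

def long_range_force(x):   return -0.1 * x                      # stand-in for the Fourier/PM part
def short_range_force(x):  return -x / (np.abs(x)**3 + 1e-2)    # stand-in for the tree part

def step(x, v, dt_long, n_sub):
    v += 0.5 * dt_long * long_range_force(x)        # long-range half kick
    dt_short = dt_long / n_sub
    for _ in range(n_sub):                          # subcycled short-range KDK
        v += 0.5 * dt_short * short_range_force(x)
        x += dt_short * v                           # drift
        v += 0.5 * dt_short * short_range_force(x)
    v += 0.5 * dt_long * long_range_force(x)        # long-range half kick
    return x, v

x, v = np.array([1.0, -2.0]), np.array([0.0, 0.1])
for _ in range(100):
    x, v = step(x, v, dt_long=0.05, n_sub=4)
print(x, v)
```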

    General scores for accessibility and inequality measures in urban areas

    In the last decades, the acceleration of urban growth has led to an unprecedented level of urban interactions and interdependence. This situation calls for a significant effort among the scientific community to come up with engaging and meaningful visualizations and accessible scenario simulation engines. The present paper gives a contribution in this direction by providing general methods to evaluate accessibility in cities based on public transportation data. Through the notion of isochrones, the proposed accessibility measures quantify the performance of transport systems at connecting places and people in urban systems. We then introduce scores that rank cities according to their overall accessibility. We highlight significant inequalities in the distribution of these measures across the population, which are found to be strikingly similar across various urban environments. Our results are released through the interactive platform www.citychrone.org, aimed at providing the community at large with a useful tool for awareness and decision-making.
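As a generic illustration (not the paper's exact scoring), the sketch below computes an isochrone-based accessibility score: travel times from a stop are obtained with Dijkstra over a toy public-transport graph, and the score is the population reachable within a time budget.

```python
import heapq

# Generic sketch of an isochrone-based accessibility score (not the paper's exact
# formulas): population reachable from a starting stop within a time budget,
# using Dijkstra over a toy public-transport graph with travel times in minutes.

graph = {                                   # stop -> [(neighbour, minutes), ...]
    "A": [("B", 5), ("C", 12)],
    "B": [("A", 5), ("C", 4), ("D", 9)],
    "C": [("A", 12), ("B", 4), ("D", 6)],
    "D": [("B", 9), ("C", 6)],
}
population = {"A": 1000, "B": 4000, "C": 2500, "D": 800}   # toy catchment populations

def travel_times(src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def accessibility(src, budget_min=15):
    """Population inside the budget_min-minute isochrone around src."""
    times = travel_times(src)
    return sum(population[s] for s, t in times.items() if t <= budget_min)

print({s: accessibility(s) for s in graph})
```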

    Extreme Scale De Novo Metagenome Assembly

    Metagenome assembly is the process of transforming a set of short, overlapping, and potentially erroneous DNA segments from environmental samples into an accurate representation of the underlying microbiomes' genomes. State-of-the-art tools require big shared-memory machines and cannot handle contemporary metagenome datasets that exceed terabytes in size. In this paper, we introduce the MetaHipMer pipeline, a high-quality and high-performance metagenome assembler that employs an iterative de Bruijn graph approach. MetaHipMer leverages a specialized scaffolding algorithm that produces long scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is end-to-end parallelized using the Unified Parallel C language and therefore can run seamlessly on shared- and distributed-memory systems. Experimental results show that MetaHipMer matches or outperforms state-of-the-art tools in terms of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and is able to assemble previously intractable grand-challenge metagenomes. We demonstrate the unprecedented capability of MetaHipMer by computing the first full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion reads with a total size of 2.6 TBytes.
    Comment: Accepted to SC1
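For intuition about the de Bruijn graph approach, here is a toy single-process sketch (nothing like the distributed UPC pipeline): k-mers become edges between (k-1)-mer nodes, and non-branching paths are walked into contigs.

```python
from collections import defaultdict

# Toy illustration of the de Bruijn graph idea behind assemblers such as MetaHipMer
# (plain single-process Python): k-mers become edges between (k-1)-mer nodes, and
# non-branching paths are walked into contigs.

def build_graph(reads, k):
    graph = defaultdict(list)
    for r in reads:
        for i in range(len(r) - k + 1):
            kmer = r[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])              # prefix -> suffix edge
    return {u: sorted(set(vs)) for u, vs in graph.items()} # de-duplicate edges

def contigs(graph):
    indeg = defaultdict(int)
    for vs in graph.values():
        for v in vs:
            indeg[v] += 1
    out = []
    for u in graph:
        if indeg[u] != 1 or len(graph[u]) != 1:            # u starts a non-branching path
            for v in graph[u]:
                path = u + v[-1]
                while indeg[v] == 1 and len(graph.get(v, [])) == 1:
                    v = graph[v][0]
                    path += v[-1]
                out.append(path)
    return out

reads = ["ACGTACGA", "GTACGATT", "CGATTGCA"]
print(contigs(build_graph(reads, k=4)))                    # e.g. ['ACGTACG', 'ACGATTGCA']
```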

    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points, and an optimization of the insertion algorithm have allowed us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to insert vertices concurrently. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors, an Intel Core i7, an Intel Xeon Phi, and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.
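The sketch below illustrates the space-filling-curve idea for ordering and partitioning points, using Morton (Z-order) keys as a simpler stand-in for the Moore curve used in the paper: sorting by the key yields a spatially coherent insertion order, and contiguous key ranges give per-thread partitions.

```python
import random

# Sketch of the space-filling-curve idea used for point sorting/partitioning
# (Morton/Z-order here as a simpler stand-in for the Moore curve in the paper):
# points get an interleaved-bit key, sorting by it gives spatial locality, and
# contiguous key ranges give per-thread partitions.

def morton_key(x, y, bits=16):
    """Interleave the bits of the quantized x and y coordinates (points in [0,1]^2)."""
    xi, yi = int(x * ((1 << bits) - 1)), int(y * ((1 << bits) - 1))
    key = 0
    for b in range(bits):
        key |= ((xi >> b) & 1) << (2 * b) | ((yi >> b) & 1) << (2 * b + 1)
    return key

points = [(random.random(), random.random()) for _ in range(1000)]
points.sort(key=lambda p: morton_key(*p))            # spatially coherent insertion order

n_threads = 4                                        # contiguous chunks = partitions
bounds = [round(i * len(points) / n_threads) for i in range(n_threads + 1)]
partitions = [points[bounds[i]:bounds[i + 1]] for i in range(n_threads)]
print([len(p) for p in partitions])
```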

    Multi-agent simulation: new approaches to exploring space-time dynamics in GIS

    As part of the long-term quest to develop more disaggregate, temporally dynamic models of spatial behaviour, micro-simulation has evolved to the point where the actions of many individuals can be computed. These multi-agent systems/simulation (MAS) models are a consequence of much better micro data, more powerful and user-friendly computer environments often based on parallel processing, and the generally recognised need in spatial science for modelling temporal processes. In this paper, we develop a series of multi-agent models which operate in cellular space. These demonstrate the well-known principle that local action can give rise to global pattern, but also how such pattern emerges as the consequence of positive feedback and learned behaviour. We first summarise the way cellular representation is important in adding new process functionality to GIS, and the way this is effected through ideas from cellular automata (CA) modelling. We then outline the key ideas of multi-agent simulation, and this sets the scene for three applications to problems involving the use of agents to explore geographic space. We first illustrate how agents can be programmed to search route networks, finding shortest routes in ad hoc as well as structured ways equivalent to the operation of the Bellman-Dijkstra algorithm. We then demonstrate how the agent-based approach can be used to simulate the dynamics of water flow, implying that such models can be used to effectively model the evolution of river systems. Finally, we show how agents can detect the geometric properties of space, generating powerful results that are not possible using conventional geometry, and we illustrate these ideas by computing the visual fields or isovists associated with different viewpoints within the Tate Gallery. Our forays into MAS are all based on developing reactive agent models with minimal interaction, and we conclude with suggestions for how these models might incorporate cognition, planning, and stronger positive feedbacks between agents.
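As a small illustration of agents probing the geometry of space, the sketch below approximates an isovist on a toy occupancy grid by stepping rays outward from a viewpoint until they hit a wall; it is illustrative only, not the implementation used for the Tate Gallery study.

```python
import math

# A minimal isovist sketch on an occupancy grid (illustrative only): rays are
# stepped outward from a viewpoint until they hit a wall, and the set of cells
# crossed approximates the visual field from that point.

grid = [                                     # 1 = wall, 0 = open space
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def isovist(grid, vx, vy, n_rays=360, step=0.1):
    visible = set()
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        x, y = vx + 0.5, vy + 0.5            # start at the centre of the viewpoint cell
        while True:
            x, y = x + step * math.cos(a), y + step * math.sin(a)
            cx, cy = int(x), int(y)
            if grid[cy][cx] == 1:            # ray blocked by a wall
                break
            visible.add((cx, cy))
    return visible

print(len(isovist(grid, vx=2, vy=2)), "cells visible from (2,2)")
```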

    Adaptive Massively Parallel Constant-Round Tree Contraction

    Miller and Reif's FOCS'85 classic and fundamental tree contraction algorithm is a broadly applicable technique for the parallel solution of a large number of tree problems. It is also used as an algorithmic design technique for a large number of parallel graph algorithms. In all previously explored models of computation, however, tree contractions have only been achieved in Ω(log n) rounds of parallel run time. In this work, we not only introduce a generalized tree contraction method but also show it can be computed highly efficiently in O(1/ε^3) rounds in the Adaptive Massively Parallel Computing (AMPC) setting, where each machine has O(n^ε) local memory for some 0 < ε < 1. AMPC is a practical extension of Massively Parallel Computing (MPC) which utilizes distributed hash tables. In general, MPC is an abstract model for MapReduce, Hadoop, Spark, and Flume, which are currently widely used across industry, and it has been studied extensively in the theory community in recent years. Last but not least, we show that our results extend to multiple problems on trees, including but not limited to maximum and maximal matching, maximum and maximal independent set, tree isomorphism testing, and more.
    Comment: 35 pages, 3 figures, to be published in Innovations in Theoretical Computer Science (ITCS
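For intuition, here is a sequential toy simulation of classic rake-and-compress tree contraction (not the AMPC algorithm): each round rakes all leaves and splices out a coin-flip-chosen independent set of chain nodes, so a tree shrinks to its root in O(log n) rounds with high probability.

```python
import random

# Sequential toy simulation (intuition only, not the AMPC algorithm) of classic
# rake-and-compress tree contraction: each round removes all leaves (rake) and
# splices out an independent set of chain nodes chosen by coin flips (compress).

def contract(parent, root):
    """parent: dict mapping each non-root node to its parent. Returns the round count."""
    alive = set(parent) | {root}
    rounds = 0
    while len(alive) > 1:
        children = {v: set() for v in alive}
        for v in alive:
            if v != root:
                children[parent[v]].add(v)
        # rake: delete every leaf
        alive -= {v for v in alive if v != root and not children[v]}
        # compress: chain nodes (one remaining child, not the root) flip coins;
        # a heads node whose child did not also flip heads gets spliced out
        chain = {v for v in alive if v != root and len(children[v] & alive) == 1}
        heads = {v for v in chain if random.random() < 0.5}
        spliced = {p for p in heads if next(iter(children[p] & alive)) not in heads}
        for v in alive:
            if v != root and parent[v] in spliced:
                parent[v] = parent[parent[v]]          # jump over the spliced node
        alive -= spliced
        rounds += 1
    return rounds

# a path of 63 nodes hanging below the root: rake alone would need 63 rounds
path_parent = {i: i - 1 for i in range(1, 64)}
print(contract(path_parent, root=0), "rounds")
```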