
    On the complexity of multi-round divisible load scheduling

    In this paper we study master-worker scheduling of divisible loads in heterogeneous distributed systems. Divisible loads are computations that can be arbitrarily divided into independent "chunks", which can then be processed in parallel. In multi-round scheduling, load is sent to each worker as several chunks rather than as a single one. Solving the divisible load scheduling (DLS) problem entails determining the subset of workers that should be used, the sequence of communication to these workers, and the size of each load chunk. We first state and establish an optimality principle for the general case. Then we establish a new complexity result by showing that a DLS problem, whose complexity had long been open, is in fact NP-hard, even in the one-round case. We also show that this problem is solvable in pseudopolynomial time under certain special conditions. Finally, we present an in-depth survey of algorithms and heuristics for solving the multi-round DLS problem.
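As an illustration of the one-round setting the abstract describes (not the paper's own algorithm), the sketch below computes chunk sizes under a common linear cost model: the master sends to workers in a fixed order, worker i pays g[i] seconds per load unit to receive and w[i] seconds per unit to compute, and chunk sizes are chosen so all workers finish at the same time. All parameter values are made-up examples.

```python
# Illustrative sketch of one-round divisible load scheduling under a
# linear cost model (an assumption for this example, not the paper's model).
# Equal finish times give the recurrence
#   a[i+1] = a[i] * w[i] / (g[i+1] + w[i+1]),
# after which the chunks are scaled so they sum to the total load W.

def one_round_chunks(W, g, w):
    """Chunk sizes for a fixed worker order; g and w are per-unit costs."""
    a = [1.0]
    for i in range(1, len(w)):
        a.append(a[-1] * w[i - 1] / (g[i] + w[i]))
    scale = W / sum(a)
    return [x * scale for x in a]

# Example: three heterogeneous workers sharing 100 units of load.
chunks = one_round_chunks(100.0, g=[1.0, 1.0, 2.0], w=[5.0, 3.0, 4.0])
print(chunks)
```

Note that this fixes the worker subset and communication order in advance; the NP-hardness result cited above concerns precisely the harder problem where those choices are part of the input.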

    Large-Scale Parallelization of Human Migration Simulation

    Forced displacement of people worldwide, for example due to violent conflicts, is common in the modern world: today more than 82 million people are forcibly displaced. This puts migration at the forefront of humanity's most important problems. The Flee simulation code is an agent-based modeling tool that can forecast population displacement in civil war settings, but performing accurate simulations requires non-negligible computational capacity. In this article, we present our approach to parallelizing Flee for fast execution on multicore platforms, and discuss the computational complexity of the algorithm and its implementation. We benchmark the parallelized code on supercomputers equipped with AMD EPYC Rome 7742 and Intel Xeon Platinum 8268 processors, and investigate its performance across a range of alternative rule sets, different refinements of the spatial representation, and various numbers of agents representing displaced persons. We find that Flee scales well up to 8,192 cores for large cases, although very detailed location graphs can impose a large initialization-time overhead.
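One basic ingredient of parallelizing an agent-based simulation across many cores is deciding which rank owns which agents. The sketch below shows a simple static block partition; the function name and scheme are hypothetical illustrations, not Flee's actual implementation.

```python
# Hypothetical sketch: static block partition of agents across MPI-style
# ranks. Each rank owns a contiguous, near-equal block of agent indices.

def agent_range(n_agents, n_ranks, rank):
    """Return the block of agent indices owned by `rank`."""
    base, rem = divmod(n_agents, n_ranks)
    # The first `rem` ranks each take one extra agent.
    start = rank * base + min(rank, rem)
    count = base + (1 if rank < rem else 0)
    return range(start, start + count)

# Every agent is owned by exactly one rank; block sizes differ by at most 1.
ranges = [agent_range(1_000_003, 64, r) for r in range(64)]
```

A static partition like this keeps initialization cheap, but as the abstract notes, very detailed location graphs can still dominate startup cost, and uneven agent activity may call for dynamic load balancing instead.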

    Open Access to the Digital Biodiversity Database: A Comprehensive Functional Model of the Natural History Collections

    The Natural History Collections of Adam Mickiewicz University (AMUNATCOLL) in Poznań contain over 2.2 million specimens. Until recently, access to the collections was limited to specialists and was hampered by analogue data files. This paper therefore presents a new approach to data sharing called the Scientific, Educational, Public, and Practical Use (SEPP) Model. Since the stakeholder group is broad, the SEPP Model rests on the following key points: full open access to the digitized collections, a metadata structure compliant with established standards, and a versatile tool set for data mining as well as statistical and spatial analysis. The SEPP Model was implemented in the AMUNATCOLL IT system, which consists of a web portal offering a wide set of exploratory functionalities tailored to different user groups: scientists, students, officials, and nature enthusiasts. An integral part of the system is a mobile application designed for field surveys, enabling users to compare their own field data with AMUNATCOLL data. The AMUNATCOLL IT database contains digital data on specimens, biological samples, bibliographic sources, and multimedia nature documents. The metadata structure was developed in accordance with the ABCD 2.06 and Darwin Core standards.
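To illustrate what Darwin Core compliance means in practice, the sketch below maps an internal specimen record onto a handful of standard Darwin Core term names. The internal field names, the sample values, and the required-term list are assumptions for the example; this is not the AMUNATCOLL schema.

```python
# Illustrative sketch: exporting a specimen record using standard Darwin
# Core term names (scientificName, basisOfRecord, eventDate, etc.).
# The internal dict layout and required-term list are assumptions.

REQUIRED_DWC_TERMS = ("scientificName", "basisOfRecord", "eventDate")

def to_darwin_core(specimen):
    """Map an internal specimen dict onto Darwin Core term names."""
    record = {
        "scientificName": specimen["taxon"],
        "basisOfRecord": "PreservedSpecimen",
        "eventDate": specimen["collected_on"],
        "decimalLatitude": specimen.get("lat"),
        "decimalLongitude": specimen.get("lon"),
    }
    missing = [t for t in REQUIRED_DWC_TERMS if not record.get(t)]
    if missing:
        raise ValueError(f"missing required Darwin Core terms: {missing}")
    return record

rec = to_darwin_core({
    "taxon": "Parnassius apollo",
    "collected_on": "1958-07-12",
    "lat": 52.41,
    "lon": 16.93,
})
```

Using a shared vocabulary like this is what lets records from different collections be aggregated and queried uniformly, which is the point of the standards the abstract cites.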