
    Trade-Offs in Distributed Interactive Proofs

    The study of interactive proofs in the context of distributed network computing is a novel topic, recently introduced by Kol, Oshman, and Saxena [PODC 2018]. In the spirit of the theory of sequential interactive proofs, we study the power of distributed interactive proofs. This is achieved via a series of results establishing trade-offs between various parameters impacting the power of interactive proofs, including the number of interactions, the certificate size, the communication complexity, and the form of randomness used. Our results also connect distributed interactive proofs with the established field of distributed verification. In general, our results contribute to providing structure to the landscape of distributed interactive proofs.
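
    As an illustration of the kind of distributed verification the abstract refers to, the sketch below simulates a one-round scheme in which each node holds a small certificate (here, a parent pointer and a distance) and performs a purely local check. The graph, the certificates, and the spanning-tree example are hypothetical and not taken from the paper, which studies multi-round interactive protocols.

        # Minimal sketch of certificate-based distributed verification
        # (proof-labeling-scheme style), simulated sequentially in Python.
        # Node names, graph, and certificates below are made up.

        def verify_spanning_tree(graph, root, certificates):
            """graph: {node: set(neighbours)}; certificates: {node: (parent, dist)}.
            The proof is accepted only if every node's local check passes."""
            for node, neighbours in graph.items():
                parent, dist = certificates[node]
                if node == root:
                    if parent is not None or dist != 0:
                        return False      # the root must certify distance 0
                else:
                    if parent not in neighbours:
                        return False      # the claimed parent must be a real neighbour
                    _, parent_dist = certificates[parent]
                    if dist != parent_dist + 1:
                        return False      # distances must strictly decrease towards the root
            return True

        # Example: path a - b - c rooted at a.
        graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
        certs = {"a": (None, 0), "b": ("a", 1), "c": ("b", 2)}
        print(verify_spanning_tree(graph, "a", certs))    # True

    In this setting, the certificate size and the amount of neighbour-to-neighbour communication are among the parameters whose trade-offs the paper studies.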

    LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing

    LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.
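
    As a rough, language-agnostic illustration of what a task-based programming model means (this is not LEGaTO's actual toolset or API), the snippet below expresses work as independent tasks handed to a runtime that decides when and where to run them; the worker function and data are made up.

        # Hedged sketch: task-based decomposition, with Python's standard
        # ThreadPoolExecutor standing in for the task runtime.
        from concurrent.futures import ThreadPoolExecutor

        def preprocess(chunk):
            return [x * 2 for x in chunk]          # one self-contained task

        data = [list(range(i, i + 4)) for i in range(0, 16, 4)]

        with ThreadPoolExecutor(max_workers=4) as pool:            # the "runtime"
            futures = [pool.submit(preprocess, c) for c in data]   # declare tasks
            total = sum(sum(f.result()) for f in futures)          # implicit join
        print(total)   # 240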

    Resource-constrained project scheduling.

    Resource-constrained project scheduling involves the scheduling of project activities subject to precedence and resource constraints in order to meet the objective(s) in the best possible way. The area covers a wide variety of problem types. The objective of this paper is to provide a survey of what we believe are important recent developments in the area. Our main focus is on the recent progress made in, and the encouraging computational experience gained with, the use of optimal solution procedures for the basic resource-constrained project scheduling problem (RCPSP) and important extensions. The RCPSP involves scheduling a project to minimize its duration subject to zero-lag finish-start precedence constraints of the PERT/CPM type and constant availability constraints on the required set of renewable resources. We discuss recent striking advances in dealing with this problem using a new depth-first branch-and-bound procedure, elaborating on the effective and efficient branching scheme, bounding calculations and dominance rules, and discuss the potential of using truncated branch-and-bound. We derive a set of conclusions from the research on optimal solution procedures for the basic RCPSP and subsequently illustrate how effective and efficient branching rules and several of the strong dominance and bounding arguments can be extended to a rich and realistic variety of related problems.

    The preemptive resource-constrained project scheduling problem (PRCPSP) relaxes the nonpreemption condition of the RCPSP, thus allowing activities to be interrupted at integer points in time and resumed later without additional penalty cost. The generalized resource-constrained project scheduling problem (GRCPSP) extends the RCPSP to precedence-diagramming types of precedence constraints (minimal finish-start, start-start, start-finish and finish-finish precedence relations), activity ready times, deadlines and variable resource availabilities. The resource-constrained project scheduling problem with generalized precedence relations (RCPSP-GPR) allows for start-start, finish-start and finish-finish constraints with minimal and maximal time lags. The MAX-NPV problem aims at scheduling project activities in order to maximize the net present value of the project in the absence of resource constraints. The resource-constrained project scheduling problem with discounted cash flows (RCPSP-DC) aims at the same non-regular objective in the presence of resource constraints. The resource availability cost problem (RACP) aims at determining the cheapest resource availability amounts for which a feasible solution exists that does not violate the project deadline. In the discrete time/cost trade-off problem (DTCTP) the duration of an activity is a discrete, non-increasing function of the amount of a single nonrenewable resource committed to it. In the discrete time/resource trade-off problem (DTRTP) the duration of an activity is a discrete, non-increasing function of the amount of a single renewable resource; each activity must then be scheduled in one of its possible execution modes. In addition to time/resource trade-offs, the multi-mode resource-constrained project scheduling problem (MRCPSP) allows for resource/resource trade-offs and constraints on renewable, nonrenewable and doubly-constrained resources.

    We report on recent computational results and end with overall conclusions and suggestions for future research.
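
    To make the basic RCPSP concrete, the sketch below builds a tiny made-up instance and schedules it with a greedy serial schedule-generation scheme; this simple heuristic only illustrates the precedence and renewable-resource constraints, not the branch-and-bound procedures the survey discusses.

        # Made-up RCPSP instance: activity -> (duration, {resource: demand}, predecessors)
        activities = {
            "A": (3, {"R1": 2}, []),
            "B": (2, {"R1": 1}, ["A"]),
            "C": (4, {"R1": 2}, ["A"]),
            "D": (1, {"R1": 1}, ["B", "C"]),
        }
        capacity = {"R1": 3}   # constant renewable-resource availability per period

        def topological_order(acts):
            """Return the activities in a precedence-compatible order."""
            order, placed = [], set()
            while len(order) < len(acts):
                for a, (_, _, preds) in acts.items():
                    if a not in placed and all(p in placed for p in preds):
                        order.append(a)
                        placed.add(a)
            return order

        def fits(t, dur, demand, usage, capacity):
            """Check that starting the activity at time t respects all capacities."""
            return all(
                usage.get(period, {}).get(r, 0) + d <= capacity[r]
                for period in range(t, t + dur)
                for r, d in demand.items()
            )

        def serial_sgs(acts, capacity):
            """Greedy heuristic: place each activity at its earliest
            precedence- and resource-feasible start; return starts and makespan."""
            start, finish, usage = {}, {}, {}
            for a in topological_order(acts):
                dur, demand, preds = acts[a]
                t = max((finish[p] for p in preds), default=0)
                while not fits(t, dur, demand, usage, capacity):
                    t += 1
                for period in range(t, t + dur):
                    for r, d in demand.items():
                        usage.setdefault(period, {})[r] = usage[period].get(r, 0) + d
                start[a], finish[a] = t, t + dur
            return start, max(finish.values())

        print(serial_sgs(activities, capacity))
        # ({'A': 0, 'B': 3, 'C': 3, 'D': 7}, 8)

    Optimal procedures such as the depth-first branch-and-bound discussed in the paper would instead explore alternative scheduling and delaying decisions, using bounding calculations and dominance rules to prune the search.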

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments to archive data without enough redundancy. Most redundancy schemes are not completely effective for providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability, durability and additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
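
    The abstract describes the codes only at a high level, so the toy sketch below shows the underlying entanglement idea with a single XOR chain: every new data block is mixed into a running parity, so any block can be rebuilt from its two neighbouring parities. The real codes extend this with the alpha parameter (and two others) to multiply the recovery paths; the single chain here is purely illustrative, not the paper's construction.

        # Toy single-chain entanglement (not the paper's full construction):
        # p_0 = 0 and p_{i+1} = p_i XOR d_i, so d_i = p_i XOR p_{i+1}.

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def entangle(blocks):
            """Return the parity chain; each parity entangles all blocks so far."""
            parities = [bytes(len(blocks[0]))]        # p_0: all-zero block
            for d in blocks:
                parities.append(xor(parities[-1], d))
            return parities

        def recover(i, parities):
            """Rebuild lost data block blocks[i] from the two parities around it."""
            return xor(parities[i], parities[i + 1])

        data = [b"blk1", b"blk2", b"blk3"]
        parities = entangle(data)
        assert recover(1, parities) == b"blk2"        # lost block recovered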

    Performance Analysis of a Novel GPU Computation-to-core Mapping Scheme for Robust Facet Image Modeling

    Though the GPGPU concept is well known in image processing, much more work remains to be done to fully exploit GPUs as an alternative computation engine. This paper investigates computation-to-core mapping strategies to probe the efficiency and scalability of the robust facet image modeling algorithm on GPUs. Our fine-grained computation-to-core mapping scheme shows a significant performance gain over the standard pixel-wise mapping scheme. With in-depth performance comparisons across the two different mapping schemes, we analyze the impact of the level of parallelism on the GPU computation and suggest two principles for optimizing future image processing applications on the GPU platform.
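
    Since the abstract contrasts the standard pixel-wise mapping with a finer computation-to-core mapping, the sketch below shows the difference as plain index arithmetic; the choice of K sub-computations per pixel (for example, one per facet-window element) is an assumption made for illustration and does not reproduce the paper's exact scheme.

        # Hypothetical illustration of two computation-to-core mappings.
        W, H, K = 8, 4, 9      # image width/height, assumed sub-computations per pixel

        def pixel_wise(thread_id):
            """Standard mapping: one thread handles all work for one pixel."""
            return thread_id % W, thread_id // W              # (x, y)

        def fine_grained(thread_id):
            """Finer mapping: K threads cooperate on one pixel, one per
            sub-computation, exposing more parallelism per pixel."""
            pixel, subtask = divmod(thread_id, K)
            return pixel % W, pixel // W, subtask             # (x, y, sub-task)

        print(pixel_wise(13))      # (5, 1)
        print(fine_grained(13))    # (1, 0, 4)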

    Parks, Buffer Zones, and Costly Enforcement

    The reality of protected area management is that enforcing forest and park boundaries is costly and therefore most likely incomplete, due in part to the pressures exerted on the boundaries by local people who have often traditionally relied on the park's resources. Buffer zones are increasingly being proposed and implemented to protect both forest resources and livelihoods. Developing a spatially explicit optimal enforcement model, this paper demonstrates that there is a trade-off between the amount spent on enforcement, the size of a formal buffer zone, and the extent to which a forest can be protected from illegal extraction. Indeed, given the reality of limited enforcement budgets, a forest manager with a mandate to protect the whole forest may in fact end up doing a worse job than one who incorporates an appropriately sized buffer zone into the management plan; combined with more effective enforcement of the smaller exclusion zone, the buffer zone provides the right incentives for villagers to extract only in the periphery of the forest rather than venture deeper into it.