
    Multi-criteria scheduling of pipeline workflows

    Mapping workflow applications onto parallel platforms is a challenging problem, even for simple application patterns such as pipeline graphs. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination of both). In this paper, we study the complexity of the bi-criteria mapping problem for pipeline graphs on communication-homogeneous platforms. In particular, we assess the complexity of the well-known chains-to-chains problem for different-speed processors, which turns out to be NP-hard. We provide several efficient polynomial bi-criteria heuristics, and we evaluate their relative performance through extensive simulations.
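    For context, the homogeneous-speed instance of the chains-to-chains problem is the classical polynomial case, in contrast with the NP-hard different-speed variant studied in the paper. A minimal sketch of that classical instance follows, using binary search over the bottleneck load with a greedy feasibility test; the function name and the integer-weight assumption are illustrative choices of ours, not the paper's.

```python
def chains_to_chains(work, p):
    """Classical chains-to-chains partitioning: split the list of stage
    weights `work` into at most `p` consecutive intervals so that the
    maximum interval load (the period, i.e. inverse throughput) is
    minimized. Binary search over the bottleneck value with a greedy
    feasibility test; assumes integer weights. Polynomial here, whereas
    the paper shows the different-speed variant is NP-hard."""
    def feasible(bound):
        parts, load = 1, 0
        for w in work:
            if load + w > bound:        # current interval is full, open a new one
                parts, load = parts + 1, 0
            load += w
        return parts <= p

    lo, hi = max(work), sum(work)       # bounds on the optimal bottleneck
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(chains_to_chains([4, 2, 7, 1, 3, 5], 3))  # -> 8, e.g. [4,2] [7,1] [3,5]
```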

    Reclaiming the energy of a schedule: models and algorithms

    We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to an NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend to the general DVFS model. Comment: A two-page extended abstract of this work appeared as a short presentation in SPAA'2011, while the long version has been accepted for publication in "Concurrency and Computation: Practice and Experience".
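    As a worked special case of the continuous model, consider a single linear chain on one processor with power s^alpha: by convexity of the power function, one constant speed that meets the deadline exactly is optimal. The sketch below computes it; the exponent alpha = 3 and all names are illustrative assumptions, not the paper's notation.

```python
def chain_energy(work, deadline, alpha=3.0):
    """Continuous-speed model: running a task of work w at speed s takes
    w/s time and consumes (w/s) * s**alpha = w * s**(alpha - 1) energy.
    For a linear chain (one processor, fixed order), convexity makes a
    single constant speed s = W / D optimal, where W is the total work
    and D the deadline. This mirrors the closed-form flavor of the
    paper's result for trees and series-parallel graphs."""
    W = sum(work)
    s = W / deadline                                   # constant optimal speed
    energy = sum(w * s ** (alpha - 1) for w in work)   # = W * s**(alpha - 1)
    return s, energy

# Three tasks of work 2, 3, 5 and a deadline of 5 time units:
# speed 2.0, energy 10 * 2**2 = 40 with alpha = 3.
print(chain_energy([2, 3, 5], 5.0))
```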

    The calcium-sensing receptor as a regulator of cellular fate in normal and pathological conditions

    The calcium-sensing receptor (CaSR) belongs to the evolutionarily conserved family of plasma membrane G protein-coupled receptors (GPCRs). Early studies identified an essential role for the CaSR in systemic calcium homeostasis through its ability to sense small changes in circulating calcium concentration and to couple this information to intracellular signaling pathways that influence parathyroid hormone secretion. However, the presence of CaSR protein in tissues that are not directly involved in regulating mineral ion homeostasis points to a role for the CaSR in other cellular functions, including the control of cellular proliferation, differentiation and apoptosis. This position at the crossroads of cellular fate designates the CaSR as an interesting study subject that is likely to be involved in a variety of previously unconsidered human pathologies, including cancer, atherosclerosis and Alzheimer's disease. Here, we will review the recent discoveries regarding the relevance of CaSR signaling in development and disease. Furthermore, we will discuss the rationale for developing and using CaSR-based therapeutics.

    Co-Scheduling Algorithms for High-Throughput Workload Execution

    This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion time and to faster response times on average (hence increasing system throughput and saving energy), for significant benefits over traditional scheduling from both the user and system perspectives.
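    A minimal sketch of the pack-building idea follows, as a first-fit greedy over applications with fixed processor allotments; this is a hypothetical baseline of ours, not one of the paper's heuristics (which also choose the allotments).

```python
def greedy_packs(apps, P):
    """Hypothetical first-fit pack-building heuristic for the co-scheduling
    problem above. `apps` is a list of (procs, time) pairs with a fixed
    processor allotment per application (each procs <= P assumed); a pack
    costs as much as its longest application, and the objective is the sum
    of pack costs over all packs."""
    packs = []                                    # each pack: [used_procs, cost, members]
    for procs, time in sorted(apps, key=lambda a: -a[1]):
        for pack in packs:                        # first pack with enough free processors
            if pack[0] + procs <= P:
                pack[0] += procs
                pack[1] = max(pack[1], time)      # a pack lasts as long as its slowest app
                pack[2].append((procs, time))
                break
        else:                                     # no pack fits: open a new one
            packs.append([procs, time, [(procs, time)]])
    return sum(pack[1] for pack in packs), packs

cost, packs = greedy_packs([(4, 10.0), (4, 9.0), (2, 3.0), (6, 5.0)], 8)
print(cost)  # 15.0: one pack with the 10.0 and 9.0 apps, one with the 5.0 and 3.0 apps
```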

    Model Driven Mutation Applied to Adaptative Systems Testing

    Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently of specific adaptation languages. Taking advantage of model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements, and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary results on an adaptive web server highlight the difficulty of killing mutants for adaptive systems, and thus the difficulty of generating efficient tests. Comment: IEEE International Conference on Software Testing, Verification and Validation, Mutation Analysis Workshop (Mutation 2011), Berlin, Germany (2011).
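    To make the two fault classes concrete, the sketch below applies two toy mutation operators to an adaptation policy written as condition/action rules; the rule format, the operators, and the example policy are our illustrative stand-ins for the paper's metamodel-level operators.

```python
import copy

def mutate_rules(policy):
    """Illustrative mutation operators for an adaptation policy expressed
    as condition/action rules, echoing the paper's two fault classes:
      - environmental completeness: delete a rule, leaving some
        environment state unhandled;
      - adaptation correctness: swap a rule's action, so the system
        adapts, but incorrectly."""
    mutants = []
    for i, rule in enumerate(policy):
        # deletion mutant: some environment change is no longer handled
        mutants.append([r for j, r in enumerate(policy) if j != i])
        # action-swap mutants: replace the action with another rule's action
        for j, other in enumerate(policy):
            if j != i and other["action"] != rule["action"]:
                m = copy.deepcopy(policy)
                m[i]["action"] = other["action"]
                mutants.append(m)
    return mutants

# A toy policy for an adaptive web server (hypothetical):
policy = [
    {"when": "load > 0.8", "action": "add_server"},
    {"when": "load < 0.2", "action": "remove_server"},
]
print(len(mutate_rules(policy)), "mutants")  # 4 mutants
```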

    Iso-Level CAFT: How to Tackle the Combination of Communication Overhead Reduction and Fault Tolerance Scheduling

    To schedule precedence task graphs in a more realistic framework, we introduce an efficient fault-tolerant scheduling algorithm that is both contention-aware and capable of supporting ε arbitrary fail-silent (fail-stop) processor failures. The design of the proposed algorithm, which we call Iso-Level CAFT, is motivated by (i) the search for a better load balance and (ii) the generation of fewer communications. These goals are achieved by scheduling a chunk of ready tasks simultaneously, which enables a global view of the potential communications. Our goal is to minimize the total execution time, or latency, while tolerating an arbitrary number of processor failures. Our approach is based on an active replication scheme to mask failures, so that there is no need for detecting and handling such failures. Major achievements include a low complexity and a drastic reduction of the number of additional communications induced by the replication mechanism. The experimental results fully demonstrate the usefulness of Iso-Level CAFT.
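    The active-replication idea can be sketched as follows: every task in the current chunk of ready tasks is mapped onto ε + 1 distinct processors, so that any ε fail-stop failures leave at least one live replica. The naive earliest-finish placement below is our simplification and ignores the contention-awareness that Iso-Level CAFT adds.

```python
def replicate_ready_tasks(ready, procs, eps):
    """Sketch of the active-replication idea behind Iso-Level CAFT:
    each (task, work) pair in the current chunk of ready tasks is mapped
    onto eps + 1 distinct processors (assumes eps + 1 <= len(procs)), so
    any eps fail-stop failures are masked without failure detection.
    Placement here is a naive earliest-finish greedy, not the paper's
    contention-aware scheme."""
    finish = {p: 0.0 for p in procs}          # current finish time per processor
    mapping = {}
    for task, work in ready:                  # the whole chunk is placed together
        targets = sorted(procs, key=lambda p: finish[p])[:eps + 1]
        for p in targets:
            finish[p] += work                 # each replica adds its full work
        mapping[task] = targets
    return mapping

procs = ["p1", "p2", "p3", "p4"]
ready = [("t1", 3.0), ("t2", 2.0)]
print(replicate_ready_tasks(ready, procs, eps=1))  # each task on 2 processors
```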

    A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis

    Presenting a complementary perspective to standard books on algorithms, A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis provides a roadmap for readers to determine the difficulty of an algorithmic problem by finding an optimal solution or proving complexity results. It gives a practical treatment of algorithmic complexity and guides readers in solving algorithmic problems. Divided into three parts, the book offers a comprehensive set of problems with solutions as well as in-depth case studies that demonstrate how to assess the complexity of a new problem. Part I helps readers understand the main design principles and design efficient algorithms. Part II covers polynomial reductions from NP-complete problems and approaches that go beyond NP-completeness. Part III supplies readers with tools and techniques to evaluate problem complexity, including how to determine which instances are polynomial and which are NP-hard. Drawing on the authors' classroom-tested material, this text takes readers step by step through the concepts and methods for analyzing algorithmic complexity. Through many problems and detailed examples, readers can investigate polynomial-time algorithms and NP-completeness and beyond.

    An energy recondensation method using the discrete generalized multigroup energy expansion theory

    In this paper, the discrete generalized multigroup (DGM) method was used to recondense the coarse-group cross sections using the core-level solution, thus providing a correction for the neighbor effects found at the core level. This approach was tested using a discrete ordinates implementation in both 1-D and 2-D. Results indicate that 2 or 3 iterations can substantially improve the flux and fission density errors associated with strong interfacial spectral changes, as found in the presence of strong absorbers, reflectors, or mixed-oxide fuel. The methodology is also proven to be fully consistent with the multigroup methodology as long as a flat-flux approximation is used spatially.
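    The recondensation step ultimately reduces to a flux-weighted collapse of fine-group cross sections, with the fine-group flux re-estimated from the core-level coarse solution at each iteration. The sketch below shows only that standard collapse step; the names and data layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def collapse(sigma_fine, phi_fine, groups):
    """Flux-weighted collapse of fine-group cross sections into coarse
    groups: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over each
    coarse group G. In a recondensation loop like the one described above,
    phi_fine would be re-estimated from the core-level coarse solution via
    the DGM expansion at each iteration."""
    out = []
    for g0, g1 in groups:                    # coarse group spans fine groups [g0, g1)
        phi = phi_fine[g0:g1]
        out.append(np.dot(sigma_fine[g0:g1], phi) / phi.sum())
    return np.array(out)

sigma = np.array([1.2, 0.9, 0.4, 0.3])       # fine-group cross sections
phi   = np.array([2.0, 1.0, 3.0, 1.0])       # fine-group flux estimate
print(collapse(sigma, phi, [(0, 2), (2, 4)]))  # -> [1.1, 0.375]
```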