
    Deviation measures in stochastic programming with mixed-integer recourse

    Stochastic programming offers a way to treat uncertainty in decision problems; in particular, it allows the minimization of risk. We consider mean-risk models involving deviation measures, such as the standard deviation and the semideviation, and discuss these risk measures in the framework of stochastic dominance as well as in the framework of coherent risk measures. We derive results on the structure and stability of the resulting optimization problems, with emphasis on models that include integrality requirements on some decision variables. We then propose decomposition algorithms for the mean-risk models under consideration and present numerical results for two stochastic programming applications.
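
    To make the deviation measures concrete, here is a minimal Python sketch (all names are ours, not the paper's) that evaluates a mean-risk objective E[Z] + rho * D[Z] for a discrete scenario distribution, with D chosen as either the standard deviation or the upper semideviation:

```python
import numpy as np

def mean_risk(costs, probs, rho=0.5, measure="semideviation"):
    """Mean-risk objective E[Z] + rho * D[Z] for a discrete scenario
    distribution of recourse costs (illustrative sketch only)."""
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mean = probs @ costs
    if measure == "stddev":
        dev = np.sqrt(probs @ (costs - mean) ** 2)
    elif measure == "semideviation":
        # upper semideviation: penalize only outcomes worse (costlier) than the mean
        dev = probs @ np.maximum(costs - mean, 0.0)
    else:
        raise ValueError(measure)
    return mean + rho * dev

# two equally likely scenarios with recourse costs 10 and 20
print(mean_risk([10.0, 20.0], [0.5, 0.5], rho=0.5))  # 15 + 0.5 * 2.5 = 16.25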

    Static allocation of computation to processors in multicomputers


    A Survey of Pipelined Workflow Scheduling: Models and Algorithms

    A large class of applications needs to execute the same workflow on different data sets of identical size. Efficient execution of such applications requires intelligent distribution of the application components and tasks on a parallel machine, and the execution can be orchestrated using task, data, pipelined, and/or replicated parallelism. The scheduling problem that encompasses all of these techniques is called pipelined workflow scheduling, and it has been widely studied in the last decade. Multiple models and algorithms have been proposed to tackle various programming paradigms, constraints, machine behaviors, and optimization goals. This paper surveys the field by summarizing and structuring known results and approaches.
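
    One of the simplest models in this family can be sketched concretely: mapping a linear pipeline onto identical processors by contiguous intervals so as to minimize the period (the maximum per-processor load, i.e., the inverse of throughput). The dynamic program below is a minimal illustration under the assumption of identical processors and negligible communication costs; it is not any specific algorithm from the survey:

```python
from functools import lru_cache

def min_period(weights, p):
    """Map a linear pipeline of stage weights onto p identical processors
    by contiguous intervals, minimizing the period (max load on any
    processor). A simple O(n^2 * p) dynamic-programming sketch."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    @lru_cache(maxsize=None)
    def best(i, k):  # place stages i..n-1 on k processors
        if k == 1:
            return prefix[n] - prefix[i]
        return min(max(prefix[j] - prefix[i], best(j, k - 1))
                   for j in range(i + 1, n - k + 2))

    return best(0, p)

print(min_period([4, 2, 7, 3, 5], p=2))  # split [4,2,7] | [3,5] -> period 13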

    Baechi: Fast Device Placement of Machine Learning Graphs

    Machine learning graphs (or models) can be challenging or impossible to train when either devices have limited memory or models are large. To split the model across devices, learning-based approaches remain popular. While these produce model placements that train fast on data (i.e., low step times), learning-based model parallelism is time-consuming, taking many hours or days to create a placement plan of operators on devices. We present the Baechi system, the first to adopt an algorithmic approach to the placement problem for running machine learning training graphs on small clusters of memory-constrained devices. We integrate our implementation of Baechi into two popular open-source learning frameworks: TensorFlow and PyTorch. Our experimental results using GPUs show that: (i) Baechi generates placement plans 654x to 206Kx faster than state-of-the-art learning-based approaches, and (ii) the step (training) time of Baechi-placed models is comparable to expert placements in PyTorch, and only up to 6.2% worse than expert placements in TensorFlow. We prove mathematically that our two algorithms are within a constant factor of the optimal. Our work shows that, compared to learning-based approaches, algorithmic approaches can face different challenges in adapting to machine learning systems, but they also offer proven bounds and significant performance benefits.
    Comment: Extended version of SoCC 2020 paper: https://dl.acm.org/doi/10.1145/3419111.342130
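
    As a rough illustration of the algorithmic flavor, consider placing operators greedily on the device with the earliest finish time among those with enough free memory. This is a toy sketch in the spirit of memory-constrained list scheduling, not Baechi's actual algorithms, and it ignores dependencies and inter-device communication:

```python
def place(ops, devices):
    """Greedy earliest-finish placement with per-device memory caps.
    ops: list of (name, compute_time, mem_bytes) in topological order.
    devices: dict mapping device name -> memory capacity in bytes."""
    free_at = {d: 0.0 for d in devices}   # time at which each device is free
    mem_left = dict(devices)
    placement = {}
    for name, t, mem in ops:
        candidates = [d for d in devices if mem_left[d] >= mem]
        if not candidates:
            raise MemoryError(f"no device can hold {name}")
        d = min(candidates, key=lambda d: free_at[d] + t)  # earliest finish
        placement[name] = d
        free_at[d] += t
        mem_left[d] -= mem
    return placement

ops = [("conv1", 3.0, 4e9), ("conv2", 2.0, 3e9), ("fc", 1.0, 2e9)]
print(place(ops, {"gpu0": 6e9, "gpu1": 6e9}))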

    Improved Mixed-Integer Programming Models for Multiprocessor Scheduling with Communication Delays

    We revise existing and introduce new mixed-integer programming models for the Multiprocessor Scheduling Problem with Communication Delays. First, we show how to provably reduce the number of product variables needed to explicitly linearize the so-called packing formulation, which contains bilinear terms. Then, we reveal that the feasible region of almost all existing formulations contains redundant solutions and formulate new constraints to exclude these. At the same time, by exploiting further structural properties, the models are improved in size, strength, and modeling complexity. The discussion of these improvements leads to new, much more compact formulations, which are then experimentally compared with each other and with other formulations from the literature. We set up a realistic scenario with a preprocessing of the task graphs, deliver the gained information equally to all tested models, and evaluate not only running times but also the obtained lower and upper bounds on the makespan objective for unsolved instances of a large-scale benchmark set.
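
    The "product variables" mentioned above come from the standard linearization of bilinear terms: a product z = xy of two binary variables is replaced by linear constraints. In generic notation (ours, not the paper's; the paper's contribution is reducing how many such variables are needed, not this textbook device):

```latex
% Standard linearization of a product of binaries, z = x y:
\begin{align}
z &\le x, \qquad z \le y, \\
z &\ge x + y - 1, \qquad z \ge 0 .
\end{align}
% With x, y binary, these constraints force z = xy even if z is continuous.
```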

    Model for integrating the electricity cost consumption and power demand into aggregate production planning

    The constant increase in electricity tax costs and the mandatory advance contracting of power demand by companies connected to the high-voltage electrical system drive organizations to improve energy planning in their production processes. In addition, market uncertainties mean that stochastic methods alone are insufficient for forecasting future production demand. To fill this gap, this study proposes a model that integrates the costs of electricity consumption and power demand into aggregate production planning while accounting for market uncertainties. The model was applied empirically in the food industry to a family of potato chip products. From the collected data, a demand forecast was produced using the Holt–Winters forecasting model and then used for the aggregate planning. Before modeling, the new energy demand was calculated, and finally the model solution was verified. In the case study, the application made it possible to reduce the workforce by two workers and cut costs by R$ 14,288.00. Moreover, the proposal defined a power demand that minimized both the electric energy costs and the total costs of the aggregate production planning.
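
    The Holt–Winters forecasting step can be illustrated in a few lines of Python using statsmodels (the series below is synthetic and purely illustrative; the study's data, horizon, and parameters differ):

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# toy monthly demand series with trend and yearly seasonality (illustrative only)
rng = np.random.default_rng(0)
months = np.arange(48)
demand = 100 + 2 * months + 15 * np.sin(2 * np.pi * months / 12) \
         + rng.normal(0, 3, 48)

# additive Holt-Winters: level + trend + 12-month seasonal component
model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # 6-month demand forecast feeding the aggregate plan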

    Scheduling time-critical instructions on RISC machines


    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on optimizing the placement of Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and 2) a framework called EdgeON providing a practical tool for real-life deployment optimization.

    In more detail, Edge Computing has grown into a key solution to 5G latency, reliability, and scalability requirements. By bringing computing, storage, and networking resources to the edge of the network, delay-sensitive applications, location-aware systems, and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (mainly due to the number of nodes to be deployed), current backhaul network capabilities, and non-technical placement limitations.

    Common approaches to the placement of edge nodes are based on Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes, and on variations of the Facility Location Problem, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks: they fail to place resources at the network edge in a network-aware manner for 5G ultra-dense networking environments.

    This doctoral thesis rigorously defines the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON that aims to reduce the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The framework implements several placement and optimization strategies, thoroughly assessing their suitability to solve the network-aware ENPP. The core of the framework is an in-house heuristic called Hybrid Simulated Annealing (HSA), which seeks to address the high complexity of the ENPP while avoiding the non-convergent behavior that other traditional heuristics exhibit when applied to similar problems.

    The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic, and the overall applicability of EdgeON. Thorough performance evaluations of the core placement solutions reveal the superiority of HSA compared to widely used heuristics and common edge placement approaches (e.g., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective, and fully extensible tool that operators can use to optimize the placement of computing, storage, and networking infrastructure in the users' vicinity. Therefore, our main contributions not only set strong foundations for a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services and use cases, as well as the extensive existing research on the placement of services and even network service chains at the edge.
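
    The thesis's Hybrid Simulated Annealing is not reproduced here; as a point of reference, a plain simulated-annealing skeleton for a facility-location-style placement looks as follows (all cost terms, parameters, and names are ours):

```python
import math, random

def anneal(sites, cost, n_nodes, t0=1.0, cooling=0.995, iters=20000):
    """Plain simulated annealing for choosing n_nodes edge locations among
    candidate sites; cost(chosen) models deployment plus latency expenses.
    A generic skeleton, not the thesis's Hybrid Simulated Annealing (HSA)."""
    current = set(random.sample(sites, n_nodes))
    cur_cost = cost(current)
    best, best_cost, t = set(current), cur_cost, t0
    for _ in range(iters):
        # neighbor move: swap one chosen site for an unchosen one
        out = random.choice(sorted(current))
        inn = random.choice([s for s in sites if s not in current])
        cand = (current - {out}) | {inn}
        c = cost(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = set(cand), c
        t *= cooling
    return best, best_cost

# toy usage: minimize total Manhattan distance from users to the nearest node
users = [(1, 1), (9, 2), (5, 8)]
sites = [(0, 0), (2, 2), (8, 3), (5, 5), (6, 9)]
def total_dist(chosen):
    return sum(min(abs(ux - x) + abs(uy - y) for x, y in chosen)
               for ux, uy in users)
print(anneal(sites, total_dist, n_nodes=2))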

    A Model-based Design Framework for Application-specific Heterogeneous Systems

    The increasing heterogeneity of computing systems enables higher performance and power efficiency. However, these improvements come at the cost of greater overall design complexity, including constructing implementations for various types of processors, setting up and configuring communication protocols, and efficiently scheduling the computational work. The process of developing such systems is iterative and time-consuming, with no well-defined performance goal, and current performance estimation approaches rely on source code implementations that require experienced developers and time to produce. We present a framework to aid in the design of heterogeneous systems and the performance tuning of applications. Our framework supports system construction: integrating custom hardware accelerators with existing cores into processors, integrating processors into cohesive systems, and mapping computations to processors to achieve overall application performance and efficient hardware usage. It also facilitates effective design space exploration using processor models (for both existing and future processors) that do not require source code implementations to estimate performance. We evaluate our framework on a variety of applications, implementing them in systems ranging from low-power embedded systems-on-chip (SoCs) to high-performance systems built from commercial off-the-shelf (COTS) components. We show how the design process is improved, reducing the number of design iterations and the amount of unnecessary source code development, ultimately leading to higher-performing, more efficient systems.
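
    The idea of estimating performance from processor models rather than from source code can be illustrated with the simplest analytic model, a roofline-style bound (toy parameters are ours; the framework's actual models are richer):

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound execution time from an analytic processor model:
    the kernel is limited by either compute or memory traffic."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# toy design-space exploration: one kernel estimated on two candidate processors
kernel = dict(flops=2e9, bytes_moved=8e8)
for name, pf, bw in [("embedded SoC", 5e10, 1e10), ("COTS GPU", 1e13, 9e11)]:
    print(name, roofline_time(kernel["flops"], kernel["bytes_moved"], pf, bw))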