
    Virtual Network Embedding Approximations: Leveraging Randomized Rounding

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    The Virtual Network Embedding Problem (VNEP) captures the essence of many resource allocation problems. In the VNEP, customers request resources in the form of virtual networks. An embedding of a virtual network on a shared physical infrastructure is the joint mapping of (virtual) nodes to physical servers together with the mapping of (virtual) edges onto paths in the physical network connecting the respective servers. This work initiates the study of approximation algorithms for the VNEP for general request graphs. Concretely, we study the offline setting with admission control: given multiple requests, the task is to embed the most profitable subset without exceeding resource capacities. Our approximation is based on randomized rounding of Linear Programming (LP) solutions. Interestingly, we uncover that the standard LP formulation for the VNEP exhibits an inherent structural deficit when considering general virtual network topologies: its solutions cannot be decomposed into valid embeddings. In turn, focusing on the class of cactus request graphs, we devise a novel LP formulation whose solutions can be decomposed. Proving performance guarantees of our rounding scheme, we obtain the first approximation algorithm for the VNEP in the resource augmentation model. We propose different types of rounding heuristics and evaluate their performance in an extensive computational study. Our results indicate that good solutions can be achieved even without resource augmentation. Specifically, heuristic rounding achieves 77.2% of the baseline's profit on average while respecting capacities.
    Funding: BMBF, 01IS12056, Software Campus Grant; EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe
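    A minimal sketch of the heuristic randomized-rounding step described above, assuming the LP solution has already been decomposed into candidate embeddings with fractional weights; the function and variable names are illustrative and not the authors' code.

        import random

        def round_requests(requests, capacities):
            """Pick at most one candidate embedding per request by randomized rounding.

            requests: list of (profit, candidates), where candidates is a list of
                      (weight, usage) pairs; the weights are the fractional LP values
                      for the request (summing to at most 1) and usage maps each
                      resource to the capacity the embedding would consume.
            capacities: dict mapping resource -> available capacity.
            """
            remaining = dict(capacities)
            chosen, total_profit = [], 0.0
            for idx, (profit, candidates) in enumerate(requests):
                r, acc = random.random(), 0.0
                for weight, usage in candidates:
                    acc += weight
                    if r < acc:  # candidate selected with probability equal to its LP weight
                        if all(usage.get(res, 0) <= remaining[res] for res in remaining):
                            for res in remaining:  # commit: reserve the consumed capacity
                                remaining[res] -= usage.get(res, 0)
                            chosen.append((idx, usage))
                            total_profit += profit
                        break  # at most one candidate per request; skipped if over capacity
                # if r exceeds the summed weights, the request is rejected entirely
            return chosen, total_profit

    Skipping a selected embedding whenever it would exceed a capacity is what makes this the capacity-respecting heuristic variant; the analyzed algorithm instead bounds resource violations in the resource augmentation model.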

    Optimization of the long-term planning of supply chains with decaying performance

    This master's thesis addresses the optimization of supply and distribution chains considering the effect that equipment aging may have on the performance of the facilities involved in the process. The decaying performance of the facilities is modeled as an exponential equation and can be either physical or economic, thus giving rise to a novel mixed integer non-linear programming (MINLP) formulation. The optimization model has been developed based on a typical chemical supply chain. Thus, the best long-term investment plan has to be determined given production nodes, their production capacity and expected evolution; aggregated consumption nodes (urban or industrial districts) and their lumped demand (and its expected evolution); actual and potential distribution nodes; distances between the nodes of the network; and a time horizon. The model includes the balances in each node, a general decaying performance function, and a cost function, as well as the constraints to be satisfied. Hence, the investment plan (decision variables) consists not only of the start-up and shutdown of alternative distribution facilities, but also of the sizing of the lines carrying the flows. The model has been implemented using the GAMS optimization software. Results for a variety of scenarios are discussed. In addition, different approaches to constructing the starting point for the model have been compared, showing the importance of properly initializing the optimization algorithm. The capabilities of the proposed approach have been tested through its application to two case studies: a natural gas network with physical decaying performance and an electricity distribution network with economic decaying performance. Each case study is solved with a different solution procedure. The results demonstrate that overlooking the effect of equipment aging can lead to solutions that are infeasible (for physical decaying performance) or unrealistic (for economic decaying performance) in practice, and show how the proposed model overcomes such limitations, thus becoming a practical tool to support the decision-making process in the distribution sector.
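    To illustrate the kind of decaying-performance term such a formulation uses, here is a small sketch assuming simple exponential decay of a facility's capacity from its start-up period; the function name, decay form, and numbers are illustrative and not taken from the thesis.

        import math

        def effective_capacity(nominal_capacity, decay_rate, periods_since_startup):
            """Exponentially decayed capacity: C_t = C_0 * exp(-k * t)."""
            return nominal_capacity * math.exp(-decay_rate * periods_since_startup)

        # Example: a line installed with nominal capacity 100 and decay rate 0.05 per period
        print(effective_capacity(100.0, 0.05, 10))  # ~60.65 after 10 periods

    An economic decaying performance would apply the same kind of factor to operating cost or revenue rather than to physical capacity.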

    High-level Counterexamples for Probabilistic Automata

    Providing compact and understandable counterexamples for violated system properties is an essential task in model checking. Existing work on counterexamples for probabilistic systems has so far computed either a large set of system runs or a subset of the system's states, both of which are of limited use in manual debugging. Many probabilistic systems are described in a guarded command language like the one used by the popular model checker PRISM. In this paper we describe how to identify a smallest possible subset of the commands that together make the system erroneous. We additionally show how the selected commands can be further simplified to obtain a well-understandable counterexample.
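    The core idea, identifying a smallest command set whose induced sub-model already violates the property, can be illustrated with a deliberately naive enumeration; the check_violation callback is assumed to invoke a probabilistic model checker on the program restricted to the given commands, and the paper's actual technique is an optimization-based computation rather than this brute-force search.

        from itertools import combinations

        def smallest_critical_subset(commands, check_violation):
            """Return a smallest subset of guarded commands whose restricted
            program already violates the property (None if even the full set
            satisfies it). Exponential in len(commands); illustration only."""
            for k in range(1, len(commands) + 1):
                for subset in combinations(commands, k):
                    if check_violation(set(subset)):
                        return set(subset)
            return None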

    Development of transportation and supply chain problems with the combination of agent-based simulation and network optimization

    Demand drives a wide range of supply chain and logistics location decisions, and agent-based modeling (ABM) introduces innovative solutions to address supply chain and logistics problems. This dissertation focuses on an agent-based and network optimization approach to these problems and features three research projects covering prevalent supply chain management and logistics problems. The first case study evaluates demographic densities in Norway, Finland, and Sweden and shows how distribution center (DC) locations can be established using a trip-distance minimization approach. Furthermore, traveling-time maps are developed for each scenario. In addition, the Nordic area consisting of those three countries is analyzed and five DC location optimization results are presented. The second case study introduces transportation cost modeling for the process of collecting tree logs from several districts and transporting them to the nearest collection point. This research project presents an agent-based model that comprehensively incorporates the key elements of the pick-up and delivery supply chain and designs its components as autonomous agents communicating with each other. The model merges various components such as GIS routing, potential facility locations, random tree-log pickup locations, fleet sizing, trip distance, and truck and train transportation. The entire pick-up and delivery operation is modeled by ABM, and modeling outcomes are reported as time series charts of the number of trucks in use, facility inventories, and travel distance. In addition, various simulation scenarios based on potential facility locations and truck numbers are evaluated, and the optimal facility location and fleet size are identified. In the third case study, an agent-based modeling strategy is used to address the problem of vehicle scheduling and fleet optimization. The solution method is applied to data from a real-world organization, and a set of key performance indicators is created to assess the solution's effectiveness. The ABM method, in contrast to other modeling approaches, is fully customizable and can incorporate a wide variety of processes and elements. By applying the autonomous agent concept, ABM can integrate the various components of a complex supply chain and create a corresponding system for assessing supply chain efficiency.
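    The trip-distance minimization idea from the first case study can be conveyed with a single-facility sketch; this simplified version (hypothetical names, Euclidean distance instead of GIS routing, one site instead of five) is only illustrative.

        def best_dc_location(candidate_sites, demand_points):
            """Pick the candidate DC site minimizing total demand-weighted distance.

            candidate_sites: list of (x, y) coordinates of potential DC sites.
            demand_points: list of (x, y, demand) population or demand points.
            """
            def total_weighted_distance(site):
                sx, sy = site
                return sum(d * ((sx - x) ** 2 + (sy - y) ** 2) ** 0.5
                           for x, y, d in demand_points)
            return min(candidate_sites, key=total_weighted_distance)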

    A Tutorial on Clique Problems in Communications and Signal Processing

    Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, the paper aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on the recent advances of such applications, and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
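    A tiny example of the maximum clique problem on a compatibility graph; this sketch assumes the networkx library and an illustrative toy graph, and uses exhaustive maximal-clique enumeration rather than any of the tutorial's algorithms.

        import networkx as nx

        # Vertices are users; an edge means two users can be served simultaneously
        # (e.g., their transmissions do not collide).
        G = nx.Graph()
        G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])

        # A maximum clique is a largest set of mutually compatible users.
        maximum_clique = max(nx.find_cliques(G), key=len)
        print(maximum_clique)  # e.g. [1, 2, 3] (node order may vary)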

    Stock Management in Hospital Pharmacy using Chance-Constrained Model Predictive Control

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed with a view to its implementation in two Spanish hospitals.
    Funding: Junta de Andalucía P12-TIC-240
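    The essence of a chance constraint on stock is that it can be replaced by a deterministic constraint on a demand quantile. A one-step, single-drug sketch under a Gaussian demand assumption follows; the distribution, names, and numbers are illustrative and not taken from the article.

        from scipy.stats import norm

        def minimum_order(stock, mean_demand, std_demand, epsilon):
            """Smallest order satisfying P(stock + order - demand >= 0) >= 1 - epsilon.

            For Gaussian demand this chance constraint is equivalent to
            stock + order >= F^{-1}(1 - epsilon), the (1 - epsilon) demand quantile.
            """
            quantile = norm.ppf(1 - epsilon, loc=mean_demand, scale=std_demand)
            return max(0.0, quantile - stock)

        # Cover next period's demand with 95% probability from a stock of 120 units
        print(minimum_order(stock=120, mean_demand=100, std_demand=20, epsilon=0.05))  # ~12.9

    A chance-constrained MPC repeats this kind of reasoning over a prediction horizon and re-optimizes at every decision step.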

    Efficient cloud computing system operation strategies

    Cloud computing systems have emerged as a new paradigm of computing by providing on-demand services that utilize large-scale computing resources. Service providers offer Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) to users depending on their demand, and users pay only for the resources they use. The cloud has become a successful business model and is expanding its scope through collaboration with various applications such as big data processing, the Internet of Things (IoT), robotics, and 5G networks. Cloud computing systems are composed of large numbers of computing, network, and storage devices across geographically distributed areas, and multiple tenants employ the cloud simultaneously with heterogeneous resource requirements. Thus, efficient operation of cloud computing systems is extremely difficult for service providers. In order to maximize service providers' profit, cloud systems should be able to serve large numbers of tenants while minimizing the OPerational EXpenditure (OPEX). To serve as many tenants as possible using limited resources, service providers should implement efficient resource allocation for users' requirements. At the same time, cloud infrastructure consumes a significant amount of energy. According to recent disclosures, Google data centers consumed nearly 300 million watts and Facebook's data centers consumed 60 million watts. Traffic demand on data centers will keep increasing with the expansion of mobile and cloud traffic. If service providers do not develop efficient energy management for their infrastructures, running their cloud infrastructures will incur significant power consumption.
    In this thesis, we first consider optimal dataset allocation in distributed cloud computing systems. Our objective is to minimize processing time and cost. Processing time includes virtual machine processing time, communication time, and data transfer time. In distributed cloud systems, communication time and data transfer time are important components of processing time because data centers are geographically distributed; placing data sets far from each other increases the communication and data transfer time. The cost objective includes virtual machine cost, communication cost, and data transfer cost. Cloud service providers charge for virtual machines according to their usage time, while communication and transfer costs are charged based on data transmission speed and data set size. The problem of allocating data sets to VMs in distributed heterogeneous clouds is formulated as a linear programming model with two objectives, cost and processing time. After finding the optimal solution of each objective function, we use a heuristic approach to find the Pareto front of the multi-objective linear programming problem. In the simulation experiment, we consider a heterogeneous cloud infrastructure with resource information from five different types of cloud service providers, and we optimize data set placement while guaranteeing Pareto optimality of the solutions.
    This thesis also proposes an adaptive data center activation model that consolidates adaptive activation of switches and hosts simultaneously, integrated with a statistical request prediction algorithm. The learning algorithm predicts user requests in predetermined intervals by using a cyclic window learning algorithm. The data center then activates an optimal number of switches and hosts in order to minimize power consumption, based on the prediction. We designed the adaptive data center activation model as a cognitive cycle composed of three steps: data collection, prediction, and activation. In the request prediction step, the prediction algorithm forecasts the Poisson distribution parameter lambda in every interval by using Maximum Likelihood Estimation (MLE) and Local Linear Regression (LLR) methods. Adaptive activation of the data center is then implemented with the predicted parameter in every interval. The adaptive activation model is formulated as a Mixed Integer Linear Programming (MILP) model; switches and hosts are modeled as M/M/1 and M/M/c queues. In order to minimize the power consumption of data centers, the model minimizes the number of activated switches, hosts, and memory modules while guaranteeing Quality of Service (QoS). Since the problem is NP-hard, we use the Simulated Annealing algorithm to solve the model. We employ Google cluster trace data to simulate our prediction model; the predicted data is then used to test the adaptive activation model and to observe the energy saving rate in every interval. In the experiment, we observed that the adaptive activation model saves 30 to 50% of energy compared to the fully operational state of the data center at practical utilization rates.
    Network Function Virtualization (NFV) has emerged as a game changer in the network market for efficient operation of network infrastructure. Since NFV transforms dedicated physical devices designed for specific network functions into software-based Virtual Machines (VMs), network operators expect significant reductions in Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). Softwarized VMs can be implemented on commodity servers, so network operators can design flexible and scalable network architectures through efficient VM placement and migration algorithms. In this thesis, we study the joint problem of Virtualized Network Function (VNF) resource allocation and NFV Service Chain (NFV-SC) placement in Software Defined Network (SDN) based hyper-scale distributed cloud computing infrastructure. The objective is to minimize the power consumption of the infrastructure while enforcing the users' Service Level Agreements (SLAs). We employ an M/G/1/K queuing network approximation analysis for the NFV-SC model. The communication time between VNFs is considered in the NFV-SC placement because it influences the performance of the NFV-SC in a highly distributed infrastructure environment. The joint problem is modeled as a Mixed Integer Non-linear Programming (MINP) model. However, the problem is intractable in large infrastructures due to its NP-hardness. We therefore propose a heuristic algorithm that splits the problem into two sub-problems: resource allocation and NFV-SC embedding. In the numerical analysis, we observed that the proposed algorithm outperforms traditional bin packing algorithms in terms of power consumption and SLA assurance.
    In summary, this thesis proposes efficient cloud infrastructure management strategies, from a single data center to hyper-scale distributed cloud computing infrastructure, for profitable cloud system operation. The management schemes are proposed with various objectives such as Quality of Service (QoS), performance, latency, and power consumption. We use mathematical modeling strategies such as Linear Programming (LP), Mixed Integer Linear Programming (MILP), Mixed Integer Non-linear Programming (MINP), convex programming, queuing theory, and probabilistic modeling, and demonstrate the efficiency of the proposed strategies through various simulations.
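    Two of the building blocks mentioned above are simple enough to show directly. The sketch below (hypothetical names, not the thesis code) gives the MLE of the Poisson request rate over a prediction window, which is just the sample mean of the observed counts, and the mean response time of an M/M/1 queue of the kind used to model a switch.

        def predict_lambda(request_counts):
            """MLE of the Poisson arrival rate over a window of per-interval counts."""
            return sum(request_counts) / len(request_counts)

        def mm1_response_time(arrival_rate, service_rate):
            """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda).

            Requires arrival_rate < service_rate for a stable queue."""
            assert arrival_rate < service_rate
            return 1.0 / (service_rate - arrival_rate)

        # Example: 1000 requests/s offered to a switch serving 1500 requests/s
        print(mm1_response_time(1000.0, 1500.0))  # 0.002 s mean response time

    QoS constraints of the kind used in the MILP can then be expressed as upper bounds on such response times for the chosen set of activated switches and hosts.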

    Developing Enhanced Algorithms to Solve Mixed Integer Non-Linear Programming Problems Based on a Feasible Neighborhood Search Strategy

    Engineering optimization problems often involve nonlinear objective functions, which can capture complex relationships and dependencies between variables. This study focuses on a nonlinear mathematical programming problem characterized by a subset of variables that can only take discrete values and are linearly separable from the continuous variables. The combination of integer variables and nonlinearities makes this problem much more complex than traditional nonlinear programming problems with only continuous variables. Furthermore, the presence of integer variables can result in a combinatorial explosion of potential solutions, significantly enlarging the search space and making it challenging to explore effectively. This issue becomes especially acute for larger problems, leading to long computation times or even infeasibility. To address these challenges, we propose a method that employs the "active constraint" approach in conjunction with the release of nonbasic variables from their bounds. This technique compels suitable non-integer basic variables to move to neighboring integer positions. Additionally, we have investigated selection criteria for choosing a nonbasic variable to use in the integerizing technique. Through implementation and testing on various problems, these techniques have proven to be successful.
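    The neighborhood-search flavor of the approach can be conveyed with a much simpler sketch: enumerate the floor/ceiling neighborhood of a relaxed solution and keep the best feasible integer candidate. This is an illustrative simplification (hypothetical callbacks, exhaustive enumeration), not the paper's active-constraint pivoting procedure.

        from itertools import product
        import math

        def neighborhood_round(x_relaxed, integer_indices, is_feasible, objective):
            """Search the floor/ceil neighborhood of a relaxed solution.

            x_relaxed: relaxed (possibly fractional) variable values.
            integer_indices: positions of the integer-constrained variables.
            is_feasible, objective: problem-specific callbacks (assumed given).
            Returns the best feasible rounded candidate, or None."""
            choices = [(math.floor(x_relaxed[i]), math.ceil(x_relaxed[i]))
                       for i in integer_indices]
            best, best_value = None, float("inf")
            for combo in product(*choices):
                candidate = list(x_relaxed)
                for i, value in zip(integer_indices, combo):
                    candidate[i] = value
                if is_feasible(candidate):
                    value = objective(candidate)
                    if value < best_value:
                        best, best_value = candidate, value
            return best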