
    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems of increasing complexity and scale. This paper begins by setting the context for the survey through a historical perspective on OR and AI. An extensive survey of applications of AI techniques to operations management, covering over 1,200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence it may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized by the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed against expected trends, and directions for future work are suggested.

    Advances in Dynamic Virtualized Cloud Management

    Cloud computing continues to gain in popularity, with more and more applications being deployed into public and private clouds. Deploying an application in the cloud allows application owners to provision computing resources on demand, and to scale quickly to meet demand. An Infrastructure as a Service (IaaS) cloud provides low-level resources, in the form of virtual machines (VMs), to clients on a pay-per-use basis. The cloud provider (owner) can reduce costs by lowering power consumption. As a typical server can consume 50% or more of its peak power when idle, this can be accomplished by consolidating client VMs onto as few hosts (servers) as possible. This, however, can lead to resource contention and degraded VM performance. As such, VM placements must be dynamically adapted to meet changing workload demands. We refer to this process as dynamic management. Clients should also take advantage of the cloud environment by scaling their applications up and down (adding and removing VMs) to match current workload demands. This thesis proposes a number of contributions to the field of dynamic cloud management. First, we propose a method of dynamically switching between management strategies at run time in order to achieve more than one management goal. To increase the scalability of dynamic management algorithms, we introduce a distributed version of our management algorithm. We then consider deploying applications which consist of multiple VMs, and automatically scale their deployment to match their workload. We present an integrated management algorithm which handles both dynamic management and application scaling. When dealing with multi-VM applications, the placement of communicating VMs within the data centre topology should be taken into account. To address this consideration, we propose a topology-aware version of our dynamic management algorithm. Finally, we describe a simulation tool, DCSim, which we have developed to help evaluate dynamic management algorithms and techniques.
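
    The consolidation idea can be made concrete with a small sketch. The following Python snippet is a toy illustration, not the thesis's management algorithm: it packs VM CPU demands onto as few hosts as possible with first-fit decreasing, so that unused hosts can be suspended. All demands and the host capacity are hypothetical numbers.

```python
# Toy power-aware consolidation sketch (not the thesis's dynamic management
# algorithm): pack VM CPU demands onto as few hosts as possible with
# first-fit decreasing so the remaining hosts can be powered down.

def consolidate(vm_demands, host_capacity):
    """Return a list of [remaining_capacity, [vm_demands]] per active host."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:          # first host with enough room
                host[0] -= demand
                host[1].append(demand)
                break
        else:                              # no host fits: power on a new one
            hosts.append([host_capacity - demand, [demand]])
    return hosts

for i, (free, vms) in enumerate(consolidate([30, 55, 20, 45, 10, 60], 100)):
    print(f"host {i}: VMs {vms}, {free}% CPU left")
```

    In the thesis, such placements must additionally be adapted at run time as demands change, which is what the dynamic management and topology-aware variants address.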

    A Case Study of Edge Computing Implementations: Multi-access Edge Computing, Fog Computing and Cloudlet

    With the explosive growth of intelligent and mobile devices, the current centralized cloud computing paradigm is encountering difficult challenges. Since the primary requirements have shifted towards real-time response, context awareness, and mobility support, there is an urgent need to bring the resources and functions of centralized clouds to the edge of networks, which has led to the emergence of the edge computing paradigm. Edge computing increases the responsibilities of network edges by hosting computation and services, thereby enhancing performance and improving quality of experience (QoE). Fog computing, multi-access edge computing (MEC), and cloudlets are three typical and promising implementations of edge computing. Fog computing aims to build a system that enables cloud-to-thing service connectivity and works in concert with clouds; MEC is seen as a key technology of the fifth-generation (5G) system; and a cloudlet is a micro data center deployed in close proximity to mobile users. In terms of deployment scenarios, fog computing focuses on the Internet of Things (IoT), MEC mainly provides mobile RAN application solutions for 5G systems, and cloudlets offload computing power at the network edge. In this paper, we present a comprehensive case study of these three edge computing implementations, including their architectures, their differences, and their respective application scenarios in IoT, 5G wireless systems, and the smart edge. We discuss the requirements, benefits, and mechanisms of typical co-deployment cases for each paradigm and identify challenges and future directions in edge computing.
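
    The latency motivation behind all three implementations can be illustrated with a minimal sketch. The CPU speeds, link rates, and round-trip times below are assumptions for illustration only, not figures from the paper.

```python
# Illustrative response-time comparison for the offloading choice:
# run a task locally, on a nearby edge node, or in a remote cloud.
# All numbers are hypothetical assumptions.

def response_time(task_cycles, data_bits, cpu_hz, link_bps, rtt_s):
    """Upload time + network round trip + computation time, in seconds."""
    return data_bits / link_bps + rtt_s + task_cycles / cpu_hz

task = dict(task_cycles=2e9, data_bits=8e6)   # a 2-Gcycle job with 1 MB of input

options = {
    "local device": response_time(**task, cpu_hz=1e9,  link_bps=float("inf"), rtt_s=0.0),
    "edge node":    response_time(**task, cpu_hz=8e9,  link_bps=100e6, rtt_s=0.005),
    "remote cloud": response_time(**task, cpu_hz=32e9, link_bps=20e6,  rtt_s=0.080),
}
for place, t in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{place}: {t * 1e3:.0f} ms")
```

    Under these assumed numbers the nearby edge node wins: the remote cloud has far more compute, but pays for the constrained WAN link and the longer round trip, which is precisely the gap that fog, MEC, and cloudlets target.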

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore's law and Dennard scaling were coupled for five decades to deliver commensurate exponential performance, first via single-core and later via multi-core designs. However, recalculating Dennard scaling for recent technology nodes shows that continuing multi-core growth would demand exponentially increasing thermal design power to achieve a linear performance increase. This process hits a power wall that raises the amount of dark or dim silicon on future multi/many-core chips more and more. Furthermore, as the number of transistors on a single chip grows, susceptibility to internal defects and to aging phenomena, both exacerbated by high on-chip thermal density, makes monitoring and managing chip reliability before and after its activation a necessity. The approaches and experimental investigations proposed in this thesis follow two main tracks, 1) power awareness and 2) reliability awareness in the dark silicon era, which are then combined. In the first track, the main goal is to maximize returns in terms of key chip-design metrics, such as performance and throughput, while honoring a maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of proceeding along Moore's law can still be achieved in the dark silicon era, albeit to a lower degree. In the reliability-awareness track, we show that dark silicon can be treated as an opportunity to be exploited for several kinds of benefit, namely lifetime extension and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a given target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness under dark silicon, two approaches are discussed as case studies in which the two are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management. The second approach provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and a target reliability.
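
    The power-wall reasoning can be summarized with the standard back-of-the-envelope Dennard-scaling calculation; this is the textbook form of the argument, not the thesis's own derivation.

```latex
% Standard back-of-the-envelope Dennard-scaling argument (textbook form,
% not the thesis's own derivation). Dynamic CMOS power:
\[
  P = \alpha\, C\, V^{2} f
\]
% Classic Dennard scaling: shrink features by 1/S, so C and V scale by 1/S
% and f scales by S; power per transistor falls by 1/S^2 while transistor
% density rises by S^2, keeping power density constant:
\[
  \frac{P'}{A'} = \frac{\alpha\,(C/S)\,(V/S)^{2}\,(S f)}{A/S^{2}} = \frac{P}{A}
\]
% Once V can no longer scale (threshold-voltage and leakage limits),
% power density instead grows with the scaling factor,
\[
  \frac{P'}{A'} \approx S^{2}\, \frac{P}{A},
\]
% which is the power wall that forces part of the chip to stay dark or dim.
```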

    Performance Comparison of Dual Connectivity and Hard Handover for LTE-5G Tight Integration in mmWave Cellular Networks

    MmWave communications are expected to play a major role in the fifth generation of mobile networks. They offer potential multi-gigabit throughput and ultra-low radio latency, but at the same time suffer from high isotropic pathloss and a coverage area much smaller than that of LTE macrocells. To address these issues, highly directional beamforming and very dense deployments of mmWave base stations have been proposed. This thesis aims to improve the reliability and performance of the 5G network by studying its tight and seamless integration with the current LTE cellular network. In particular, the LTE base stations can provide a coverage layer for 5G mobile terminals, because they operate on microwave frequencies, which are less sensitive to blockage and have a lower pathloss. This document is a copy of the Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr. Marco Mezzavilla and Prof. Michele Zorzi. It proposes an LTE-5G tight integration architecture, based on mobile terminals' dual connectivity to the LTE and 5G radio access networks, and identifies the new network procedures needed to support it. Moreover, this architecture is implemented in the ns-3 simulator, and a thorough simulation campaign is conducted to evaluate its performance against the baseline of hard handover between LTE and 5G.
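
    The benefit of dual connectivity over hard handover under mmWave blockage can be sketched with a toy model. This is not the thesis's ns-3 implementation; the mmWave and LTE rates, the blockage window, and the 50 ms handover interruption below are all assumptions.

```python
# Toy model (not the thesis's ns-3 implementation): with dual connectivity
# the UE keeps both an LTE and a 5G leg attached and falls back instantly
# when the mmWave link is blocked, while a hard handover pays an
# interruption delay in each direction. All timings/rates are assumptions.

HANDOVER_DELAY_MS = 50   # assumed hard-handover interruption
LTE_RATE = 20            # assumed LTE fallback rate, Mbps

def delivered_mbit(mmwave_trace, dual_connectivity):
    """mmwave_trace: per-millisecond mmWave rate in Mbps (0 = blocked)."""
    total, silent, rat = 0.0, 0, "mmwave"
    for rate in mmwave_trace:
        if silent > 0:                       # radio silent while handover completes
            silent -= 1
            continue
        want = "mmwave" if rate > 0 else "lte"
        if want != rat:
            rat = want
            if not dual_connectivity:
                silent = HANDOVER_DELAY_MS   # hard handover interruption
                continue
            # with dual connectivity, switching legs is (nearly) instantaneous
        total += (rate if rat == "mmwave" else LTE_RATE) / 1000.0  # Mbit per 1 ms slot
    return total

# 1-second trace: mmWave link at 500 Mbps, blocked from 300 ms to 500 ms.
trace = [500] * 300 + [0] * 200 + [500] * 500
for dc in (True, False):
    print(f"dual connectivity={dc}: {delivered_mbit(trace, dc):.1f} Mbit delivered")
```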

    Globally Optimal Energy-Efficient Power Control and Receiver Design in Wireless Networks

    The characterization of the global maximum of energy-efficiency (EE) problems in wireless networks is challenging owing to the non-convex nature of the investigated problems in interference channels. The aim of this work is to develop a new and general framework for achieving globally optimal solutions. First, the hidden monotonic structure of the most common EE maximization problems is exploited, jointly with fractional programming theory, to obtain globally optimal solutions, albeit with complexity that is exponential in the number of network links. To overcome this issue, we also propose a framework for computing suboptimal power control strategies with affordable complexity, obtained by merging fractional programming and sequential optimization. The proposed monotonic framework is used to shed light on the ultimate EE performance of wireless networks and to benchmark the lower-complexity framework based on sequential programming. Numerical evidence shows that the sequential fractional programming framework attains global optimality in several practical communication scenarios. (Accepted for publication in the IEEE Transactions on Signal Processing.)
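
    The fractional-programming ingredient can be illustrated with Dinkelbach's algorithm on a single-link EE problem. This is a toy sketch under assumed parameter values; the paper's framework additionally handles interference channels via monotonic and sequential optimization, which a single link does not capture.

```python
# Dinkelbach's algorithm (the fractional-programming building block) on a
# single-link energy-efficiency problem. All parameter values are assumed.

import math

B, g, N0 = 1e6, 1e-7, 1e-13    # bandwidth (Hz), channel gain, noise power (W)
mu, Pc, Pmax = 4.0, 1.0, 10.0  # amplifier inefficiency, static power, power cap (W)

def rate(p):                   # achievable rate, bit/s
    return B * math.log2(1.0 + g * p / N0)

def consumed(p):               # total consumed power, W
    return mu * p + Pc

def dinkelbach(tol=1e-10, max_iter=50):
    """Maximize EE(p) = rate(p) / consumed(p) over 0 <= p <= Pmax."""
    lam = 0.0
    for _ in range(max_iter):
        # Inner problem: maximize rate(p) - lam * consumed(p); this concave
        # objective has a closed-form stationary point, clipped to [0, Pmax].
        p = Pmax if lam == 0 else min(max(B / (lam * mu * math.log(2)) - N0 / g, 0.0), Pmax)
        lam_new = rate(p) / consumed(p)
        if abs(lam_new - lam) <= tol * lam_new:
            return p, lam_new
        lam = lam_new
    return p, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal transmit power {p_opt:.3f} W, energy efficiency {ee_opt:.3e} bit/J")
```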