38 research outputs found

    A Multiobjective Computation Offloading Algorithm for Mobile Edge Computing

    In mobile edge computing (MEC), smart mobile devices (SMDs) with limited computation resources and battery lifetime can offload their computing-intensive tasks to MEC servers, thereby enhancing their computing capability and reducing their energy consumption. Nevertheless, offloading tasks to the edge incurs additional transmission time and thus higher execution delay. This paper studies the trade-off between the completion time of applications and the energy consumption of SMDs in MEC networks. The problem is formulated as a multiobjective computation offloading problem (MCOP), where task precedence, i.e., the ordering of tasks in SMD applications, is introduced as a new constraint. An improved multiobjective evolutionary algorithm based on decomposition (MOEA/D) with two performance-enhancing schemes is proposed. 1) The problem-specific population initialization scheme uses a latency-based execution location initialization method to initialize the execution location (i.e., either the local SMD or an MEC server) for each task. 2) The dynamic voltage and frequency scaling based energy conservation scheme decreases energy consumption without increasing the completion time of applications. The simulation results clearly demonstrate that the proposed algorithm outperforms a number of state-of-the-art heuristics and meta-heuristics in terms of the convergence and diversity of the obtained nondominated solutions.
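    To make the latency-based execution location initialization concrete, here is a minimal Python sketch that assigns each task to the SMD or the MEC server according to a simple estimated-latency comparison. The task model, parameter names, and the greedy probability are illustrative assumptions, not the paper's exact formulation.

```python
import random

def init_execution_locations(tasks, f_local, f_mec, bandwidth, p_greedy=0.8):
    """Latency-based initialization sketch: place each task where its
    estimated finish time is lower, with probability p_greedy; otherwise
    choose randomly to preserve population diversity."""
    locations = []
    for cycles, bits in tasks:                      # task = (cpu_cycles, input_bits)
        t_local = cycles / f_local                  # run on the SMD
        t_mec = bits / bandwidth + cycles / f_mec   # upload, then run on the MEC server
        greedy = 0 if t_local <= t_mec else 1       # 0 = local, 1 = MEC
        if random.random() < p_greedy:
            locations.append(greedy)
        else:
            locations.append(random.randint(0, 1))
    return locations

# Hypothetical numbers: 5 tasks, SMD at 1 GHz, server at 10 GHz, 10 Mbit/s uplink
tasks = [(2e9, 4e6), (5e8, 1e6), (3e9, 8e6), (1e9, 2e6), (4e9, 6e6)]
print(init_execution_locations(tasks, f_local=1e9, f_mec=1e10, bandwidth=1e7))
```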

    The 6th Conference of PhD Students in Computer Science


    Development of a Multi-Objective Scheduling System for Complex Job Shops in a Manufacturing Environment

    In many sectors of commercial operation, the scheduling of workflows and the allocation of resources at the optimum time is critical for effective and efficient operation. The high degree of complexity of a "Job Shop" manufacturing environment, with the sequencing of many parallel orders and the allocation of resources under multi-objective operational criteria, has been the subject of several research studies. In this thesis, a scheduling system for optimizing multi-objective job shop scheduling problems was developed in order to satisfy different production system requirements. The developed system incorporated three factors into one model: setup times, alternative machines, and release dates. These three factors were chosen after a survey of multi-objective job shop scheduling problems. To solve the multi-objective job shop scheduling problems, a combination of a genetic algorithm and a modified version of a recent, computationally efficient approach to non-dominated sorting, called "efficient non-dominated sort using the backward pass sequential strategy", was applied. In the proposed genetic algorithm, an operation-based representation was designed in matrix form, which preserves features of the parent through the crossover operator without requiring solution repair. The proposed efficient non-dominated sort using the backward pass sequential strategy was employed to determine the front to which each solution belongs. The proposed system was tested and validated on 20 modified benchmark problems. The experimental results show that the proposed system is effective and efficient in solving multi-objective job shop scheduling problems in terms of solution quality.
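    The backward pass sequential strategy is a speed-up of non-dominated sorting; the sketch below shows the standard O(MN^2) fast non-dominated sort that such methods improve upon, to make the role of front assignment concrete. The objective vectors and the minimization convention are illustrative assumptions.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Plain O(M N^2) non-dominated sort for minimization objectives;
    returns a list of fronts, each a list of indices into points."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Two objectives to minimize, e.g. (makespan, total tardiness)
print(non_dominated_sort([(3, 5), (4, 4), (5, 3), (6, 6), (3, 6)]))
```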

    Software Defined Resource Allocation for Attaining QoS and QoE Guarantees at the Wireless Edge

    Wireless Internet access has brought legions of heterogeneous applications all sharing the same resources. However, current wireless edge networks, whose Quality of Service (QoS) guarantees cater only to worst- or average-case performance, lack the agility to best serve these diverse sessions. Simultaneously, software-reconfigurable infrastructure has become increasingly mainstream, to the point that dynamic per-packet and per-flow decisions are possible at multiple layers of the communications stack. In this dissertation, we explore several problems in the space of cross-layer optimization of reconfigurable infrastructure with the objective of maximizing user-perceived Quality of Experience (QoE) under the resource constraints of the wireless edge. We first model the adaptive reconfiguration of system infrastructure as a Markov Decision Process with a goal of satisfying application requirements, and whose transition kernel is discovered using a reinforcement learning approach. Our context is that of reconfigurable (priority) queueing, and we use the popular application of video streaming as our use case. Self-declaration of states by all participating applications is necessary for the success of this approach. This need motivates us to design an open market-based system which promotes the truthful declaration of value (state). We show in an experimental setup that the benefits of such an approach are similar to those of the learning approach. Implementations of these techniques are conducted on off-the-shelf hardware, which has inherent restrictions on reconfigurability across different layers of the network stack. Consequently, we exploit a custom hardware platform to achieve finer-grained reconfiguration capabilities, such as per-packet scheduling, and develop a platform for the implementation and testing of scheduling protocols with ultra-low latency requirements. Finally, we study a distributed approach for satisfying strict application requirements by leveraging end-user devices interested in a shared objective. Such a system enables us to attain the necessary performance goals with minimal use of centralized infrastructure.
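    As a toy illustration of modeling infrastructure reconfiguration as an MDP solved by reinforcement learning, the sketch below runs tabular Q-learning over a two-state, two-action priority-queue example. The states, actions, rewards, and transition model are all hypothetical stand-ins for the dissertation's richer formulation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for reconfigurable priority queueing.
# State: declared condition of the video flow (buffer low vs. ok).
# Action: which of two flows gets the high-priority queue.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = ["video_first", "data_first"]
Q = defaultdict(float)

def reward(state, action):
    # Toy reward: prioritizing the video flow helps only when its buffer is low.
    if state == "buffer_low":
        return 1.0 if action == "video_first" else -1.0
    return 0.5 if action == "data_first" else 0.0

def step(state):
    if random.random() < EPS:                      # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    r = reward(state, a)
    next_state = random.choice(["buffer_low", "buffer_ok"])  # stand-in transition kernel
    best_next = max(Q[(next_state, x)] for x in ACTIONS)
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    return next_state

s = "buffer_ok"
for _ in range(5000):
    s = step(s)
print({k: round(v, 2) for k, v in Q.items()})
```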

    Intelligent cooperative caching at mobile edge based on offline deep reinforcement learning

    Cooperative edge caching enables edge servers to jointly utilize their cache to store popular contents, thus drastically reducing the latency of content acquisition. One fundamental problem of cooperative caching is how to coordinate the cache replacement decisions at edge servers to meet users' dynamic requirements and avoid caching redundant contents. Online deep reinforcement learning (DRL) is a promising way to solve this problem by learning a cooperative cache replacement policy through continuous interactions (trial and error) with the environment. However, the sampling process of the interactions is usually expensive and time-consuming, thus hindering the practical deployment of online DRL-based methods. To bridge this gap, we propose a novel Delay-awarE Cooperative cache replacement method based on Offline deep Reinforcement learning (DECOR), which can exploit the existing data at the mobile edge to train an effective policy while avoiding expensive data sampling in the environment. A specific convolutional neural network is also developed to improve the training efficiency and cache performance. Experimental results show that DECOR can learn a superior offline policy from a static dataset compared to an advanced online DRL-based method. Moreover, the learned offline policy outperforms the behavior policy used to collect the dataset by up to 35.9%.
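    The core idea of offline DRL, learning a policy purely from logged data without further environment interaction, can be sketched with a tabular batch Q iteration. The states, actions, and rewards below are hypothetical cache-slot abstractions; DECOR itself uses a convolutional neural network and a delay-aware formulation.

```python
from collections import defaultdict

# Simplified offline (batch) Q iteration on a logged dataset, standing in for
# DECOR's offline DRL: no environment interaction, only the static dataset.
# Each record: (state, action, reward, next_state); states/actions are
# hypothetical cache-slot abstractions, not the paper's real feature maps.
dataset = [
    ("slot_cold", "evict", 1.0, "slot_hot"),
    ("slot_hot", "keep", 1.0, "slot_hot"),
    ("slot_hot", "evict", -1.0, "slot_cold"),
    ("slot_cold", "keep", -0.5, "slot_cold"),
] * 50

GAMMA = 0.9
ACTIONS = ["keep", "evict"]
Q = defaultdict(float)

for _ in range(100):                       # sweeps over the fixed batch
    updates = {}
    for s, a, r, s2 in dataset:
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        updates.setdefault((s, a), []).append(target)
    for k, targets in updates.items():     # synchronous fitted update
        Q[k] = sum(targets) / len(targets)

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ("slot_hot", "slot_cold")}
print(policy)  # greedy policy learned purely from logged data
```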

    Edge-Facilitated Mobile Computing and Communication

    The proliferation of IoT devices and rapidly developing wireless techniques boost the data volume and service demand at the edge of the Internet. Meanwhile, low-latency feedback has become a must for the most popular mobile applications, e.g., Augmented Reality (AR), Virtual Reality (VR), and Connected Vehicles. To address these challenges, edge computing has emerged as an extension of cloud computing. This thesis studies edge computing-facilitated mobile computing and communication systems. We first propose solutions to improve edge resource utilization in general edge systems. We present a mechanism that clusters user requests by similarity for better Content Delivery Network (CDN) performance. This mechanism works directly on the current CDN architecture and can be deployed incrementally. We then extend the mechanism with a cache resource grouping algorithm, so that the system directs similar requests to the same servers and groups the servers that receive similar requests. This iterative mechanism optimizes edge utilization by concentrating resources on similar requests to achieve a higher cache hit ratio and computation efficiency. Thereafter, we present solutions for mobile edge systems, specifically for three of the most promising use cases: Connected Vehicles, Mobile AR (MAR), and smart-city traffic control. We explore the potential of edge computing in connected vehicular AR applications with real data sets. We design a lightweight edge system and data flow suited to general connected vehicular AR applications and implement a prototype. Through an indoor test and real data set analysis, we find that our system can improve the performance of vehicular AR applications at reasonable cost. To optimize the system, we formulate the problem of edge server allocation and task scheduling as a variant of the multiprocessor scheduling problem and develop a two-stage edge-cloud decentralized algorithm, as well as a centralized algorithm, to schedule the offloading tasks on the fly. We conduct a real-world road test and an extensive evaluation based on the road test results and large real-world data sets. The results show that our system improves application performance by at least a factor of two compared with cloud solutions. For MAR, we consider offloading tasks to multiple edge servers via multiple paths simultaneously to further improve MAR performance. We develop a fast scheduling algorithm to split the workloads among the available edge servers and show promising results with real implementations. Finally, we explore the potential of combining edge computing and machine learning techniques to realize intelligent traffic control by letting edge servers co-located with traffic lights learn the waiting traffic and adapt the light periods with reinforcement learning.
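    As a rough illustration of splitting MAR workloads among available edge servers, the sketch below uses a longest-processing-time-first greedy heuristic that always assigns the next-largest task to the earliest-available server. This heuristic and its inputs are assumptions for illustration, not the thesis's actual scheduling algorithm.

```python
import heapq

def split_workload(task_times, server_speeds):
    """Greedy LPT-style sketch: sort tasks by decreasing size and assign each
    to the server that currently frees up earliest; returns the task-to-server
    assignment and the resulting makespan."""
    # heap of (current finish time, server index)
    heap = [(0.0, i) for i in range(len(server_speeds))]
    heapq.heapify(heap)
    assignment = {}
    for t_idx, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        finish, s = heapq.heappop(heap)
        finish += t / server_speeds[s]       # execution time scales with server speed
        assignment[t_idx] = s
        heapq.heappush(heap, (finish, s))
    makespan = max(f for f, _ in heap)
    return assignment, makespan

# Hypothetical: 6 recognition tasks (work units) over 3 servers of varying speed
print(split_workload([8, 5, 4, 7, 3, 6], [2.0, 1.5, 1.0]))
```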

    Open Platforms for Connected Vehicles

    The abstract is provided in the attachment.

    Optimal Experimental Design Applied to Models of Microbial Gene Regulation

    Microbial gene expression is a comparatively well understood process, but regulatory interactions between genes can give rise to complicated behaviours. Regulatory networks can exhibit strong context dependence, time-varying interactions, and multiple equilibria. The qualitative diagrammatic models often used in biology are not well suited to reasoning about such intricate dynamics. Fortunately, mathematics offers a natural language to model gene regulation because it can quantify the various system inter-dependencies with much greater clarity and precision. This added clarity makes models of microbial gene regulation a valuable tool for studying both natural and synthetic gene regulatory systems. However, models are only as good as the knowledge and assumptions they are built on. Specifically, all models depend on unknown parameters -- constants that quantify specific rates and interaction strengths within the regulatory system. In systems biology, parameters are generally fit, rather than measured directly, because their values depend on the context and state of the microbial host. This fitting requires collecting observations of the modeled system. Exactly what is measured, how many times, and under what experimental conditions defines an experimental design. The experimental design is intimately linked to the accuracy of any resulting parameter estimates for a model, but determining which experimental design will be useful for fitting can be difficult. Optimal experimental design (OED) provides a set of statistical techniques that can be used to make design choices that improve parameter estimation accuracy. In this thesis I examine the use of OED methods applied to models of microbial gene regulation. I have specifically focused on optimal design methods that combine asymptotic parametric accuracy objectives, based on the Fisher information matrix, with relaxed formulations of the design optimization problem. I have applied these OED methods to three biological case studies. (1) I have used these methods to implement a multiple-shooting optimal control algorithm for the optimal design of dynamic experiments. This algorithm was applied to a novel model of transcriptional regulation that accounts for the microbial host's physiological context. Optimal experiments were derived for estimating sequence-specific regulatory parameters and host-specific physiological parameters. (2) I have used OED methods to formulate an optimal sample scheduling algorithm for dynamic induction experiments. This algorithm was applied to a model of an optogenetic induction system -- an important tool for dynamic gene expression studies. The value of sampling schedules within dynamic experiments was examined by comparing optimal and naive schedules. (3) I derived an optimal experimental procedure for fitting a steady-state model of single-cell observations from a bistable regulatory motif. This system included a stochastic model of gene expression, and the OED methods made use of the linear noise approximation to derive a tractable design algorithm. In addition to these case studies, I also introduce the NLOED software package. The package can perform optimal design and a number of other fitting and diagnostic procedures on both static and dynamic multi-input multi-output models. The package makes use of automatic differentiation for efficient computation, offers a flexible modeling interface, and will make OED more accessible to the wider biological community. 
    Overall, the main contributions of this thesis include: developing novel OED methods for a variety of gene regulatory scenarios, studying optimal experimental design properties for these scenarios, and implementing open-source numerical software for a variety of OED problems in systems biology.
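    The Fisher-information-based design criteria at the heart of this work can be illustrated with a small, self-contained example: computing the Fisher information matrix (FIM) for a two-parameter exponential decay model and scoring two candidate sampling schedules by the D-optimality criterion (log-determinant of the FIM). The model, noise level, and schedules are hypothetical, and the code deliberately does not use the NLOED API.

```python
import numpy as np

def fisher_information(times, a, b, sigma=0.1):
    """FIM for y(t) = a * exp(-b t) with i.i.d. Gaussian noise: J^T J / sigma^2,
    where J is the sensitivity (Jacobian) of the model output w.r.t. (a, b)."""
    t = np.asarray(times, dtype=float)
    dy_da = np.exp(-b * t)
    dy_db = -a * t * np.exp(-b * t)
    J = np.stack([dy_da, dy_db], axis=1)
    return J.T @ J / sigma**2

def d_criterion(times, a=1.0, b=0.5):
    # D-optimality: maximize log det of the FIM (shrinks the joint confidence region)
    sign, logdet = np.linalg.slogdet(fisher_information(times, a, b))
    return logdet

naive = [0.0, 0.5, 1.0, 1.5, 2.0]   # evenly spaced schedule
tuned = [0.0, 0.0, 2.0, 2.0, 4.0]   # replicates concentrated at informative times
print(d_criterion(naive), d_criterion(tuned))
```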

    Control and Analysis for Sequential Information based on Machine Learning

    Sequential information is crucial for real-world applications that unfold over time, much as time series are described by sequences of data points in temporal order at regular intervals. In this thesis, we consider four major tasks involving sequential information: sequential trend prediction, control strategy optimisation, visual-temporal interpolation, and visual-semantic sequential alignment. We develop machine learning theories and provide state-of-the-art models for various real-world applications that involve sequential processes, including industrial batch processes, sequential video inpainting, and sequential visual-semantic image captioning. The ultimate goal is to design a hybrid framework that can unify diverse sequential information analysis and control systems. For industrial processes, control algorithms rely on simulations to find the optimal control strategy. However, few machine learning techniques can control the process from raw data, although some works use ML to predict trends. Most control methods rely on large amounts of previous experience and cannot exploit future information to optimise the control strategy. To improve the effectiveness of the industrial process, we propose improved reinforcement learning approaches that can modify the control strategy. We also propose a hybrid reinforcement virtual learning approach to optimise the long-term control strategy. This approach creates a virtual space that interacts with reinforcement learning to predict a virtual strategy without conducting any real experiments, thereby improving and optimising control efficiency. For sequential visual information analysis, we propose a dual-fusion transformer model to tackle sequential visual-temporal encoding in video inpainting tasks. Our framework includes a flow-guided transformer with dual attention fusion, and we observe that the sequential information is effectively processed, resulting in promising inpainted videos. Finally, we propose a cycle-based captioning model for the analysis of sequential visual-semantic information. This model augments data from two views to optimise caption generation from an image, addressing few-shot and zero-shot settings. The proposed model generates more accurate and informative captions by leveraging sequential visual-semantic information. Overall, the thesis contributes to analysing and manipulating sequential information in multi-modal real-world applications. Our flexible framework design provides a unified theoretical foundation for deploying sequential information systems in distinct application domains. Given the diversity of challenges addressed in this thesis, we believe our techniques pave the way towards versatile AI in the new era.
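    The "virtual space" idea, evaluating control strategies in a learned model instead of the real plant, can be sketched in a few lines: fit a simple dynamics model from logged process data, then roll candidate policies out in that model. The linear dynamics, reward, and policies below are hypothetical illustrations, not the thesis's hybrid reinforcement virtual learning method.

```python
import numpy as np

# Fit a simple linear dynamics model from logged (state, action) data, then
# evaluate control strategies by rolling out in the learned model rather than
# running real experiments. Everything here is a hypothetical illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                # logged (state, action) pairs
next_state = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(200)
w, *_ = np.linalg.lstsq(X, next_state, rcond=None)   # learned virtual dynamics

def virtual_rollout(policy, s0=1.0, horizon=20):
    s, ret = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        s = w[0] * s + w[1] * a                      # step in the virtual model
        ret += -abs(s)                               # reward: keep state near setpoint 0
    return ret

print(virtual_rollout(lambda s: -0.5 * s))           # proportional controller
print(virtual_rollout(lambda s: 0.0))                # do-nothing baseline
```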

    The 1991 Goddard Conference on Space Applications of Artificial Intelligence

    The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.