
    AirEdge: A Dependency-Aware Multi-Task Orchestration in Federated Aerial Computing

    Emerging edge computing (EC) systems attach portable edge devices to drones to process data close to its sources, achieving high performance, fast response times, and real-time insights. To this end, existing EC research has proposed several multi-drone edge deployments for purposes such as data caching, task offloading, real-time video analytics, and computer vision. However, none of them considers seamlessly integrating the edge resources running across multiple drones into a single pool, which would allow these resources to be managed and controlled holistically and would eliminate vendor lock-in. This paper presents an intelligent resource scheduling solution for a federated aerial EC system, called AirEdge, which jointly considers task dependencies, heterogeneous resource demands, and drones’ flight time. We propose a multi-task execution time estimator and a dispatching policy that selects the closest drone deployment with congruent flight time and resource availability to execute ready tasks at any given time. To utilize the drones’ attached edge resources, we propose a bin-packing optimization variant that gang-schedules multi-dependent tasks, co-locating tasks tightly on nodes to fully utilize available resources. Experiments on real-world data from the Alibaba cluster trace, with information on task dependencies (about 12,207,703 dependencies) and resource demands, show the effectiveness, fast execution, and resource efficiency of our approach.
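The gang-scheduling idea described above can be sketched as a first-fit-decreasing bin-packing heuristic that places each gang of mutually dependent tasks together on a single drone node. This is a minimal sketch, not the paper's exact formulation: the `Node` class, the two resource dimensions (CPU, memory), and the largest-gang-first ordering are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A drone-attached edge node with remaining capacity (illustrative)."""
    cpu: float
    mem: float
    tasks: list = field(default_factory=list)

def gang_schedule(gangs, nodes):
    """Place each gang (a list of (cpu, mem) task demands that must be
    co-located) on a single node, first-fit-decreasing by total CPU demand.
    Returns {gang_id: node_index or None if no node can host the gang}."""
    placements = {}
    # Largest gangs first: classic FFD ordering tends to reduce fragmentation.
    for gid, tasks in sorted(gangs.items(),
                             key=lambda g: -sum(t[0] for t in g[1])):
        need_cpu = sum(cpu for cpu, _ in tasks)
        need_mem = sum(mem for _, mem in tasks)
        for i, node in enumerate(nodes):
            if node.cpu >= need_cpu and node.mem >= need_mem:
                node.cpu -= need_cpu          # reserve capacity on this node
                node.mem -= need_mem
                node.tasks.extend(tasks)
                placements[gid] = i
                break
        else:
            placements[gid] = None            # no single node fits the whole gang
    return placements
```

Co-locating a whole gang on one node is what makes dependent tasks cheap to coordinate; a real scheduler would also weigh flight time and dispatch latency, which this sketch omits.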

    ThermoSim: Deep Learning based Framework for Modeling and Simulation of Thermal-aware Resource Management for Cloud Computing Environments

    Current cloud computing frameworks host millions of physical servers that deliver cloud resources in the form of virtual machines. Cloud Data Center (CDC) infrastructures require significant amounts of energy to deliver large-scale computational services. Moreover, computing nodes generate large volumes of heat, in turn requiring cooling units to dissipate it. Thus, the overall energy consumption of the CDC increases tremendously, for servers as well as for cooling units. However, current workload allocation policies do not take their effect on temperature into account, and simulating the thermal behavior of CDCs is challenging. There is a need for a thermal-aware framework to simulate and model the behavior of nodes and to measure the important performance parameters affected by temperature. In this paper, we propose a lightweight framework, ThermoSim, for modeling and simulation of thermal-aware resource management for cloud computing environments. This work presents a Recurrent Neural Network based deep learning temperature predictor for CDCs, which ThermoSim uses for lightweight resource management in constrained cloud environments. ThermoSim extends the CloudSim toolkit, helping to analyze key performance parameters such as energy consumption, service level agreement violation rate, number of virtual machine migrations, and temperature during the management of cloud resources for workload execution. Further, different energy-aware and thermal-aware resource management techniques are tested using the proposed ThermoSim framework in order to validate it against an existing framework (Thas). The experimental results demonstrate that the proposed framework is capable of modeling and simulating the thermal behavior of a CDC and that ThermoSim outperforms Thas in terms of energy consumption, cost, time, memory usage, and prediction accuracy.
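As an illustration of the kind of recurrent predictor described (the paper's actual RNN architecture, input features, and trained weights are not given here), a minimal Elman-style recurrent step over a window of host metrics can be written in pure Python. The metric names, hidden size, and weights below are hypothetical.

```python
import math

def rnn_step(x, h, Wx, Wh):
    """One Elman-RNN update: h' = tanh(Wx @ x + Wh @ h), in pure Python.
    Wx and Wh are lists of rows; x and h are flat vectors."""
    return [math.tanh(sum(w * xi for w, xi in zip(row_x, x)) +
                      sum(w * hj for w, hj in zip(row_h, h)))
            for row_x, row_h in zip(Wx, Wh)]

def predict_temperature(seq, Wx, Wh, Wo):
    """Run the RNN over a window of per-interval host metrics (assumed
    here to be [cpu_util, fan_speed, inlet_temp]) and map the final
    hidden state to a scalar next-interval temperature estimate."""
    h = [0.0] * len(Wh)                 # initial hidden state
    for x in seq:
        h = rnn_step(x, h, Wx, Wh)      # consume one monitoring interval
    return sum(w * hj for w, hj in zip(Wo, h))   # linear read-out
```

In a trained model the weights would come from fitting on monitored temperature traces; the sketch only shows the forward pass a simulator like ThermoSim would invoke per scheduling decision.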

    Intelligent Load Balancing in Cloud Computer Systems

    Cloud computing is an established technology allowing users to share resources on a scale never before seen in IT history. A cloud system connects multiple individual servers in order to process related tasks in several environments at the same time. Clouds are typically more cost-effective than single computers of comparable computing performance. The sheer physical size of the system itself means that thousands of machines may be involved. The focus of this research was to design a strategy to dynamically allocate tasks without overloading Cloud nodes, so that system stability is maintained at minimum cost. This research has added the following new contributions to the state of knowledge: (i) a novel taxonomy and categorisation of three classes of schedulers, namely OS-level, Cluster and Big Data, which highlight their unique evolution and underline their different objectives; (ii) an abstract model of cloud resource utilisation is specified, including multiple types of resources and consideration of task migration costs; (iii) virtual machine live migration was experimented with in order to create a formula which estimates the network traffic generated by this process; (iv) a high-fidelity Cloud workload simulator, based on month-long workload traces from Google's computing cells, was created; (v) two possible approaches to resource management were proposed and examined in the practical part of the manuscript: the centralised metaheuristic load balancer and the decentralised agent-based system. The project involved extensive experiments run on the University of Westminster HPC cluster, and the promising results are presented together with detailed discussions and a conclusion.
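The thesis derives its migration-traffic formula empirically, and that formula is not reproduced here. As a hedged sketch of the idea, the widely used pre-copy live-migration model estimates total traffic as a geometric series in the ratio of the page-dirtying rate to the transfer bandwidth: each iterative round re-sends the pages dirtied during the previous transfer.

```python
def precopy_migration_traffic(mem, dirty_rate, bandwidth, rounds):
    """Estimate total network traffic of a pre-copy VM live migration.

    mem        -- VM memory size (e.g. in GB)
    dirty_rate -- rate at which pages are dirtied (same units as bandwidth)
    bandwidth  -- migration link bandwidth
    rounds     -- number of iterative pre-copy rounds after the full copy

    Each round re-sends pages dirtied during the previous transfer, so
    traffic is a geometric series with ratio r = dirty_rate / bandwidth.
    """
    r = dirty_rate / bandwidth
    if r >= 1.0:
        raise ValueError("migration never converges: dirtying outpaces bandwidth")
    return mem * (1 - r ** (rounds + 1)) / (1 - r)

# e.g. an 8 GB VM dirtying pages at 25% of link speed, 3 iterative rounds
print(precopy_migration_traffic(8, 0.25, 1.0, 3))  # → 10.625
```

A load balancer can weigh this estimated traffic (plus the downtime of the final stop-and-copy round) against the benefit of moving a task, which is exactly the migration-cost consideration contribution (ii) refers to.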

    Towards Our Common Digital Future. Flagship Report.

    In the report “Towards Our Common Digital Future”, the WBGU makes it clear that sustainability strategies and concepts need to be fundamentally further developed in the age of digitalization. Only if digital change and the Transformation towards Sustainability are synchronized can we succeed in advancing climate and Earth-system protection and in making social progress in human development. Without formative political action, digital change will further accelerate resource and energy consumption, and exacerbate damage to the environment and the climate. It is therefore an urgent political task to create the conditions needed to place digitalization at the service of sustainable development.

    Exploring the omnichannel transformation of material-handling configurations and logistics capabilities in grocery retail

    Grocery retail is going through a rapid shift. Consumers now expect to be able to shop online or in stores, get orders delivered when and where they want, and preferably as quickly as possible. This development is called omnichannel and means grocery retailers must transform their logistics networks to meet consumers’ evolving expectations and demands. The omnichannel transformation includes, for example, setting up new material handling (MH) nodes to pick online orders and investing in new automated systems. While this might sound straightforward, grocery retailers struggle to succeed with the omnichannel transformation, particularly in living up to consumers’ evolving expectations and becoming profitable. To develop theoretical and practical knowledge on this under-researched topic, this dissertation aimed to explore and understand the MH configurations and logistics capabilities needed in the omnichannel transformation of grocery retail and the dynamic capabilities required to manage such a transformation. In responding to this purpose, this dissertation makes several important contributions for researchers and practitioners who aim to understand how grocery retailers manage the omnichannel transformation and what they are doing to reconfigure MH configurations and logistics capabilities. The dissertation is based on the results of five articles from three separate but subsequent studies. The first study, a case study–inspired interview project, applied a contingency approach to explore the configurations of four manual online fulfillment centers (OFCs) in omnichannel grocery retail. The study captured key configurations, main challenges, and influential contextual factors. Study two, a multiple case study, focused on sorting in omnichannels. The study increased knowledge of sorting in omnichannels, and by combining empirical data with transvection theory, it also resulted in an artifact for analyzing and designing omnichannel sorting.
The third and last study was a multiple case study of three grocery retailers and had a two-fold focus. First, this study moved beyond exploring specific aspects of the MH configurations and logistics capabilities in omnichannel grocery retail (OFC configuration and sorting) and focused on how and why grocery retailers manage the transformation by contextualizing dynamic capabilities. Second, study one revealed that investment in automation is one key to being competitive in the omnichannel environment. Study three therefore further explored automated online order picking systems and captured key configuration aspects, main performance objectives, and influential contextual factors. This dissertation contributes to the research by combining the findings from the three studies with literature on omnichannel logistics and MH in grocery retail, warehouse theory, and transvection theory to elaborate knowledge on the what, and with dynamic capabilities to understand the how. Moreover, a contingency approach helped investigate why grocery retailers invest in and reconfigure specific MH configurations and logistics capabilities, as well as why some grocery retailers are more successful than others with the omnichannel transformation. As a result, an elaborate and comprehensive framework arose that explains the what, how, and why of omnichannel grocery retail. The analysis and development of the framework revealed that omnichannel grocery retailers adapt their MH configurations and logistics capabilities to their external context to meet evolving customer expectations and requirements. Hence, the potential configurations and logistics capabilities that grocery retailers develop and invest in are influenced and constrained by the external context. The dynamic capabilities required to manage the omnichannel transformation could be identified by applying dynamic capabilities as a theoretical lens.
The findings revealed that the identified dynamic capabilities enabling the transformation reside to a large extent at the organizational level, both corporate and logistics.

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity.

    Artificial Intelligence (Штучний інтелект)

    Funding: The research, preparation of materials, and preparation of the textbook were carried out under project grant no. PPI/KAT/2019/1/00015/U/00001 "Cognitive technologies – second-cycle studies in English" within the KATAMARAN program of the Polish National Agency for Academic Exchange (NAWA). The program is co-financed by the European Social Fund under the Knowledge Education Development Operational Program, through the non-competition project "Supporting the institutional capacity of Polish universities through the creation and implementation of international study programs", implemented under Measure 3.3, Internationalization of Polish higher education, as specified in project funding application no. POWR.03.03.00-00-PN 16/18. The project was carried out in cooperation with the Silesian University of Technology (project leader, Poland) and the Kiev National University of Construction and Architecture (project partner, Ukraine).

    Computational Methods for Medical and Cyber Security

    Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have been growing exponentially in their development of solutions in various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architecture, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. Therefore, applying these new methods to life-critical missions is crucial, as is measuring these less-traditional algorithms' success when used in these fields.