    Resource management in a containerized cloud : status and challenges

    Cloud computing heavily relies on virtualization, as virtual resources, for example virtual machines, are typically leased to the consumer. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. However, traditional resource management strategies are typically designed for the allocation and migration of virtual machines, which raises the question of how these strategies can be adapted for the management of a containerized cloud. Furthermore, the cloud is no longer limited to centrally hosted data center infrastructure. New deployment models have matured, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing model. Furthermore, we identify several challenges and possible opportunities for future research.
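    Allocation strategies of the kind this survey covers are often formulated as bin packing: container resource requests must be placed onto nodes with finite capacity. As a minimal, hypothetical sketch (not tied to any specific system surveyed here), a first-fit-decreasing heuristic over CPU requests could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: float  # free CPU cores on this node
    containers: list = field(default_factory=list)

def first_fit_decreasing(requests, nodes):
    """Place (container_name, cpu_request) pairs onto nodes,
    largest request first. Returns a container -> node mapping;
    raises if no node has enough free capacity."""
    placement = {}
    for cname, cpu in sorted(requests, key=lambda r: -r[1]):
        for node in nodes:
            if node.cpu_free >= cpu:
                node.cpu_free -= cpu
                node.containers.append(cname)
                placement[cname] = node.name
                break
        else:
            raise RuntimeError(f"no capacity for {cname}")
    return placement
```

    Real schedulers add further dimensions (memory, affinity, migration cost), but the same greedy skeleton underlies many placement heuristics.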

    Open Source Solutions for Building IaaS Clouds

    Cloud computing is not only a pool of resources and services offered through the internet, but also a technology solution that allows optimized resource use, cost minimization, and reduced energy consumption. Enterprises moving towards cloud technologies have to choose between public cloud services, such as Amazon Web Services, Microsoft Cloud, and Google Cloud services, or private self-built clouds. While the former are offered at affordable fees, the latter provide more privacy and control. In this context, many open source software solutions support building private, public, or hybrid clouds, depending on the user's needs and the available capabilities. To choose among the different open source solutions, an analysis is necessary in order to select the most suitable one according to the enterprise's goals and requirements. In this paper, we present an in-depth study and comparison of five open source frameworks that have recently been gaining attention and growing fast: CloudStack, OpenStack, Eucalyptus, OpenNebula and Nimbus. We present their architectures and discuss different properties, features, useful information and our own insights on these frameworks.

    Towards Lightweight Data Integration using Multi-workflow Provenance and Data Observability

    Modern large-scale scientific discovery requires multidisciplinary collaboration across diverse computing facilities, including High Performance Computing (HPC) machines and the Edge-to-Cloud continuum. Integrated data analysis plays a crucial role in scientific discovery, especially in the current AI era, by enabling Responsible AI development, FAIR, Reproducibility, and User Steering. However, the heterogeneous nature of science poses challenges such as dealing with multiple supporting tools, cross-facility environments, and efficient HPC execution. Building on data observability, adapter system design, and provenance, we propose MIDA: an approach for lightweight runtime Multi-workflow Integrated Data Analysis. MIDA defines data observability strategies and adaptability methods for various parallel systems and machine learning tools. With observability, it intercepts the dataflows in the background without requiring instrumentation while integrating domain, provenance, and telemetry data at runtime into a unified database ready for user steering queries. We conduct experiments showing end-to-end multi-workflow analysis integrating data from Dask and MLFlow in a real distributed deep learning use case for materials science that runs on multiple environments with up to 276 GPUs in parallel. We show near-zero overhead running up to 100,000 tasks on 1,680 CPU cores on the Summit supercomputer.
    Comment: 10 pages, 5 figures, 2 listings, 42 references; paper accepted at IEEE eScience'2
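    The core idea of integrating domain, provenance, and telemetry records at runtime into a single queryable database can be illustrated with a small sketch. The table schema and function names below are assumptions for illustration only, not MIDA's actual design:

```python
import json
import sqlite3
import time

def make_store(path=":memory:"):
    """Create a unified event store; one table holds records of all kinds."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events
                  (workflow TEXT, task TEXT, kind TEXT, ts REAL, payload TEXT)""")
    return db

def observe(db, workflow, task, kind, payload):
    """Record one event (kind: 'domain', 'provenance', or 'telemetry')."""
    db.execute("INSERT INTO events VALUES (?,?,?,?,?)",
               (workflow, task, kind, time.time(), json.dumps(payload)))

def steering_query(db, workflow):
    """Return all integrated records for a workflow, for user steering."""
    cur = db.execute("SELECT task, kind, payload FROM events WHERE workflow=?",
                     (workflow,))
    return [(task, kind, json.loads(payload)) for task, kind, payload in cur]
```

    In the paper's setting the events would be intercepted from tools such as Dask and MLFlow in the background rather than emitted by explicit calls; the sketch only shows the unification step.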

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique to digitally and accurately represent actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze the current condition, predict future behaviour, and refine the control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For adoption in human-centric systems, a novel concept, called human digital twin (HDT), has thus been proposed. In particular, HDT allows an in silico representation of the individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolutions. These prompt the expected application of HDT in personalized healthcare (PH), which can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects, and it has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.

    Design considerations for workflow management systems use in production genomics research and the clinic

    The changing landscape of genomics research and clinical practice has created a need for computational pipelines capable of efficiently orchestrating complex analysis stages while handling large volumes of data across heterogeneous computational environments. Workflow Management Systems (WfMSs) are the software components employed to fill this gap. This work provides an approach to, and systematic evaluation of, key features of popular bioinformatics WfMSs in use today: Nextflow, CWL, and WDL and some of their executors, along with Swift/T, a workflow manager commonly used in high-scale physics applications. We employed two use cases: a variant-calling genomic pipeline and a scalability-testing framework, both run locally, on an HPC cluster, and in the cloud. This allowed us to evaluate those four WfMSs in terms of language expressiveness, modularity, scalability, robustness, reproducibility, interoperability, and ease of development, along with adoption and usage in research labs and healthcare settings. This article tries to answer the question: which WfMS should be chosen for a given bioinformatics application, regardless of analysis type? The choice of a given WfMS is a function of both its intrinsic language and engine features. Within bioinformatics, where analysts are a mix of dry and wet lab scientists, the choice is also governed by collaborations and adoption within large consortia, and by the technical support provided by the WfMS team/community. As the community and its needs continue to evolve along with computational infrastructure, WfMSs will also evolve, especially those with permissive licenses that allow commercial use. In much the same way as the dataflow paradigm and containerization are now well understood to be very useful in bioinformatics applications, we will continue to see innovations in tools and utilities for other purposes, like big data technologies, interoperability, and provenance.
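    At their core, all the engines compared here execute a directed acyclic graph of analysis stages in dependency order. A minimal Python sketch of that orchestration (illustrative only, with hypothetical stage names, not any engine's real API):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, deps):
    """tasks: stage name -> callable; deps: stage name -> set of prerequisites.
    Runs each stage after all of its prerequisites; returns execution order."""
    order = []
    for name in TopologicalSorter(deps).static_order():
        tasks[name]()  # a real WfMS would dispatch to a local/HPC/cloud executor
        order.append(name)
    return order

# Hypothetical variant-calling stages: qc -> align -> call
log = []
tasks = {
    "qc":    lambda: log.append("qc"),
    "align": lambda: log.append("align"),
    "call":  lambda: log.append("call"),
}
deps = {"qc": set(), "align": {"qc"}, "call": {"align"}}
```

    Real WfMSs add what this sketch omits: data staging, containerized execution, retries, caching, and provenance, which is precisely where the evaluated systems differ.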

    Research on Instant, Sustainable, and Expandable Edge Computing for Agile Disaster Management

    This dissertation mainly focuses on solving problems in agile disaster management. Assuming the original network infrastructure no longer works because of disaster damage or power outages, I introduce several emerging technologies to build a next-generation disaster response system. The research consists of three parts. The first part addresses emergency networking after a disaster. I design an information-centric fog computing architecture to quickly build a temporary emergency network when the original one cannot be used, and I focus on name-based routing for disaster relief, applying the idea of the six-degrees-of-separation theory. I first put forward a 2-tier information-centric fog network architecture for the post-disaster scenario. I then model the relationships among the information-centric fog nodes based on delivered files and propose a name-based routing strategy to enable fast networking and emergency communication. I compare my strategy with DNRP under the same experimental settings and show that it achieves higher performance. The second part addresses efficiency optimization. I introduce edge caching to prolong the lifetime of the rebuilt network, focusing on improving the energy efficiency of edge caching through in-memory storage and processing. I extend the 2-tier model of the first part with a server tier to form a 3-tier heterogeneous network structure, and propose two edge caching methods using different Time-to-Live (TTL) designs and cache replacement policies. Using total energy consumption and backhaul rate as metrics, I compare the in-memory caching method with a conventional method based on disk storage. The simulation results show that in-memory storage and processing save energy in edge caching and can take over a considerable share of the workload. The third part addresses coverage expansion. I apply UAV technology and real-time image recognition to user search and autonomous navigation, focusing on the design of a navigation strategy based on airborne vision for UAV disaster relief. After surveying related work on UAV flight control in disaster management, I find that, given current UAV manufacturing technology and the actual demands of unmanned search and rescue, a lightweight solution is urgently needed. I therefore design a lightweight navigation strategy based on visual recognition using transfer learning, executable on a UAV's onboard computer. In the simulation, I evaluate my solution using 1/150 miniature models and test the feasibility of the navigation strategy. The results show that my design for visual recognition has the potential for a breakthrough in performance, and that lightweight UAV navigation can realize real-time flight adjustment based on feedback.
    Muroran Institute of Technology, Doctor of Engineering
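    The TTL-plus-replacement caching idea from the second part can be sketched as a small in-memory cache. The capacity, TTL handling, and LRU eviction below are illustrative assumptions, not the dissertation's exact design:

```python
import time
from collections import OrderedDict

class TTLCache:
    """In-memory edge cache sketch: entries expire after `ttl` seconds,
    and when the cache is full the least recently used entry is evicted.
    An explicit `now` parameter makes the behaviour testable."""

    def __init__(self, capacity, ttl):
        self.capacity, self.ttl = capacity, ttl
        self.store = OrderedDict()  # key -> (value, expiry_time)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self.store.get(key)
        if item is None or item[1] < now:
            self.store.pop(key, None)
            return None  # cache miss: would trigger a backhaul fetch
        self.store.move_to_end(key)  # mark as recently used
        return item[0]

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[key] = (value, now + self.ttl)
```

    The energy argument in the dissertation comes from serving such hits from memory instead of disk or backhaul; the cache logic itself is the same in either case.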