85 research outputs found

    A Survey on Array Storage, Query Languages, and Systems

    Full text link
    Since scientific investigation is one of the most important providers of massive amounts of ordered data, there is renewed interest in array data processing in the context of Big Data. To the best of our knowledge, a unified resource that summarizes and analyzes array processing research over its long existence is currently missing. In this survey, we provide a guide for past, present, and future research in array processing. The survey is organized along three main topics. Array storage covers the aspects of partitioning arrays into chunks. Array query languages are examined through multiple proposals for a reduced set of array operators that can form the foundation of such a language. Lastly, we survey real systems for array processing. The result is a thorough survey of array data storage and processing that should be consulted by anyone interested in this research topic, independent of experience level. The survey is not complete, though; we greatly appreciate pointers to any work we may have forgotten to mention. Comment: 44 pages
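
    Chunk-based partitioning, the core of the storage discussion above, can be illustrated with a minimal sketch (not drawn from the survey itself): the function below tiles a dense 2-D NumPy array into fixed-size regular chunks, the baseline scheme that most array systems refine. The `chunk_shape` parameter is illustrative, not an operator from any surveyed system.

```python
import numpy as np

def regular_chunks(array, chunk_shape):
    """Yield ((row_start, col_start), chunk) pairs for a 2-D array.

    Regular (aligned, fixed-size) chunking: the array is tiled by
    chunk_shape; chunks on the right/bottom edges may be smaller.
    """
    rows, cols = array.shape
    cr, cc = chunk_shape
    for r in range(0, rows, cr):
        for c in range(0, cols, cc):
            yield (r, c), array[r:r + cr, c:c + cc]

if __name__ == "__main__":
    a = np.arange(36).reshape(6, 6)
    for origin, chunk in regular_chunks(a, (4, 4)):
        print(origin, chunk.shape)   # edge chunks are 4x2, 2x4, or 2x2
```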

    Edge Intelligence Simulator: a platform for simulating intelligent edge orchestration solutions

    Get PDF
    Abstract. To support the stringent requirements of future intelligent and interactive applications, intelligence needs to become an essential part of resource management in the edge environment. Developing intelligent orchestration solutions is a challenging and arduous task, in which the evaluation and comparison of proposed solutions is a focal point. Simulation is commonly used to evaluate and compare proposed solutions. However, no openly available simulator currently has a specific focus on supporting research on intelligent edge orchestration methods. This thesis presents a simulation platform called Edge Intelligence Simulator (EISim), whose purpose is to facilitate research on intelligent edge orchestration solutions. In its current form, the platform supports simulating deep reinforcement learning-based solutions and different orchestration control topologies in scenarios related to task offloading and resource pricing at the edge. The platform also includes additional tools for creating simulation environments, running simulations for agent training and evaluation, and plotting results. This thesis gives a comprehensive overview of the state of the art in edge and fog simulation, orchestration, offloading, and resource pricing, which provides a basis for the design of EISim. The methods and tools that form the foundation of the current EISim implementation are also presented, along with a detailed description of the EISim architecture, default implementations, use, and additional tools. Finally, EISim with its default implementations is validated and evaluated through a large-scale simulation study with 24 simulation scenarios. The results of the simulation study verify the end-to-end performance of EISim and show its capability to produce sensible results. The results also illustrate how EISim can help researchers control and monitor the training of intelligent agents, as well as evaluate solutions against different control topologies.
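
    As a rough, hypothetical illustration of the kind of loop such a simulator drives (EISim's actual interfaces are not reproduced here), the sketch below trains a tabular Q-learning agent, a stand-in for the deep reinforcement learning agents the platform targets, to decide between local execution and offloading based on the local queue length. All class and parameter names are illustrative.

```python
import random
from collections import defaultdict

class OffloadEnv:
    """Toy edge environment: state = local queue length, actions = {0: local, 1: offload}."""
    def __init__(self, max_queue=10):
        self.max_queue = max_queue
        self.queue = 0

    def step(self, action):
        # Offloading drains the queue faster but pays a fixed network delay;
        # local execution is cheap only while the queue is short.
        if action == 1:
            delay = 2.0
            self.queue = max(0, self.queue - 3)
        else:
            delay = 0.5 + 0.4 * self.queue
            self.queue = max(0, self.queue - 1)
        self.queue = min(self.max_queue, self.queue + random.randint(0, 2))
        return self.queue, -delay   # reward = negative latency

def train(episodes=200, steps=50, eps=0.1, alpha=0.2, gamma=0.95):
    q = defaultdict(lambda: [0.0, 0.0])
    env = OffloadEnv()
    for _ in range(episodes):
        state = env.queue = 0
        for _ in range(steps):
            # epsilon-greedy action selection
            action = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda a: q[state][a])
            next_state, reward = env.step(action)
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    policy = train()
    for s in range(11):
        print(s, "offload" if policy[s][1] > policy[s][0] else "local")
```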

    Strategic and operational services for workload management in the cloud

    Full text link
    In hosting environments such as Infrastructure as a Service (IaaS) clouds, desirable application performance is typically guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated by a service provider for unencumbered use by customers to ensure proper operation of their workloads. Most IaaS offerings are presented to customers as fixed-size, fixed-price SLAs that do not match the needs of specific applications well. Furthermore, arbitrary colocation of applications with different SLAs may lead to inefficient utilization of hosts' resources and to economically undesirable customer behavior. In this thesis, we propose the design and architecture of a Colocation as a Service (CaaS) framework: a set of strategic and operational services that allow the efficient colocation of customer workloads. CaaS strategic services provide customers the means to specify their application workloads using an SLA language that gives them the opportunity and incentive to take advantage of any tolerances they may have regarding the scheduling of their workloads. CaaS operational services provide the information necessary for, and carry out, the reconfigurations mandated by the strategic services. We recognize that there may be multiple, functionally equivalent ways to express an SLA. We therefore present a service that allows the provably safe transformation of SLAs from one form to another for the purpose of achieving more efficient colocation. Our CaaS framework could be incorporated into an IaaS offering by providers, or it could be implemented as a value-added proposition by IaaS resellers. To establish the practicality of such offerings, we present a prototype implementation of our proposed CaaS framework.
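
    As a minimal, hypothetical sketch of the colocation decision at the heart of such a framework (not the CaaS framework's actual SLA language or algorithms), the snippet below models an SLA simply as a fractional share of a host's capacity and packs workloads onto hosts first-fit so that reserved shares never exceed capacity.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    name: str
    share: float   # fraction of a host's capacity reserved for the workload (0..1)

def colocate(slas, host_capacity=1.0):
    """First-fit-decreasing packing of SLAs onto hosts so that the sum of
    reserved shares on each host never exceeds its capacity.

    An illustrative stand-in for the colocation decision only.
    """
    hosts = []   # each host is a list of colocated SLAs
    for sla in sorted(slas, key=lambda s: s.share, reverse=True):
        for host in hosts:
            if sum(s.share for s in host) + sla.share <= host_capacity:
                host.append(sla)
                break
        else:
            hosts.append([sla])   # no existing host fits; open a new one
    return hosts

if __name__ == "__main__":
    workloads = [SLA("web", 0.5), SLA("batch", 0.3), SLA("db", 0.6), SLA("cache", 0.4)]
    for i, host in enumerate(colocate(workloads)):
        print(f"host {i}: {[s.name for s in host]}")
```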

    Parallel computing 2011, ParCo 2011: book of abstracts

    Get PDF
    This book contains the abstracts of the presentations at the conference Parallel Computing 2011, 30 August - 2 September 2011, Ghent, Belgium.

    Energy Awareness and Scheduling in Mobile Devices and High End Computing

    Get PDF
    As energy demands rise due to growing economies and growing populations, there will be greater emphasis on sustainable supply, conservation, and efficient usage of this vital resource. Even at a smaller scale, the need to minimize energy consumption remains compelling in embedded, mobile, and server systems such as handheld devices, robots, spaceships, laptops, cluster servers, and sensors. This is due to the direct impact of constrained energy sources, such as battery size and weight, as well as the cooling expenses incurred in cluster-based systems to reduce heat dissipation. Energy management therefore plays a paramount role not only in hardware design but also in user-application, middleware, and operating-system design. At a larger scale, datacenters are sprouting everywhere due to the exponential growth of Big Data in every aspect of human life, with cloud computing as the prevailing paradigm. This dissertation focuses on techniques, specifically algorithmic ones, to scale down energy needs whenever system performance can be relaxed. We examine the significance and relevance of this research and develop a methodology to study this phenomenon. Specifically, the research studies energy-aware resource reservation algorithms that satisfy both performance needs and energy constraints. Many energy management schemes focus on a single resource that is dedicated to real-time or non-real-time processing. Unfortunately, many practical systems must support a combination of hard and soft real-time periodic tasks, aperiodic real-time tasks, interactive tasks, and batch tasks, and each task may also require access to multiple resources. Therefore, this research tackles the NP-hard problem of providing timely and simultaneous access to multiple resources through practical abstractions and near-optimal heuristics aided by cooperative scheduling. We provide an elegant EAS model that works across this spectrum and uses a run-profile-based approach to scheduling. We apply this model to significant applications such as BLAT and gene-sequence assembly in the bioinformatics domain. We also provide a simulation that extends this model to cloud computing and answers "what if" scenario questions for consumers and operators of cloud resources, helping answer questions about deadlines, single versus distributed cluster use, and the impact of energy index and availability on revenue and ROI.
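
    A toy example of scaling down energy when performance can be relaxed (illustrative only; the dissertation's EAS model uses run profiles and multi-resource reservations, which are not modeled here): pick the lowest CPU frequency that still meets a task's deadline under a convex power model.

```python
def pick_frequency(gigacycles, deadline_s, freqs_ghz, power=lambda f: f ** 2):
    """Choose the lowest frequency (in GHz) that finishes the given amount of
    work (in giga-cycles) before the deadline, minimizing a convex power model
    (here power ~ f^2, in arbitrary units).
    """
    feasible = [f for f in freqs_ghz if gigacycles / f <= deadline_s]
    if not feasible:
        return None                        # deadline unmet even at top speed
    f = min(feasible)                      # slowest feasible speed saves energy
    energy = power(f) * (gigacycles / f)   # energy = power * runtime
    return f, energy

if __name__ == "__main__":
    # 2 giga-cycles of work, 1.5 s deadline, three available P-states
    print(pick_frequency(gigacycles=2.0, deadline_s=1.5, freqs_ghz=[1.0, 1.6, 2.4]))
```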

    Veröffentlichungen und Vorträge 2009 der Mitglieder der Fakultät für Informatik

    Get PDF

    Resource provisioning and scheduling algorithms for hybrid workflows in edge cloud computing

    Get PDF
    In recent years, Internet of Things (IoT) technology has been adopted in a wide range of application domains to provide real-time monitoring, tracking, and analysis services. The worldwide number of IoT-connected devices is projected to increase to 43 billion by 2023, and IoT technologies are expected to be involved in 25% of the business sector. Latency-sensitive applications such as intelligent video surveillance, smart homes, autonomous vehicles, and augmented reality are all emergent research directions in industry and academia. These applications must connect large numbers of sensing devices to attain the desired level of service quality and decision accuracy in a timely manner. Moreover, continuous data streams require processing large amounts of data, which adds a huge overhead on computing and network resources. Thus, latency-sensitive and resource-intensive applications introduce new challenges for current computing models, i.e., batch and stream. In this thesis, we refer to the integrated application model of stream and batch applications as a hybrid workflow model. The main challenge of the hybrid model is meeting the quality of service (QoS) requirements of the two computation systems. This thesis provides a systematic and detailed modeling of hybrid workflows that describes the internal structure of each application type for purposes of resource estimation, system tuning, and cost modeling. To optimize the execution of hybrid workflows, this thesis proposes algorithms, techniques, and frameworks for resource provisioning and task scheduling on various computing systems, including cloud, edge cloud, and cooperative edge cloud. Overall, the experimental results in this thesis provide strong evidence for the proposed understanding and vision of integrated stream and batch applications, and for how edge computing and other emergent technologies such as 5G networks and IoT will contribute to more sophisticated and intelligent solutions in many disciplines, toward a safer, more secure, healthy, smart, and sustainable society.
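
    As an illustrative toy heuristic (not the provisioning and scheduling algorithms proposed in the thesis), the sketch below places latency-sensitive stream operators preferentially on edge nodes and batch tasks on cloud nodes, falling back to whichever node still has capacity.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu: float                 # required CPU cores
    latency_sensitive: bool    # stream operators are latency-sensitive; batch jobs are not

@dataclass
class Node:
    name: str
    cpu_free: float
    is_edge: bool

def place(tasks, nodes):
    """Greedy placement: stream (latency-sensitive) tasks prefer edge nodes,
    batch tasks prefer cloud nodes; otherwise use any node with free capacity.
    """
    plan = {}
    # schedule latency-sensitive tasks first
    for task in sorted(tasks, key=lambda t: not t.latency_sensitive):
        preferred = sorted(nodes, key=lambda n: n.is_edge != task.latency_sensitive)
        for node in preferred:
            if node.cpu_free >= task.cpu:
                node.cpu_free -= task.cpu
                plan[task.name] = node.name
                break
        else:
            plan[task.name] = "unplaced"
    return plan

if __name__ == "__main__":
    tasks = [Task("video-filter", 2, True), Task("daily-report", 4, False), Task("alert", 1, True)]
    nodes = [Node("edge-1", 3, True), Node("cloud-1", 8, False)]
    print(place(tasks, nodes))
```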

    Quantum-centric Supercomputing for Materials Science: A Perspective on Challenges and Future Directions

    Full text link
    Computational models are an essential tool for the design, characterization, and discovery of novel materials. Hard computational tasks in materials science stretch the limits of existing high-performance supercomputing centers, consuming much of their simulation, analysis, and data resources. Quantum computing, on the other hand, is an emerging technology with the potential to accelerate many of the computational tasks needed for materials science. To do so, quantum technology must interact with conventional high-performance computing in several ways: validation of approximate results, identification of hard problems, and synergies in quantum-centric supercomputing. In this paper, we provide a perspective on how quantum-centric supercomputing can help address critical computational problems in materials science, the challenges to face in order to solve representative use cases, and suggested new directions. Comment: 60 pages, 14 figures; comments welcome
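
    One of the interaction modes named above, validation of approximate results, can be illustrated with a deliberately simple, hypothetical sketch: a quantum-computed estimate of a ground-state energy is accepted only if it agrees with a trusted classical reference within a tolerance. The numbers are mock values, and no quantum SDK is involved.

```python
def validate_estimate(quantum_value, classical_reference, atol=1e-3):
    """Accept a quantum-computed estimate only if it agrees with a trusted
    classical (HPC) reference within an absolute tolerance.

    Purely illustrative of the 'approximate results validation' interaction;
    the values used below are mock numbers, not results from the paper.
    """
    error = abs(quantum_value - classical_reference)
    return error <= atol, error

if __name__ == "__main__":
    ok, err = validate_estimate(quantum_value=-1.1362, classical_reference=-1.1373, atol=5e-3)
    print(f"accepted={ok}, |error|={err:.4f} (energy units)")
```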