    Predicting Software Performance with Divide-and-Learn

    Predicting the performance of highly configurable software systems is the foundation of performance testing and quality assurance. To that end, recent work has relied on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherited from the configuration landscape: the influence of configuration options (features) and the distribution of data samples are both highly sparse. In this paper, we propose an approach based on the concept of 'divide-and-learn', dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with the feature sparsity. A newly given configuration is then assigned to the right division, whose local model produces the final prediction. Experimental results from eight real-world systems and five sets of training data reveal that, compared with state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (within which 26 cases are significantly better) with up to 1.94× improvement in accuracy; requires fewer samples to reach the same or better accuracy; and incurs acceptable training overhead. Practically, DaL also considerably improves different global models when using them as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: https://github.com/ideas-labo/DaL. Comment: This paper has been accepted by The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 202
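    The abstract only names the stages of the pipeline; the following is a minimal, hedged sketch of the divide-and-learn idea, not the authors' implementation (which lives in the linked repository). The number of divisions, the use of k-means for dividing samples, and the regularized network configuration are all illustrative assumptions.

```python
# A hedged sketch of "divide-and-learn": cluster sampled configurations into
# divisions, train one regularized local model per division, and route new
# configurations to the model of their nearest division. The division count,
# the k-means dividing step, and the model choice are illustrative assumptions,
# not the paper's exact design.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

class DivideAndLearn:
    def __init__(self, n_divisions=4):
        self.divider = KMeans(n_clusters=n_divisions, n_init=10, random_state=0)
        self.local_models = {}

    def fit(self, X, y):
        # Divide the training samples into divisions (handles sample sparsity).
        labels = self.divider.fit_predict(X)
        for d in np.unique(labels):
            # One regularized network per division (handles feature sparsity);
            # the L2 penalty `alpha` stands in for the paper's regularization.
            model = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-2,
                                 max_iter=2000, random_state=0)
            model.fit(X[labels == d], y[labels == d])
            self.local_models[d] = model
        return self

    def predict(self, X):
        # Assign each new configuration to its division, then use its local model.
        labels = self.divider.predict(X)
        return np.array([self.local_models[d].predict(x.reshape(1, -1))[0]
                         for d, x in zip(labels, np.asarray(X))])
```

    Under this sketch, `fit` trains one local model per division and `predict` routes each new configuration to its nearest division, mirroring the assignment step described above.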

    Open Infrastructure for Edge Computing

    Edge computing, bringing computation closer to end-users and data producers, has now firmly gained the status of an enabling technology for new kinds of emerging applications, such as Virtual/Augmented Reality and IoT. The motivation backing this rapidly developing computing paradigm is mainly twofold. On the one hand, the goal is to minimize the latency that end-users experience, not only improving quality of service but also empowering new kinds of applications that would not even be possible with higher delays. On the other, edge computing aims to save core networking bandwidth from being overwhelmed by the myriad IoT devices sending their data to the cloud. Once IoT streams are analyzed and aggregated at edge servers, much less networking capacity is required to persist the remaining information in distant cloud datacenters. Despite this solid motivation and continuous interest from both academia and industry, edge computing is still in its nascency. To leave adolescence and take its place on a par with the cloud computing paradigm, finally forming a versatile edge-cloud environment, the newcomer needs to overcome a number of challenges. First of all, the computing infrastructure for deploying edge applications and services is very limited at the moment. There are initiatives supported by the telecommunication industry, like Multi-access Edge Computing, and cloud providers plan to establish facilities near the edge of the network. However, we believe that even more effort will be required to make edge servers generally available. Second, to emerge and function efficiently, the edge computing ecosystem needs practices, standards, and governance mechanisms of its own kind. This specificity originates from the highly dispersed nature of the edge, implying high heterogeneity of resources and diverse administrative control over the computing facilities. Finally, the third challenge is the dynamicity of the edge computing environment due to, e.g., varying demand and migrating clients. In this thesis, we outline the underlying principles of what we call Open Infrastructure for Edge (OpenIE), identify its key features, and provide solutions for them. Intended to tackle the challenges mentioned above, OpenIE defines a set of common practices and loosely coupled technologies that create a unified environment out of highly heterogeneous and administratively partitioned edge computing resources. In particular, we design a protocol capable of discovering edge providers on a global scale. Further, we propose a framework of Intelligent Containers (ICONs), capable of autonomous decision-making and of forming a service overlay in a large-scale edge-cloud setting. As edge providers need to be economically incentivized, we devise a truthful double auction mechanism where edge providers can meet application owners or administrators in need of deploying an edge service. Because our auction is truthful, the best strategy for every participant is to bid their privately known valuation (or cost), making complex market strategies unnecessary. We analyze the potential of distributed ledgers to serve OpenIE's decentralized agreement and transaction handling, and show how our auction can be implemented on top of them. With the key OpenIE building blocks mentioned above, we hope to make entry as easy as possible for anyone interested in service provisioning at the edge.
We hope that with the emergence of independent edge providers, edge computing will finally become pervasive.
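    The summary above only states that the double auction is truthful; below is a minimal sketch of one classical way to obtain truthfulness, a McAfee-style trade-reduction double auction between edge providers (sellers) and application owners (buyers). Single-unit bids and this particular pricing rule are illustrative assumptions, not necessarily the mechanism designed in the thesis.

```python
# A hedged sketch of a truthful double auction in the spirit of McAfee's
# trade-reduction mechanism, one standard way to make bidding one's true
# valuation (buyers) or cost (sellers) a dominant strategy. Single-unit bids
# and this pricing rule are assumptions for illustration; the thesis's actual
# mechanism and its ledger integration may differ.
def trade_reduction_auction(buyer_bids, seller_asks):
    """buyer_bids / seller_asks are lists of (participant_id, amount).

    Returns (winning_buyer_ids, winning_seller_ids, buyer_price, seller_price).
    """
    buyers = sorted(buyer_bids, key=lambda x: -x[1])   # highest bid first
    sellers = sorted(seller_asks, key=lambda x: x[1])  # lowest ask first

    # k = number of efficient trades: the largest k with bid_k >= ask_k.
    k = 0
    for (_, bid), (_, ask) in zip(buyers, sellers):
        if bid >= ask:
            k += 1
        else:
            break
    if k == 0:
        return [], [], None, None  # no mutually beneficial trade exists

    # Candidate budget-balanced price taken from the first excluded pair.
    if k < len(buyers) and k < len(sellers):
        p = (buyers[k][1] + sellers[k][1]) / 2.0
        if sellers[k - 1][1] <= p <= buyers[k - 1][1]:
            # All k pairs trade at the single clearing price p.
            return ([b for b, _ in buyers[:k]],
                    [s for s, _ in sellers[:k]], p, p)

    # Trade reduction: drop the k-th pair; the remaining buyers pay bid_k and
    # the remaining sellers receive ask_k, which preserves truthfulness.
    return ([b for b, _ in buyers[:k - 1]],
            [s for s, _ in sellers[:k - 1]],
            buyers[k - 1][1], sellers[k - 1][1])
```

    Truthfulness in this sketch comes from deriving prices from the excluded bid and ask, so no participant can improve their outcome by misreporting their valuation or cost.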

    Libro de Actas JCC&BD 2018 : VI Jornadas de Cloud Computing & Big Data

    This volume compiles the papers presented at the VI Jornadas de Cloud Computing & Big Data (JCC&BD), held from 25 to 29 June 2018 at the Facultad de Informática of the Universidad Nacional de La Plata (UNLP).

    Consistent high performance and flexible congestion control architecture

    The part of the TCP software stack that controls how fast a data sender transfers packets is usually referred to as congestion control, because it was originally introduced to avoid network congestion among multiple competing flows. Over the last 30 years of Internet evolution, the traditional TCP congestion control architecture, despite an army of specially engineered implementations and improvements over the original software, has suffered increasingly from surprisingly poor performance under today's complicated network conditions. We argue that the traditional TCP congestion control family has little hope of achieving consistent high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses. In this thesis, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its rate control actions and the empirically experienced performance, enabling it to use intelligent control algorithms to consistently adopt actions that result in high performance. We first lay the foundation of the PCC architecture and analytically prove the viability of this new congestion control architecture. Specifically, we show that, contrary to intuition, with a certain form of utility function and a theoretically simplified rate control algorithm, selfishly competing senders converge to a fair and stable Nash equilibrium. With this architectural and theoretical guideline, we then design and implement the first congestion control protocol in the PCC family: PCC Allegro. PCC Allegro immediately demonstrates its architectural benefits with significant, often more than 10X, performance gains across a wide spectrum of challenging network conditions. With this very encouraging performance validation, we further advance PCC's architecture in both the utility function framework and the learning rate control algorithm. Taking a principled approach based on online learning theory, we design PCC Vivace with a new, strictly socially concave utility function framework and a gradient-ascent-based learning rate control algorithm. PCC Vivace significantly improves performance on fast-changing networks, yields a better tradeoff between convergence speed and stability, and achieves better TCP friendliness compared to PCC Allegro and other state-of-the-art congestion control protocols. Moreover, PCC Vivace's expressive utility function framework can be tuned differently for different competing flows to produce predictable converged throughput ratios for each flow. This opens significant future potential for PCC Vivace in centrally controlled networking paradigms such as Software-Defined Networks (SDN). Finally, with all these research advances, we aim to push the PCC architecture toward production use with a user-space tunneling proxy and a successful integration with Google's QUIC transport framework.
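    The abstract describes PCC's control loop only at a high level; the sketch below illustrates the general idea: probe the sending rate slightly above and below its current value, score the observed performance with a utility function, and move the rate along a finite-difference estimate of the utility gradient. The utility shape, its coefficients, and the step sizes are illustrative assumptions rather than the exact Allegro or Vivace formulations.

```python
# A hedged sketch of performance-oriented rate control in the spirit of PCC:
# probe the current sending rate, score what was empirically observed with a
# utility function, and take a gradient-ascent step on the rate. The utility
# shape, its coefficients, and the step sizes are illustrative assumptions,
# not PCC Allegro's or Vivace's exact definitions.

def utility(throughput, loss_rate, rtt_gradient,
            loss_penalty=10.0, latency_penalty=100.0):
    # Reward delivered throughput; penalize loss and a growing RTT.
    return (throughput
            - loss_penalty * throughput * loss_rate
            - latency_penalty * throughput * rtt_gradient)

def next_rate(rate, measure, epsilon=0.05, step=0.1):
    """One control step.

    rate    -- current sending rate (e.g. in Mbit/s)
    measure -- callable mapping a trial rate to one monitor interval's
               observations: (throughput, loss_rate, rtt_gradient)
    """
    # Probe slightly above and slightly below the current rate.
    u_hi = utility(*measure(rate * (1 + epsilon)))
    u_lo = utility(*measure(rate * (1 - epsilon)))
    # Finite-difference estimate of d(utility)/d(rate).
    grad = (u_hi - u_lo) / (2 * epsilon * rate)
    # Gradient-ascent update on the rate, kept strictly positive.
    return max(rate + step * grad, 0.01)
```

    Because the controller reasons only about observed utility rather than individual packet-level events, swapping in a different utility function (as PCC Vivace does) changes the protocol's behavior without changing the control loop.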

    Raspberry Pi Technology

    A mixed-method empirical study of Function-as-a-Service software development in industrial practice

    Function-as-a-Service (FaaS) describes cloud computing services that make infrastructure components transparent to application developers, thus falling in the larger group of “serverless” computing models. When using FaaS offerings, such as AWS Lambda, developers provide atomic and short-running code for their functions, and FaaS providers execute and horizontally scale them on-demand. Currently, there is no systematic research on how developers use serverless, what types of applications lend themselves to this model, or what architectural styles and practices FaaS-based applications are based on. We present results from a mixed-method study, combining interviews with practitioners who develop applications and systems that use FaaS, a systematic analysis of grey literature, and a Web-based survey. We find that successfully adopting FaaS requires a different mental model, where systems are primarily constructed by composing pre-existing services, with FaaS often acting as the “glue” that brings these services together. Tooling availability and maturity, especially related to testing and deployment, remains a major difficulty. Further, we find that current FaaS systems lack systematic support for function reuse, and abstractions and programming models for building non-trivial FaaS applications are limited. We conclude with a discussion of implications for FaaS providers, software developers, and researchers
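    The study characterizes FaaS functions as small, stateless pieces of glue between managed services; a hedged sketch of what such a function typically looks like, written here as an AWS Lambda-style Python handler with hypothetical bucket, table, and field names, is shown below.

```python
# A hedged sketch of the "glue" role the study describes: a short, stateless
# AWS Lambda-style handler that reacts to an event from one managed service
# (S3) and forwards a result to another (DynamoDB). The bucket and table
# names, and the event payload fields, are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("example-orders")  # hypothetical table

def handler(event, context):
    # Invoked by the platform for each S3 object-created notification; the
    # provider scales these invocations horizontally and bills per execution.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        order = json.loads(body)
        # Compose pre-existing services: persist a small summary downstream.
        table.put_item(Item={"order_id": order["id"], "total": str(order["total"])})
    return {"status": "ok"}
```

    The point of the sketch is the mental model the study reports: the handler itself carries little logic, while the surrounding managed services (object storage, database, event triggers) do the heavy lifting.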

    Earth Observation Open Science and Innovation

    geospatial analytics; social observatory; big earth data; open data; citizen science; open innovation; earth system science; crowdsourced geospatial data; science in society; data science

    Optimization and Communication in UAV Networks

    UAVs are becoming a reality and are attracting increasing attention. They can be remotely controlled or completely autonomous, used alone or as a fleet, and deployed in a wide range of applications. They are constrained by hardware, since they cannot be too heavy and must rely on batteries. Their use still raises a large set of exciting new challenges, for example in trajectory optimization and positioning when they operate alone or in cooperation, and in communication when they fly as a swarm, to name but a few. This book presents new, original contributions on optimization and communication in UAVs and UAV swarms.

    Rule-Makers or Rule-Takers? Exploring the Transatlantic Trade and Investment Partnership

    The Transatlantic Trade and Investment Partnership (TTIP) is an effort by the United States and the European Union to reposition themselves for a world of diffuse economic power and intensified global competition. It is a next-generation economic negotiation that breaks the mould of traditional trade agreements. At the heart of the ongoing talks is the question of whether, and in which areas, the two major democratic actors in the global economy can address costly frictions generated by their deep commercial integration by aligning rules and other instruments. The aim is to reduce duplication in areas where levels of regulatory protection are equivalent, as well as to foster wide-ranging regulatory cooperation and set a benchmark for high-quality global norms. In this volume, European and American experts explain the economic context of TTIP and its geopolitical implications, and then explore the challenges and consequences of US-EU negotiations across numerous sensitive areas, ranging from food safety and public procurement to economic and regulatory assessments of technical barriers to trade, automotive, chemicals, energy, services, investor-state dispute settlement mechanisms, and regulatory cooperation. Their insights cut through the confusion and tremendous public controversies now swirling around TTIP, and help decision-makers understand how the United States and the European Union can remain rule-makers rather than rule-takers in a globalising world in which their relative influence is waning.