99 research outputs found

    Enabling Network Flexibility by Decomposing Network Functions

    Next-generation networks are expected to serve a wide range of use cases, each of which features a set of diverse and stringent requirements. For instance, video streaming and industrial automation are becoming increasingly prominent in our society, but while the first use case requires high bandwidth, the second mandates sub-millisecond latency. To accommodate these requirements, networks must be flexible, i.e., they must provide cost-efficient ways of adapting to different requirements. For example, networks must be able to scale with the traffic load to support the bandwidth requirements of the video streaming use case. In response to the need for flexibility, the scientific community has proposed Software Defined Networking (SDN), Network Function Virtualization (NFV), and network slicing. SDN simplifies the management of networks by separating the control plane and the data plane, while NFV allows scaling the network functions with the traffic load. Network slicing provides operators with virtual networks which can be tailored to meet the requirements of the use cases. While these technologies pave the way towards network flexibility, the capability of networks to adapt to different use cases is still limited by several inefficiencies. For example, to improve the scalability of network functions, network operators use dedicated systems which manage the state of network functions by keeping it in a data store. These systems are designed to offer specific features, such as reliability or performance, which determine the data store adopted and the Application Programming Interface (API) exposed to the network functions. Network operators need to change the data store depending on the features required by the use case served, but this operation involves refactoring the network functions and thus implies significant costs. Furthermore, network operators need to migrate network functions, for example to minimize bandwidth usage during traffic peaks. However, network slices convey the traffic coming from a multitude of sources through a small set of network functions, which are consequently resource-hungry and difficult to migrate, forcing the network operator to overprovision the network. Due to these inefficiencies, adapting the network to different use cases requires a significant increase in both Capital Expenditure (CapEx) and Operational Expenditure (OpEx), which is a showstopper for network operators. Addressing these inefficiencies would lower the costs of adapting networks to different use cases, thus improving network flexibility. To this end, we propose to decompose network functions into fine-grained network functions, each providing only a subset of the functionalities, or processing only a share of the traffic, thus obtaining network functions which are less resource-hungry, easier to migrate, and easier to upgrade. We examine three directions along which the decomposition can be performed. The first direction leverages the networking planes, such as the control and data planes, for example separating the functionalities for packet processing from those for network management. The second direction leverages the sources and destinations of the traffic flowing through each network function, creating a dedicated network function for each source-destination pair. The third direction decouples the state management of the network functions from the data store by leveraging an API which is independent of the data store adopted.
We show that each decomposition addresses a specific inefficiency. For example, decoupling the state management from the data store enables network operators to change the data store adopted without refactoring the network functions. Decomposing network functions also brings some drawbacks. For example, it can result in an increase in the number of network functions, thus making network management tasks, such as network reconfiguration, more challenging. We study two key drawbacks and discuss the solutions we designed to counter them. In this thesis, we show that decomposing network functions improves network flexibility, but it must be complemented with techniques to mitigate any negative side effects.

Next-generation networks are expected to serve a wide range of use cases, each of which places different requirements on network functions and features. For example, video streaming and industrial automation play an increasingly important role in our society, but while the first use case requires high bandwidth, the second demands sub-millisecond latency. To meet these requirements, networks must be flexible, that is, they must offer cost-efficient ways of adapting to different requirements. In response to the need for flexibility, the scientific community has proposed Software Defined Networking (SDN), Network Function Virtualization (NFV), and network slicing. SDN simplifies network management by separating the control plane and the data plane, while NFV allows network functions to be scaled with the traffic load. Network slicing provides operators with virtual networks that can be tailored to meet the requirements of the use cases. Although these technologies pave the way towards network flexibility, the ability of networks to adapt to different use cases is still limited by many inefficiencies. For example, to improve the scalability of network functions, network operators use separate systems for storing state. Network operators have to change the data store according to the features required by the use case being served, but this involves rebuilding the network functions, which entails significant costs. Because of these inefficiencies, adapting the network to different use cases requires a significant increase in both capital expenditure (CapEx) and operational expenditure (OpEx). This dissertation presents a new method for partitioning and decomposing network functions into finer-grained functions, each of which provides part of the original functionality. The method yields distributed and interoperable network functions that use fewer network resources and are easier to migrate and to employ in different use cases. The dissertation shows that each decomposition helps to correct a specific inefficiency in the system. For example, decoupling state management from the data store allows network operators to change the adopted data store without having to modify the network functions. In some situations, however, partitioning and decomposing network functions can weaken the properties of the network. The dissertation studies the key weaknesses of the method and presents solutions to them. This work shows that partitioning and decomposing network functions improve network flexibility, but the method must be complemented with measures to mitigate possible harmful side effects.
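The decoupling described in the third direction can be sketched in a few lines. The interface and class names below are hypothetical and not taken from the thesis; the point is only that a network function which reads and writes its state through a data-store-agnostic API can have its backing store swapped without being refactored:

```python
from abc import ABC, abstractmethod

class StateStore(ABC):
    """Hypothetical data-store-agnostic API for network function state."""

    @abstractmethod
    def get(self, key: str):
        ...

    @abstractmethod
    def put(self, key: str, value) -> None:
        ...

class InMemoryStore(StateStore):
    """Simple local backend; a replicated or persistent backend could
    implement the same interface without any change to the network function."""

    def __init__(self):
        self._data = {}

    def get(self, key: str):
        return self._data.get(key)

    def put(self, key: str, value) -> None:
        self._data[key] = value

class NatFunction:
    """Toy NAT-like network function keeping per-flow state in the store."""

    def __init__(self, store: StateStore):
        self.store = store

    def handle_packet(self, flow_id: str, packet: bytes) -> bytes:
        state = self.store.get(flow_id) or {"packets": 0}
        state["packets"] += 1
        self.store.put(flow_id, state)
        return packet  # translation logic omitted

# Swapping the backend only touches this constructor call, not the function code.
nat = NatFunction(InMemoryStore())
nat.handle_packet("10.0.0.1:1234->8.8.8.8:53", b"...")
```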

    EMPIRICAL ASSESSMENT OF THE IMPACT OF USING AUTOMATIC STATIC ANALYSIS ON CODE QUALITY

    Automatic static analysis (ASA) tools analyze source or compiled code looking for violations of recommended programming practices (called issues) that might cause faults or degrade some dimensions of software quality. Antonio Vetro' focused his PhD on studying how applying ASA impacts software quality, taking as reference the quality dimensions specified by the standard ISO/IEC 25010. The epistemological approach he used is that of empirical software engineering. During his three-year PhD, he conducted experiments and case studies in three main areas: Functionality/Reliability, Performance, and Maintainability. He empirically showed that specific ASA issues had an impact on these quality characteristics in the contexts under study: removing them from the code therefore resulted in a quality improvement. Vetro' also investigated and proposed new research directions for this field: using ASA to improve software energy efficiency and to detect problems deriving from the interaction of multiple languages. The contribution is enriched with a final recommendation of a generalized process for researchers and practitioners with a twofold goal: improve software quality through ASA and create a body of knowledge on the impact of using ASA on specific software quality dimensions, based on empirical evidence. This thesis represents a first step towards this goal.
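As an illustration of the kind of issue such tools report, the snippet below shows a recommended-practice violation that pylint flags as W0102 (dangerous-default-value). The example is generic and not drawn from the thesis, but removing this kind of issue is the sort of intervention whose effect on quality the thesis studies:

```python
# A typical ASA finding: pylint reports W0102 ("dangerous-default-value")
# because the default list is created once and shared across calls,
# so it silently accumulates state and can cause real faults.
def add_issue(issue, issues=[]):      # flagged by the analyzer
    issues.append(issue)
    return issues

# Removing the issue also removes the latent fault.
def add_issue_fixed(issue, issues=None):
    if issues is None:
        issues = []
    issues.append(issue)
    return issues
```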

    Contribution to Quality-driven Evolutionary Software Development process for Service-Oriented Architectures

    The quality of software is a key element for the success of a system. Currently, with the advance of technology, consumers demand more and better services. Models for the development process also have to be adapted to new requirements. This is particularly true in the case of service-oriented systems (the domain of this thesis), where an unpredictable number of users can access one or several services. This work proposes an improvement of the models for the software development process based on the theory of evolutionary software development. The main objective is to maintain and improve the quality of software for as long as possible and with minimum effort and cost. Usually, this process is supported by methods known in the literature as agile software development methods. Another key element in this thesis is the service-oriented software architecture. Software architecture plays an important role in the quality of any software system. Service-oriented architecture adds flexibility: services are autonomous and compact assets, and they can be improved and integrated more easily. The model proposed in this thesis for evolutionary software development emphasizes the quality of services. Therefore, some principles of evolutionary development are redefined and new processes are introduced, such as architecture assessment, architecture recovery, and architecture conformance. Each new process is evaluated with case studies considering quality aspects. These have been selected according to market demand; they are performance, security, and evolvability. Other aspects could be considered in the same way as the previous three, but we believe that these quality attributes are enough to demonstrate the viability of our proposal.
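To make the architecture conformance step concrete, a minimal sketch follows; the service names and allowed-dependency rules are invented for illustration and are not taken from the thesis. The idea is to compare the dependencies found in the implementation against those permitted by the intended service-oriented architecture:

```python
# Hypothetical conformance check: compare implemented service dependencies
# against the dependencies permitted by the intended architecture.
ALLOWED = {
    "order_service": {"payment_service", "inventory_service"},
    "payment_service": set(),
    "inventory_service": set(),
}

IMPLEMENTED = {
    "order_service": {"payment_service", "inventory_service"},
    "payment_service": {"order_service"},   # cycle: not permitted
    "inventory_service": set(),
}

def conformance_violations(allowed, implemented):
    """Return (service, dependency) pairs that break the intended architecture."""
    return [
        (svc, dep)
        for svc, deps in implemented.items()
        for dep in deps
        if dep not in allowed.get(svc, set())
    ]

print(conformance_violations(ALLOWED, IMPLEMENTED))
# [('payment_service', 'order_service')]
```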

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to provide better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
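The interplay between Modelling and Simulation and parallel processing can be illustrated with a small sketch (not taken from the book): independent replications of a toy simulation are distributed across local worker processes, the same pattern that scales out to the distributed processing units of an HPC system:

```python
# Minimal sketch: embarrassingly parallel Monte Carlo replications of a toy model.
import random
from multiprocessing import Pool

def replicate(seed: int) -> float:
    """One simulation replication: estimate pi by random sampling."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(100_000))
    return 4.0 * hits / 100_000

if __name__ == "__main__":
    with Pool() as pool:                       # local stand-in for cluster workers
        estimates = pool.map(replicate, range(8))
    print(sum(estimates) / len(estimates))     # aggregate the replications
```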

    2013 IMSAloquium, Student Investigation Showcase

    This year, we are proudly celebrating the twenty-fifth anniversary of IMSA’s Student Inquiry and Research (SIR) Program. Our first IMSAloquium, then called Presentation Day, was held in 1989 with only ten presentations; this year we are nearing two hundred.

    Subcellular phenomena in colorectal cancer

    The Wnt signalling pathway is involved in stem cell maintenance, differentiation and tissue development, and in so doing plays a key role in controlling the homeostasis of colorectal crypts. In response to an external Wnt stimulus, the intracellular levels of the protein beta-catenin are regulated by the proteins which make up the Wnt signalling pathway. Abnormalities in the Wnt signalling pathway have been implicated in the initiation of colorectal and other cancers. In this thesis we analyse and simplify existing models of the Wnt signalling pathway, formulate models for Wnt's control of the cell cycle in a single cell, and incorporate these into a multiscale model to describe how Wnt may control the patterns of proliferation in a colorectal crypt. A systematic asymptotic analysis of an existing ODE-based model of the Wnt signalling pathway is undertaken, highlighting the operation of different pathway components over three different timescales. Guided by this analysis we derive a simplified model which is shown to retain the essential behaviour of the Wnt pathway, recreating the accumulation and degradation of beta-catenin. We utilise our simple model by coupling it to a model of the cell cycle. Our findings agree well with the observed patterns of proliferation in healthy colon crypts. Furthermore, the model clarifies a mechanism by which common colorectal cancer mutations may cause elevated beta-catenin and Cyclin D levels, leading to uncontrolled cell proliferation and thereby initiating colorectal cancer. A second model for the influence of the Wnt pathway on the cell cycle is constructed to incorporate the results of a recent set of knockout experiments. This model reproduces the healthy proliferation observed in crypts and also recreates the results of the knockout experiments by including the influence of Myc and CDK4 on the cell cycle. Analysis of this model leads us to suggest novel drug targets that may reverse the effects of an early mutation in the Wnt pathway. We have helped to build a flexible software environment for cell-based simulations of healthy and cancerous tissues. We discuss the software engineering approach that we have used to develop this environment, and its suitability for scientific computing. We then use this software to perform multiscale simulations with subcellular Wnt signalling models inside individual cells, the cells forming an epithelial crypt tissue. We have used the multiscale model to compare the effect of different subcellular models on crypt dynamics and to predict the distribution of beta-catenin throughout the crypt. We assess the extent to which a common experiment reveals the actual dynamics of a crypt and finally explain some recent mitochondrial-DNA experiments in terms of cell dynamics.
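A toy version of such a reduced model can be written down directly; the form and parameter values below are illustrative only and are not the equations derived in the thesis. Beta-catenin is produced at a constant rate and degraded at a rate that a Wnt stimulus suppresses, so switching Wnt on lets beta-catenin accumulate to a higher steady state:

```python
# Toy reduced model: d[beta-catenin]/dt = production - degradation(Wnt) * [beta-catenin].
# Parameter values are illustrative, not fitted.
import numpy as np
from scipy.integrate import odeint

def beta_catenin(b, t, wnt_on):
    production = 1.0
    degradation = 0.1 if wnt_on else 1.0   # Wnt stimulus inhibits the destruction complex
    return production - degradation * b

t = np.linspace(0.0, 20.0, 201)
b_off = odeint(beta_catenin, 0.1, t, args=(False,))[:, 0]  # settles near 1.0
b_on = odeint(beta_catenin, 0.1, t, args=(True,))[:, 0]    # accumulates towards 10.0
print(round(float(b_off[-1]), 2), round(float(b_on[-1]), 2))
```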

    06. 2005 Seventeenth Annual IMSA Presentation Day


    2005 Seventeenth Annual IMSA Presentation Day

    The Student Inquiry and Research Program fosters the development of students as highly skilled and integrative problem finders, problem solvers, and apprentice investigators, all skills required to succeed in the global workplace of the 21st Century.