16 research outputs found

    Data Analytics as a Service: A look inside the PANACEA project


    In-Production Continuous Testing for Future Telco Cloud

    Software Defined Networking (SDN) is an emerging paradigm to design, build, and operate networks. The driving motivation of SDN was the need for a major change in network technologies to support easier configuration, management, operation, reconfiguration, and evolution than is possible in current computer networks. In the SDN world, performance is not only related to the behaviour of the data plane. While the separation of control plane and data plane makes the latter significantly more agile, it offloads all of the complex processing workload to the control plane. This is further exacerbated in distributed network controllers, where the control plane is additionally loaded with state synchronization overhead. Furthermore, the introduction of SDN technologies has raised advanced challenges in achieving failure resilience, meant as the persistence of service delivery that can justifiably be trusted when facing changes, and fault tolerance, meant as the ability to avoid service failures in the presence of faults. Therefore, along with the “softwarization” of network services, an important goal in the engineering of such services (e.g., SDN and NFV) is the ability to test and assess their proper functioning not only in emulated conditions before release and deployment, but also “in production”, when the system is under real operating conditions.

    The goal of this thesis is to devise an approach to evaluate not only the performance, but also the effectiveness of the failure detection and mitigation mechanisms provided by SDN controllers, as well as the capability of SDNs to ultimately satisfy non-functional requirements, especially resiliency, availability, and reliability. The approach consists of exploiting benchmarking techniques, such as failure injection, to continuously obtain feedback on the performance of SDN services and on their capability to survive failures. This feedback is of paramount importance to improve the effectiveness of the system's internal mechanisms for reacting to anomalous situations potentially occurring in operation, while its services are regularly updated or improved.

    Within this vision, this dissertation first presents SCP-CLUB (SDN Control Plane CLoUd-based Benchmarking), a benchmarking framework designed to automate the characterization of SDN control plane performance, resilience, and fault tolerance in telco cloud deployments. The idea is to provide the same level of automation available for deploying NFV functions for the testing of different configurations, using idle cycles of the telco cloud infrastructure. The dissertation then proposes an extension of the framework with mechanisms to evaluate the runtime behaviour of a telco cloud SDN under (possibly unforeseen) failure conditions, by exploiting software failure injection.
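    To illustrate the kind of automated failure-injection campaign described above, the following Python sketch injects a failure into a controller instance, polls until the control plane reports healthy again, and records detection and recovery outcomes. All names and the simulated hooks (inject_failure, control_plane_healthy) are hypothetical stand-ins, not SCP-CLUB's actual API.

```python
# Minimal sketch of a failure-injection benchmarking loop, in the spirit of
# the approach described above. The hooks are simulated so the script runs
# standalone; a real campaign would wrap actual injection and health probes.
import random
import time
from dataclasses import dataclass

FAILURE_MODES = ["process-crash", "memory-exhaustion", "network-partition"]
TARGETS = ["controller-1", "controller-2", "controller-3"]

@dataclass
class RunResult:
    failure_mode: str
    target: str
    recovery_latency_s: float
    recovered: bool

def inject_failure(target: str, mode: str) -> None:
    """Hypothetical injector hook: in a real deployment this would, e.g.,
    kill a controller process or drop its cluster-synchronization traffic."""
    time.sleep(random.uniform(0.01, 0.05))  # simulate injection delay

def control_plane_healthy() -> bool:
    """Hypothetical health probe against the controller's northbound API."""
    return random.random() < 0.9  # simulate: most probes eventually succeed

def run_campaign(n_runs: int, probe_interval: float = 0.1,
                 timeout: float = 5.0) -> list[RunResult]:
    results = []
    for _ in range(n_runs):
        mode, target = random.choice(FAILURE_MODES), random.choice(TARGETS)
        start = time.monotonic()
        inject_failure(target, mode)
        # Poll until the control plane reports healthy again or we time out.
        recovered = False
        while time.monotonic() - start < timeout:
            if control_plane_healthy():
                recovered = True
                break
            time.sleep(probe_interval)
        results.append(RunResult(mode, target,
                                 time.monotonic() - start, recovered))
    return results

if __name__ == "__main__":
    outcomes = run_campaign(n_runs=10)
    survived = sum(r.recovered for r in outcomes)
    print(f"{survived}/{len(outcomes)} runs recovered within the timeout")
```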

    Virtual Machine Image Management for Elastic Resource Usage in Grid Computing

    Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions have been developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to protect that data. In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry and was designed with security in mind from the beginning. Virtualization technology is used to separate users, e.g., by placing each user of a system inside a separate virtual machine, which prevents them from accessing other users' data. The use of virtualization in the context of Grid computing was examined early on and found to be a promising approach to counter the security threats that appeared with commercial customers. One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system. In contrast to Cloud computing, which was designed to allow even inexperienced users to execute their computational tasks in the Cloud easily, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations such as the need to install required software on the compute nodes used to execute the computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, enabling the provision of individually tailored virtual machines to each user. The ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system. A second aspect of the presented solution focuses on the elasticity of the system, automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets at runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use these resources on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources can be used to execute a particular job. This is useful when a job processes sensitive data, e.g., data that is not allowed to leave the company. To obtain a comparable function in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.

    In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g., the level of trust or computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mimics the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. This scheduling strategy therefore minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is reduced. Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g., Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.
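    To make the prioritized scheduling described above concrete, here is a minimal Python sketch: each resource set carries metrics such as trust and cost, jobs may restrict which sets are allowed, and the scheduler always tries the highest-priority eligible set first. The class names and the scoring formula are illustrative assumptions, not the actual ICS interfaces.

```python
# Minimal sketch of priority-based scheduling over resource sets, assuming
# a simple score of trust minus cost; the real system may weight metrics
# differently. Requires Python 3.10+ for the union type syntax.
from dataclasses import dataclass

@dataclass
class ResourceSet:
    name: str
    trust: float      # 0.0 (untrusted) .. 1.0 (fully trusted, e.g. local cluster)
    cost: float       # normalized cost per CPU hour, 0.0 .. 1.0
    free_slots: int

@dataclass
class Job:
    name: str
    allowed: set[str] | None = None  # None: any resource set may be used

def priority(rs: ResourceSet) -> float:
    # Illustrative weighting: prefer trusted, cheap resources. A local
    # cluster (high trust, low cost) naturally outranks remote Cloud sets.
    return rs.trust - rs.cost

def schedule(job: Job, sets: list[ResourceSet]) -> ResourceSet | None:
    candidates = [rs for rs in sets
                  if rs.free_slots > 0
                  and (job.allowed is None or rs.name in job.allowed)]
    if not candidates:
        return None  # job waits, or triggers acquisition of Cloud resources
    best = max(candidates, key=priority)
    best.free_slots -= 1
    return best

if __name__ == "__main__":
    local = ResourceSet("local-cluster", trust=1.0, cost=0.1, free_slots=2)
    cloud = ResourceSet("public-cloud", trust=0.4, cost=0.6, free_slots=100)
    print(schedule(Job("open-job"), [local, cloud]).name)  # local-cluster
    # A job restricted to trusted resources never leaves the company:
    print(schedule(Job("sensitive", allowed={"local-cluster"}),
                   [local, cloud]).name)
```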

    Designing and Deploying Internet of Things Applications in the Industry: An Empirical Investigation

    ABSTRACT: The Internet of Things (IoT) aims to bring connectivity to almost every object found in the physical space. It extends connectivity to everyday things and opens up the possibility to monitor, track, connect, and interact with industrial assets more efficiently. In industry today, connected sensor networks monitor logistics movements and manufacturing machines, and help organizations improve their efficiency and reduce costs. However, designing and implementing an IoT network is still a very challenging task. We are witnessing a high level of fragmentation in the IoT landscape; developers regularly complain about the difficulty of integrating the diverse technologies of the various objects found in IoT systems, and about the lack of clear guidelines and/or practices for developing and deploying safe and efficient IoT applications. Therefore, analyzing and understanding the issues related to the development and deployment of the Internet of Things is vitally important to allow the industry to realize its full potential.

    In this thesis, we examine IoT practitioners' discussions on the popular Q&A websites Stack Overflow and Stack Exchange to understand the challenges and issues that they face when developing and deploying different IoT applications. Next, we examine the lack of interoperability among technologies developed for the IoT, study the challenges that their integration poses, and provide guidelines for practitioners interested in connecting IoT networks and devices to develop various services and applications. Since security is central to the success of this technology, we also investigate the different security threats and challenges across the different layers of the architecture of IoT systems and propose countermeasures. Finally, we conduct a series of experiments to understand the advantages and trade-offs of serverful and serverless deployments of IoT applications, in order to provide practitioners with evidence-based guidelines and recommendations on such deployments. The results presented in this thesis represent a first important step towards a deep understanding of these very promising technologies. We believe that our recommendations and suggestions will help practitioners and technology builders improve the quality of IoT software and systems. We also hope that our results can help IoT communities and consortia establish standards and guidelines for the development, maintenance, and evolution of IoT software and systems.
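    As an illustration of the mining methodology described above, the following Python sketch pulls IoT-tagged questions from Stack Overflow through the public Stack Exchange API (v2.3) and counts which tags co-occur with "iot". The endpoint and response fields are the real API; the tag choice and the simple co-occurrence count are illustrative, not the thesis's actual analysis pipeline.

```python
# Fetch IoT-tagged Stack Overflow questions via the Stack Exchange API and
# count co-occurring tags, a rough proxy for the technologies practitioners
# discuss (and struggle to integrate) alongside IoT.
import collections
import requests

API = "https://api.stackexchange.com/2.3/questions"

def fetch_iot_questions(pages: int = 2, page_size: int = 100) -> list[dict]:
    items = []
    for page in range(1, pages + 1):
        resp = requests.get(API, params={
            "site": "stackoverflow",
            "tagged": "iot",
            "sort": "votes",
            "order": "desc",
            "page": page,
            "pagesize": page_size,   # API maximum is 100 per page
        }, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        items.extend(payload["items"])
        if not payload.get("has_more"):
            break
    return items

if __name__ == "__main__":
    questions = fetch_iot_questions()
    co_tags = collections.Counter(
        tag for q in questions for tag in q["tags"] if tag != "iot")
    # Tags that co-occur with "iot" hint at the fragmented technology
    # landscape (e.g. mqtt, arduino, raspberry-pi).
    for tag, count in co_tags.most_common(10):
        print(f"{tag:20s} {count}")
```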

    Functional programming languages in computing clouds: practical and theoretical explorations

    Cloud platforms must integrate three pillars: messaging, coordination of workers, and data. This research investigates whether functional programming languages have any special merit when it comes to the implementation of cloud computing platforms. This thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of functional programming languages in computing clouds. We detail their design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our evaluation is based on criteria derived from a series of comprehensive rationales and specifics that allow the FPL Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require the distribution of the application across the boundaries of (physical or virtual) systems, but not appropriate as a whole for the development of distributed, cloud-based workloads that require communication with remote systems and coordination of decoupled workloads. However, Haskell may qualify as a suitable vehicle in the future, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than applications that insist on strict determinism, which can only be guaranteed on the local system or via slow, blocking communication mechanisms.