
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources required by today's cloud computing workloads. A typical datacenter comprises thousands of servers connected by a large network and is usually managed by a single operator. To provide high-quality access to the variety of applications and services hosted on datacenters and to maximize performance, the datacenter network must be used effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general challenges and objectives of traffic control in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to give readers a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. Finally, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters; these have been receiving increasing attention recently and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
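    The traffic-class mix described above (interactive, deadline-bound, and long-running traffic) can be illustrated with a minimal strict-priority scheduler. This is a hypothetical sketch, not a mechanism from the paper; the class names, flow sizes, and the shortest-flow-first tie-break within a class are all assumptions.

```python
import heapq

# Assumed priority order (lower value = served first); not from the paper.
PRIORITY = {"interactive": 0, "deadline": 1, "background": 2}

def schedule(flows):
    """Return flow names in transmission order under strict priority.

    Each flow is a (name, traffic_class, size_bytes) tuple; within a
    class, smaller flows go first (shortest-flow-first lowers mean
    flow completion time).
    """
    heap = [(PRIORITY[cls], size, name) for name, cls, size in flows]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

flows = [
    ("backup", "background", 10**9),
    ("rpc", "interactive", 10**4),
    ("batch-report", "deadline", 10**6),
]
print(schedule(flows))  # ['rpc', 'batch-report', 'backup']
```

    Real datacenter transports refine this basic idea with preemption, pacing, and deadline-aware variants, but the priority ordering among classes is the common core.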

    Leveraging Cloud-based NFV and SDN Platform Towards Quality-Driven Next-Generation Mobile Networks

    Network virtualization has become a key approach for Network Service Providers (NSPs) to cope with the continually increasing demand for network services. Tightly coupled with their software components, legacy network devices are difficult to upgrade or modify to meet dynamically changing end-user needs. To virtualize their infrastructure and mitigate these challenges, NSPs have started to adopt Software Defined Networking (SDN) and Network Function Virtualization (NFV). To this end, this thesis addresses the challenges faced in transforming legacy networking infrastructure into a more dynamic and agile virtualized environment, both to meet the rapidly increasing demand for network services and to enable key emerging technologies such as the Internet of Things (IoT) and 5G networking. The thesis considers different approaches and platforms for serving NFV/SDN-based cloud applications, while closely examining how such an environment deploys its virtualized services to optimize the network and reduce costs. The thesis first defines standards for adopting microservices as an architecture for NFV. It then focuses on a latency-aware approach for deploying virtual network functions (VNFs) that form service function chains (SFCs) in a cloud environment. This approach ensures that NSPs still meet their strict quality-of-service and service-level agreements while considering both functional and non-functional constraints of NFV-based applications, such as delay, resource allocation, and the intercorrelation between VNF instances. In addition, the thesis proposes a detailed approach for recovering and handling those instances by optimizing the decision to migrate or re-instantiate the virtualized services upon a sudden event (failure/overload…). All the proposed approaches contribute to the orchestration of NFV applications to meet the requirements of the IoT and NGN era.
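    Latency-aware SFC placement, as summarized above, can be sketched as a greedy heuristic: place each VNF of the chain on the feasible server that adds the least inter-hop latency, while respecting CPU capacity and an end-to-end delay budget. This is an illustrative sketch only; the server names, capacities, latencies, SLA budget, and the greedy strategy itself are assumptions, not the thesis's algorithm.

```python
def place_chain(chain, servers, latency, sla_budget):
    """Greedily place an SFC on servers within a delay budget.

    chain:   list of (vnf_name, cpu_demand) in chain order
    servers: {server_name: free_cpu}, mutated as capacity is consumed
    latency: {(src, dst): milliseconds between distinct servers}
    Returns {vnf_name: server_name} or None if infeasible.
    """
    placement, total_delay, prev = {}, 0.0, None
    for vnf, cpu in chain:
        candidates = [s for s, free in servers.items() if free >= cpu]
        if not candidates:
            return None  # no server has enough CPU left

        # Hop cost from the previous VNF's host; 0 for the first VNF
        # or when co-located on the same server.
        def hop(s):
            return 0.0 if prev is None or s == prev else latency[(prev, s)]

        best = min(candidates, key=hop)
        total_delay += hop(best)
        if total_delay > sla_budget:
            return None  # end-to-end delay would violate the SLA
        servers[best] -= cpu
        placement[vnf] = best
        prev = best
    return placement

servers = {"edge": 4, "core": 16}
latency = {("edge", "core"): 5.0, ("core", "edge"): 5.0}
chain = [("firewall", 2), ("nat", 2), ("dpi", 8)]
print(place_chain(chain, servers, latency, sla_budget=10.0))
# {'firewall': 'edge', 'nat': 'edge', 'dpi': 'core'}
```

    The greedy choice favors co-location (zero hop cost), which mirrors the intercorrelation constraint mentioned in the abstract: where VNF instances are placed relative to each other directly determines the chain's delay.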

    Service selection with qos correlations in distributed service-based systems

    © 2013 IEEE. Service selection is an important research problem in distributed service-based systems: it aims to select appropriate services to meet user requirements. A number of service selection approaches have been proposed in recent years. Most of them, however, overlook quality-of-service (QoS) correlations, which broadly exist in distributed service-based systems. The concept of QoS correlations involves two aspects: 1) QoS correlations among services and 2) QoS correlations of user requirements. The first aspect means that some QoS attributes of a service depend not only on the service itself but also correlate with other services, e.g., buying service 1 and then getting service 2 at half price. The second aspect concerns the relationships among QoS attributes within user requirements, e.g., a user may accept a service with fast response time and high cost, or a service with slow response time and low cost (Fig. 1). These correlations significantly affect users' selection of services. Currently, only a few existing approaches consider QoS correlations among services (the first aspect), and they still overlook QoS correlations of user requirements (the second aspect), which are also very important in distributed service-based systems. In this paper, a novel service selection approach is proposed that considers both QoS correlations of services and QoS correlations of user requirements. To the best of our knowledge, this approach is the first to consider QoS correlations of user requirements. It is also decentralized, avoiding a single point of failure. Experimental results demonstrate the effectiveness of the proposed approach.
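    The two correlation types can be made concrete with a toy exhaustive selection. This is a hypothetical sketch, not the paper's (decentralized) algorithm: the service names, QoS numbers, the half-price discount rule, and the acceptance predicate are all invented to mirror the abstract's examples.

```python
from itertools import product

# Two abstract tasks, each with two candidate concrete services.
services = {
    "task_a": {"s1": {"rt": 2, "cost": 8}, "s2fast": {"rt": 1, "cost": 8}},
    "task_b": {"s2": {"rt": 3, "cost": 6}, "s3": {"rt": 1, "cost": 12}},
}
# QoS correlation among services: choosing s1 halves s2's price.
discounts = {("s1", "s2"): 0.5}

def acceptable(rt, cost):
    # QoS correlation of user requirements: the user accepts either a
    # fast-but-pricier composition or a slower-but-cheap one.
    return (rt <= 3 and cost <= 20) or (rt <= 6 and cost <= 12)

def select(services, discounts):
    """Enumerate compositions, apply cross-service discounts, and return
    the cheapest acceptable one as (sorted service names, total cost)."""
    best = None
    for combo in product(*(t.items() for t in services.values())):
        names = {n for n, _ in combo}
        rt = sum(q["rt"] for _, q in combo)
        cost = sum(
            q["cost"] * min([f for (a, b), f in discounts.items()
                             if b == n and a in names] + [1.0])
            for n, q in combo
        )
        if acceptable(rt, cost) and (best is None or cost < best[1]):
            best = (tuple(sorted(names)), cost)
    return best

print(select(services, discounts))  # (('s1', 's2'), 11.0)
```

    Note that without the discount the (s1, s2) composition costs 14 and fails the user's slow-and-cheap criterion; the cross-service correlation is what makes it the winner, which is exactly why ignoring such correlations leads to suboptimal selections.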

    Identifying Requirements in Microservice Architectural Systems

    Microservices and microservice architecture have grown steadily in popularity and interest since 2014, but many challenges still arise when a software project tries to adopt the concept. This work gathers challenges, possible solutions, and requirements related to the use of microservice architecture, thereby supporting the work of the different stakeholders in a software project using microservice architecture while also contributing to research. The study was conducted as a systematic literature review (SLR). Overall, 63 scientific publications from four scientific databases were selected and analysed. As a result, rapid evolution, life-cycle management, complexity, performance, and a large number of integrations were identified as the most common challenges of microservice architecture. Solutions such as service orchestration, fog computing, decentralized data, and the use of patterns were proposed to tackle these challenges. Regarding requirements, scalability, efficiency, flexibility, loose coupling, performance, and security appeared most frequently in the literature. The key finding of this work was the importance of data: data underpins functionality, and when it is inaccurate it can cause complex challenges and render that functionality worthless. Based on this, we have a better understanding of what challenges may occur and what to focus on when working with microservice architecture in software development.

    Risk-based maintenance of critical and complex systems

    Dean's honour list (Tableau d'honneur) of the Faculté des études supérieures et postdoctorales, 2016-2017. Today, most systems in various critical sectors such as aviation, oil, and health care have become very complex and dynamic, and can consequently stop working at any time. To prevent failures from recurring and getting out of control, which would incur huge losses in cost and downtime, the adoption of control and maintenance strategies is more than necessary, even vital. In process engineering, optimal maintenance strategies for these systems can have a significant impact on reducing costs and downtime, maximizing reliability and productivity, improving quality, and ultimately achieving companies' objectives.
In addition, the risks and uncertainties associated with these systems often comprise several extremely complex cause-and-effect relationships, which can increase the number of failures. Therefore, an advanced failure-analysis tool is needed that considers the complex interactions of component failures across the phases of the product life cycle, to ensure high levels of safety and reliability. In this thesis, we address the shortcomings of current failure/risk analysis and maintenance-policy selection methods in the literature. We then develop comprehensive risk-based approaches to maintenance and failure analysis of complex systems and equipment that are applicable across industries. The research conducted for this thesis has resulted in twelve important contributions, as follows. In the first contribution, we address the shortcomings of current methods for selecting the optimal maintenance strategy and develop an integrated risk-based framework using Analytical Hierarchy Process (AHP), Fuzzy Cognitive Maps (FCM), and Fuzzy Soft Set (FSS) tools to select the best maintenance policy under uncertainty. The second contribution addresses the shortcomings of the traditional failure mode and effects analysis (FMEA) method and enhances it with an FCM-based FMEA model. Contributions 3 and 4 present two dynamic risk-modeling and assessment tools that use FCM to deal with the risks of maintenance outsourcing and collaborative networks. We then extend the developed tools and propose an advanced decision-support tool that uses FCM to predict the impact of each risk on the other risks or on system performance (contribution 5). In the sixth contribution, we address the risks associated with Enterprise Resource Planning (ERP) maintenance and propose another integrated approach, based on the fuzzy FMEA method, for prioritizing those risks.
In contributions 7, 8, 9, and 10, we review the literature on risk-based maintenance of medical devices, since these devices have become very complex and sophisticated and the application of maintenance and optimization models to them is fairly new. We then develop three integrated frameworks for risk-based maintenance and replacement planning of medical devices. Beyond the above contributions, as a case study we carried out a project titled “Updating Clinical Practice Guidelines: a priority-based framework for updating existing guidelines” at the Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS) in Québec, which led to two further contributions. In these two contributions (the 11th and 12th), we first performed a systematic literature review to identify potential criteria for updating CPGs, then validated and weighted the identified criteria through an international survey. Based on the results of the eleventh contribution, we developed a comprehensive priority-based framework for updating CPGs, building on the approaches we had already developed and applied successfully in other industries. This is the first time such a quantitative method has been proposed in the clinical-practice-guideline literature. Evaluating and prioritizing existing CPGs against the validated criteria can help channel limited resources into updating the CPGs most sensitive to change, improving the quality and reliability of healthcare decisions made on the basis of current CPGs. Keywords: Risk-based maintenance, Maintenance strategy selection, FMEA, FCM, Medical devices, Clinical practice guidelines
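    The FMEA method that several of these contributions build on can be illustrated by its textbook baseline: ranking failure modes by Risk Priority Number, RPN = Severity × Occurrence × Detection, each rated 1-10. This sketch shows only that classical baseline with invented example data; the thesis's actual contributions are fuzzy and FCM-based extensions of it, which are not reproduced here.

```python
# Each entry: (failure mode, severity, occurrence, detection), all 1-10.
# Higher detection score = harder to detect before failure.
failure_modes = [
    ("pump seal leak", 7, 5, 4),
    ("sensor drift",   4, 6, 8),
    ("bearing wear",   8, 3, 5),
]

def prioritize(modes):
    """Rank failure modes by descending RPN = S * O * D."""
    ranked = [(s * o * d, name) for name, s, o, d in modes]
    ranked.sort(reverse=True)
    return ranked

for rpn, name in prioritize(failure_modes):
    print(f"RPN {rpn:3d}  {name}")
# RPN 192  sensor drift
# RPN 140  pump seal leak
# RPN 120  bearing wear
```

    A known weakness of crisp RPNs, and a motivation for fuzzy variants like those in the thesis, is that very different (S, O, D) combinations can yield identical products while warranting different maintenance responses.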

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable; they can easily increase attack power and persist longer before detection. Uncertain malicious actions and latent, unobserved, or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address the unpredictable security challenges borne of UUURs. While survivability is a well-studied phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging because the end goals contradict; managing evolving survivability goals and requirements under contradicting environmental conditions adds to the difficulty. To address these challenges, this thesis proposes a holistic taxonomy that integrates multiple, disparate perspectives on cloud security challenges. It also applies TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a three-step process to facilitate the interdomain transfer of concepts from nature to the cloud; TRIZ's generic approach then suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed into user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improves the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improves their overall survivability.
Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numerical analysis of TBDM shows that, by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to the decision processes. This enables efficient execution of variable escalating survivability actions, allowing Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.

    Next-Generation Self-Organizing Networks through a Machine Learning Approach

    Doctoral thesis defense date: 17 December 2018. To reduce the management costs of cellular networks, which have grown ever more complex over time, the concept of self-organizing networks (SON) emerged: the automation of a cellular network's management tasks to lower both infrastructure (CAPEX) and operating (OPEX) costs. SON tasks fall into three categories: self-configuration, self-optimization, and self-healing. The goal of this thesis is to improve SON functions through the development and use of machine learning (ML) tools for network management. On the one hand, self-healing is addressed through a novel tool for automatic diagnosis (root cause analysis, RCA), consisting of the combination of multiple independent RCA systems into an improved composite RCA system. In addition, to increase the accuracy of RCA tools while reducing both CAPEX and OPEX, this thesis proposes and evaluates ML dimensionality-reduction tools in combination with RCA tools. On the other hand, the thesis studies multi-link functionalities within self-optimization and proposes techniques for their automatic management. In the field of enhanced mobile broadband communications, a tool for radio carrier management is proposed that allows operator policies to be implemented, while in the field of low-latency vehicular communications, a multipath mechanism is proposed for redirecting traffic across multiple radio interfaces. Many of the methods proposed in this thesis have been evaluated using data from real cellular networks, demonstrating their validity in realistic environments as well as their suitability for deployment in current and future mobile networks.
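    The composite-RCA idea described above, combining multiple independent diagnosis systems into an improved one, can be sketched in its simplest form as a majority vote over the individual diagnoses. This is a hypothetical illustration, not the thesis's actual combiner; the fault labels and example votes are invented.

```python
from collections import Counter

def combined_rca(diagnoses):
    """Fuse the outputs of independent RCA systems by majority vote.

    diagnoses: list of fault labels, one per RCA system.
    Counter.most_common breaks ties by first-seen order.
    """
    return Counter(diagnoses).most_common(1)[0][0]

# Three toy RCA systems examining the same degraded cell: two diagnose
# a coverage hole, one diagnoses interference.
votes = ["coverage_hole", "interference", "coverage_hole"]
print(combined_rca(votes))  # coverage_hole
```

    A practical composite system would weight each RCA system by its historical accuracy rather than voting uniformly, but the fusion principle, letting independent diagnosers correct each other's errors, is the same.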

    The InfoSec Handbook

    Computer science