
    On the Fly Orchestration of Unikernels: Tuning and Performance Evaluation of Virtual Infrastructure Managers

    Network operators are facing significant challenges in meeting the demand for more bandwidth, agile infrastructures, and innovative services, while keeping costs low. Network Functions Virtualization (NFV) and Cloud Computing are emerging as key trends of 5G network architectures, providing flexibility, fast instantiation times, support for Commercial Off The Shelf hardware and significant cost savings. NFV leverages Cloud Computing principles to move the data-plane network functions from expensive, closed and proprietary hardware to the so-called Virtual Network Functions (VNFs). In this paper we deal with the management of virtual computing resources (Unikernels) for the execution of VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the instantiation process of virtual resources and propose a generic reference model, starting from the analysis of three open source VIMs, namely OpenStack, Nomad and OpenVIM. We extend the aforementioned VIMs to support special-purpose Unikernels and to reduce the duration of the instantiation process. We evaluate some performance aspects of the VIMs, considering both stock and tuned versions. The VIM extensions and performance evaluation tools are available under a liberal open source licence.
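    As a rough illustration of the kind of measurement such an evaluation involves, the sketch below times the instantiation of a compute resource through a generic VIM REST API. The endpoint, payload fields and image name are hypothetical placeholders for illustration only; they are not the actual interfaces of OpenStack, Nomad or OpenVIM, each of which exposes its own instantiation API.

```python
import time
import requests

VIM_API = "http://vim.example.local:8080"  # hypothetical VIM endpoint


def instantiate_and_time(image: str, flavor: str) -> float:
    """Request a new (unikernel) instance and measure the wall-clock
    time until the VIM reports it as running."""
    start = time.monotonic()
    # Hypothetical instantiation request; a real VIM would expose its
    # own resource-creation call for this step.
    resp = requests.post(f"{VIM_API}/instances",
                         json={"image": image, "flavor": flavor},
                         timeout=10)
    resp.raise_for_status()
    instance_id = resp.json()["id"]

    # Poll until the instance is reported as running.
    while True:
        state = requests.get(f"{VIM_API}/instances/{instance_id}",
                             timeout=10).json()["state"]
        if state == "running":
            break
        time.sleep(0.1)
    return time.monotonic() - start


if __name__ == "__main__":
    elapsed = instantiate_and_time("clickos-unikernel", "tiny")
    print(f"Instantiation took {elapsed:.2f} s")
```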

    Improving the Research Environment of High Performance Computing for Non-Cluster Experts Based on Knoppix Instant Computing Technology

    We have designed and implemented a new portable system that can rapidly construct a computer environment where high-throughput research applications can be performed instantly. One challenge in the instant computing area is constructing a cluster system instantly and then readily restoring it to its former state. This paper presents an approach for instant computing using Knoppix technology that allows even a non-computer specialist to easily construct and operate a Beowulf cluster. In the present bio-research field, there is an urgent need to address the problem posed by the demand for high-performance computing. Therefore, we were assigned the task of proposing a way to build an environment where a cluster computer system can be instantly set up. Through such research, we believe that the technology can be expected to accelerate scientific research. However, when employing this technology in bio-research, a capacity barrier exists when selecting a clustered Knoppix system for a data-driven bioinformatics application. We have approached ways to overcome this barrier by using a virtual integrated RAM-DISK to adapt to a parallel file system. As an actual example using a reference application, we have chosen InterProScan, an integrated application provided by the European Bioinformatics Institute (EBI) that utilizes many databases and scan methods. InterProScan is capable of scaling its workload with local computational resources, though biology researchers and even bioinformatics researchers find such extensions difficult to set up. We have achieved the goal of allowing even researchers who are not cluster experts to easily build a system of "Knoppix for the InterProScan4.1 High Throughput Computing Edition." The system we developed is capable of not only constructing a cluster computer environment composed of 32 computers in about ten minutes (as opposed to six hours when done manually), but also restoring the original environment by rebooting the pre-existing operating system. The goal of our instant cluster computing is to provide an environment in which any target application can be built instantly from anywhere.
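    To give a flavour of how a data-driven workload such as InterProScan can be spread across a cluster of this size, the sketch below splits a multi-sequence FASTA input into per-node chunks. The file paths, node count and chunking strategy are assumptions for illustration; the Knoppix edition described above automates the actual distribution differently.

```python
from pathlib import Path


def split_fasta(src: Path, out_dir: Path, n_nodes: int) -> list[Path]:
    """Split a multi-sequence FASTA file into n_nodes chunk files,
    assigning whole sequences round-robin so that each cluster node
    can run InterProScan on its own chunk."""
    out_dir.mkdir(parents=True, exist_ok=True)
    chunks = [out_dir / f"chunk_{i:02d}.fasta" for i in range(n_nodes)]
    handles = [c.open("w") for c in chunks]
    try:
        idx = -1
        for line in src.open():
            if line.startswith(">"):  # new sequence header starts a record
                idx = (idx + 1) % n_nodes
            if idx >= 0:
                handles[idx].write(line)
    finally:
        for h in handles:
            h.close()
    return chunks


if __name__ == "__main__":
    # Hypothetical paths; a 32-node cluster as in the paper.
    parts = split_fasta(Path("proteins.fasta"), Path("chunks"), 32)
    print(f"Wrote {len(parts)} chunks for per-node InterProScan runs")
```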

    Easy management and user interconnection across Grid sites

    Master's dissertation in Informatics Engineering. Distributed computing systems are undoubtedly a powerful resource, providing capabilities that no other system can. However, their inherent complexity can lead many users and institutions to dismiss these systems when faced with the challenges posed by deployment and administration tasks. The first solution to this problem is the European Grid Initiative (EGI) roll, a tool that simplifies and streamlines those tasks by extending the tools currently available for cluster administration to the grid. It allows the infrastructure to be easily scaled and adopted by the institutions involved in grid projects such as EGI. The second part of this work consists of a platform that enables the interconnection of computing assets from multiple sources to create a unified pool of resources. It addresses the challenge of building a global computing infrastructure by providing a communication overlay able to deal with computing facilities located behind NAT devices. The integration of these two tools results in a solution that not only scales the infrastructure by simplifying deployment and administration, but also enables the interconnection of those resources.

    A Reference Architecture for Management of Security Operations in Digital Service Chains

    Modern computing paradigms (i.e., cloud, edge, Internet of Things) and ubiquitous connectivity have brought the notion of pervasive computing to an unforeseeable level, which boosts service-oriented architectures and microservices patterns to create digital services with data-centric models. However, the resulting agility in service creation and management has not been followed by a similar evolution in cybersecurity patterns, which still largely rest on more conventional device- and infrastructure-centric models. In this chapter, we describe the implementation of the GUARD Platform, which represents the core element of a modern cybersecurity framework for building detection and analytics services for complex digital service chains. We briefly review the logical components and how they address the scientific and technological challenges behind the limitations of existing cybersecurity tools. We also provide a validation and performance analysis that shows the feasibility and efficiency of our implementation.

    Unknown Threat Detection With Honeypot Ensemble Analysis Using a Big Data Security Architecture

    The amount of data being generated continues to grow rapidly in size and complexity. Frameworks such as Apache Hadoop and Apache Spark are evolving at a rapid rate as organizations build data-driven applications to gain competitive advantages. Data analytics frameworks decompose our problems to build applications that go beyond inference and can help make predictions as well as prescriptions in real time instead of in batch processes. Information security is becoming more important to organizations as the Internet and cloud technologies become more integrated with their internal processes. The number of attacks and attack vectors has been increasing steadily over the years. Border defense measures (e.g. Intrusion Detection Systems) are no longer enough to identify and stop attackers. Data-driven information security is not a new approach; however, there is an increased emphasis on combining heterogeneous sources to gain a broader view of the problem instead of relying on isolated systems. Stitching together multiple alerts into a cohesive system can increase the number of True Positives. With the increased concern over unknown insider threats and zero-day attacks, identifying unknown attack vectors becomes more difficult. Previous research has shown that with as few as 10 commands it is possible to identify a masquerade attack against a user's profile. This thesis looks at a data-driven information security architecture that relies on both behavioral analysis of SSH profiles and bad-actor data collected from an SSH honeypot to identify bad-actor attack vectors. Honeypots should collect data only from bad actors and therefore have a high True Positive rate. Using Apache Spark and Apache Hadoop, we can create a real-time data-driven architecture that collects and analyzes new bad-actor behaviors from honeypot data and monitors legitimate user accounts to create predictive and prescriptive models. Previously unidentified attack vectors can be cataloged for review.
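    A minimal sketch of the Spark side of such a pipeline is shown below: it loads honeypot SSH logs, compares the commands seen there against a baseline of commands issued by legitimate users, and flags source IPs with many previously unseen commands. The log schema, HDFS paths and the 10-command threshold are assumptions for illustration, echoing the masquerade-detection result cited above rather than the thesis' actual models.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("honeypot-ensemble").getOrCreate()

# Hypothetical inputs: one JSON record per command, with src_ip and command fields.
honeypot = spark.read.json("hdfs:///security/honeypot/ssh/*.json")
baseline = spark.read.json("hdfs:///security/legit/ssh/*.json")

# Commands observed from legitimate users form the baseline vocabulary.
known_cmds = baseline.select("command").distinct()

# Commands seen only on the honeypot are candidate bad-actor behaviors.
novel = (honeypot.join(known_cmds, on="command", how="left_anti")
                 .groupBy("src_ip")
                 .agg(F.count("*").alias("novel_cmds")))

# Flag sources issuing at least 10 unknown commands (assumed threshold).
suspects = novel.filter(F.col("novel_cmds") >= 10)
suspects.write.mode("overwrite").json("hdfs:///security/alerts/suspect_ips")
```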

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Software-Defined Networking (SDN) is an evolutionary networking paradigm which has been adopted by large network and cloud providers, among which are the Tech Giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires a lot of time along with considerable financial resources and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations. Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey which examines hSDN from many different perspectives.

    Live-Migration in Cloud Computing Environment

    Global IP traffic has increased fivefold over the past five years, and will continue increasing threefold over the next five years. Overall IP traffic will grow at a compound annual growth rate (CAGR) of nearly 3.9-fold from 2013 to 2018. Service Providers are experiencing this exponential growth of IP traffic, which comes from the enormous increase in the number of devices and users connected to the Internet, along with their demands for various resources and network services such as multimedia content distribution, security and mobility. Therefore, Service Providers are finding it difficult to introduce new revenue-generating services and to optimize and adapt their expensive infrastructures, data centers, wide-area networks and enterprise networks (COMpuTIN, 2015). The networks continue to have serious known problems, such as agility, manageability, mobility and time-to-application, that have not been successfully addressed so far. Thus, novel Network Function Virtualization (NFV) models and Software-Defined Networking (SDN) technologies have been proposed to solve the non-optimal capital and operational expenditures and the networks' limitations (Lopez, 2014, Hakiri and Berthou, 2015).
    In order to solve these issues, the European Telecommunications Standards Institute (ETSI) and other standards organizations are proposing new network architecture approaches. According to ETSI, Network Functions Virtualization is a powerful emerging technique with widespread applicability, aiming to transform the way that network operators design networks by evolving standard IT virtualization technology to consolidate many network equipment types: high volume servers, routers, switches and storage (Xilouris et al., 2014). In this thesis, current Software-Defined Networking (SDN) and Network Function Virtualization (NFV) solutions were used to build a use case that addresses the growth of network traffic beyond the network's maximum capacity. To develop and evaluate the solution, the OpenStack cloud computing platform was installed in order to deploy, manage and test a Live-Migration use case.
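    As an illustration of how such a live-migration test might be driven programmatically against OpenStack, the sketch below uses the openstacksdk compute proxy. The cloud name, server name and migration options are hypothetical, admin credentials are assumed, and the thesis' actual evaluation setup may differ; this is a minimal sketch, not the dissertation's tooling.

```python
import openstack

# Credentials are read from clouds.yaml; "mycloud" is a hypothetical entry.
conn = openstack.connect(cloud="mycloud")

# Hypothetical instance to migrate between compute hosts.
server = conn.compute.find_server("vnf-under-test", ignore_missing=False)
print(f"Server {server.name} currently on {server.hypervisor_hostname}")

# Trigger a block live migration; with host=None the scheduler picks the
# destination compute node.
conn.compute.live_migrate_server(server, host=None, block_migration=True)

# Wait until the instance is ACTIVE again, then report its new host.
server = conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
server = conn.compute.get_server(server.id)
print(f"Server {server.name} now on {server.hypervisor_hostname}")
```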