
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualization of the resources within a network and at processing nodes. The virtualization of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualization or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualization of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with respect to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced in parallel with the user consultation carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
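    To make the slice concept above concrete, here is a minimal illustrative sketch in Python, assuming hypothetical names (PhysicalResource, Slice, add); it is not FEDERICA's actual data model, which the deliverable itself defines.

```python
# Illustrative sketch only: a slice as a logical partition of physical
# resources, with non-overlapping shares standing in for isolation.
from dataclasses import dataclass, field

@dataclass
class PhysicalResource:
    name: str          # e.g. a switch port or a VM host (hypothetical)
    capacity: int      # abstract capacity units
    allocated: int = 0

@dataclass
class Slice:
    """A user's virtual network: a logical partition of physical resources."""
    owner: str
    resources: dict = field(default_factory=dict)  # virtual name -> (resource, share)

    def add(self, vname: str, res: PhysicalResource, share: int) -> None:
        if res.allocated + share > res.capacity:
            raise ValueError(f"{res.name} cannot host a further share of {share}")
        res.allocated += share          # shares never overlap between slices
        self.resources[vname] = (res, share)

host = PhysicalResource("vm-host-1", capacity=100)
s = Slice(owner="researcher-a")
s.add("virtual-router-1", host, share=25)  # the user sees only the virtual name
```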

    ONLINE MONITORING USING KISMET

    Colleges and universities currently use online exams for student evaluation. Students can take assigned exams using their laptop computers and email their results to their instructor; this process makes testing more efficient and convenient for both students and faculty. However, taking exams while connected to the Internet opens many opportunities for plagiarism and cheating. In this project, we design, implement, and test a tool that instructors can use to monitor the online activity of students during an in-class online examination. This tool uses a wireless sniffer, Kismet, to capture and classify packets in real time. If a student attempts to access a site that is not allowed, the instructor is notified via an Android application or over the Internet. Identifying a student who is cheating is challenging, since many applications send packets without user intervention. We provide experimental results from realistic test environments to illustrate the success of our proposed approach.
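    As a rough illustration of the capture-and-classify loop described above, the following sketch uses Scapy as a stand-in for the Kismet capture backend (the project itself uses Kismet); ALLOWED_HOSTS and notify() are hypothetical placeholders.

```python
# Sketch, assuming Scapy in place of Kismet's capture layer; requires root.
from scapy.all import sniff, IP, DNSQR

ALLOWED_HOSTS = {b"exam.example.edu."}  # hypothetical exam allowlist

def notify(student_ip: str, host: bytes) -> None:
    # In the project, this step triggers the Android/Internet notification.
    print(f"ALERT: {student_ip} queried disallowed host {host.decode()}")

def classify(pkt) -> None:
    # DNS queries reveal intended destinations before any TCP connection.
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP):
        qname = pkt[DNSQR].qname
        if qname not in ALLOWED_HOSTS:
            notify(pkt[IP].src, qname)

# Capture DNS traffic and classify each packet as it arrives.
sniff(filter="udp port 53", prn=classify, store=0)
```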

    A demonstration of VEREFOO: an automated framework for virtual firewall configuration

    Nowadays, security automation exploits the agility characterizing network virtualization to replace traditional, error-prone human operations. This dynamism allows user-specified high-level intents to be rapidly refined into the concrete configuration rules that should be deployed on virtual security functions. In this context, this paper demonstrates a novel security framework based on an optimized approach for the automatic orchestration of virtual distributed firewalls. The framework provides formal guarantees of firewall configuration correctness and minimizes the size of the firewall allocation scheme and rule set. The framework produces rules that can be deployed on multiple types of real virtual function implementations, such as iptables, eBPF firewalls, and Open vSwitch.
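    As a hint of what the refinement from high-level intent to concrete rule can look like, here is a minimal sketch assuming a hypothetical Intent type and to_iptables() helper; VEREFOO's real intent language and rule generator are defined by the framework itself.

```python
# Illustrative sketch: one "allow" intent refined into one iptables rule.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str      # "allow" or "deny" (hypothetical intent vocabulary)
    src: str         # e.g. "10.0.1.0/24"
    dst: str         # e.g. "10.0.2.5"
    dport: int       # destination TCP port

def to_iptables(i: Intent) -> str:
    target = "ACCEPT" if i.action == "allow" else "DROP"
    return (f"iptables -A FORWARD -p tcp -s {i.src} -d {i.dst} "
            f"--dport {i.dport} -j {target}")

print(to_iptables(Intent("allow", "10.0.1.0/24", "10.0.2.5", 443)))
# iptables -A FORWARD -p tcp -s 10.0.1.0/24 -d 10.0.2.5 --dport 443 -j ACCEPT
```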

    Automated Pattern-Based Service Deployment in Programmable Networks

    This paper presents a flexible service deployment architecture for the automated, on-demand deployment of distributed services in programmable networks. The novelty of our approach is (a) the customization of the deployment protocol by utilizing modular building blocks, namely navigation patterns, aggregation patterns, and capability functions, and (b) the definition of a corresponding service descriptor. A customizable deployment protocol has several important advantages: it supports a multitude of services, and it allows for ad hoc optimization of the protocol according to the specific needs of a service and the current network conditions. Moreover, our architecture provides an environment for studying new patterns which aim at reducing deployment latency and bandwidth for certain services. We demonstrate how the developed architecture can be used to set up a virtual private network, and we present measurements conducted with our prototype in the PlanetLab test network. Furthermore, a comparison of a distributed pattern with a centralized pattern illustrates the performance trade-off for different deployment strategies.
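    The following sketch illustrates what such a service descriptor might contain, pairing a navigation pattern and an aggregation pattern with a capability function; all field names are hypothetical, not the paper's actual schema.

```python
# Illustrative sketch of a service descriptor for an on-demand VPN deployment.
service_descriptor = {
    "service": "virtual-private-network",
    "navigation_pattern": "distributed",      # how deployment requests spread
    "aggregation_pattern": "merge-replies",   # how node responses are combined
    "capability_function": "has_ipsec",       # predicate a node must satisfy
    "artifacts": ["vpn-gateway.tar.gz"],      # code to install on chosen nodes
}

def eligible(node_caps: set, descriptor: dict) -> bool:
    # A node qualifies if it offers the required capability.
    return descriptor["capability_function"].removeprefix("has_") in node_caps

print(eligible({"ipsec", "nat"}, service_descriptor))  # True
```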

    June-August 2005


    Junos Pulse Secure Access Service Administration Guide

    This guide describes basic configuration procedures for the Juniper Networks Secure Access Service. This document was formerly titled Secure Access Administration Guide and is now part of the Junos Pulse documentation set. This guide is designed for network administrators who are configuring and maintaining a Juniper Networks Secure Access Service device. To use this guide, you need a broad understanding of networks in general and the Internet in particular, networking principles, and network configuration. Any detailed discussion of these concepts is beyond the scope of this guide. The Juniper Networks Secure Access Service enables you to give employees, partners, and customers secure and controlled access to your corporate data and applications, including file servers, Web servers, native messaging and e-mail clients, hosted servers, and more, from outside your trusted network using just a Web browser. Secure Access Service provides robust security by intermediating the data that flows between external users and your company's internal resources. Users gain authenticated access to authorized resources through an extranet session hosted by the appliance. During intermediation, Secure Access Service receives secure requests from the external, authenticated users and then makes requests to the internal resources on behalf of those users. By intermediating content in this way, Secure Access Service eliminates the need to deploy extranet toolkits in a traditional DMZ or provision a remote access VPN for employees. To access the intuitive Secure Access Service home page, your employees, partners, and customers need only a Web browser that supports SSL and an Internet connection. This page provides the window from which your users can securely browse Web or file servers, use HTML-enabled enterprise applications, start the client/server application proxy, begin a Windows, Citrix, or Telnet/SSH terminal session, access corporate e-mail servers, start a secured layer 3 tunnel, or schedule or attend a secure online meeting.
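    The intermediation idea can be illustrated with a toy reverse proxy; the sketch below assumes a hypothetical internal host name and omits the authentication, SSL, and session handling that the real appliance provides.

```python
# Toy intermediary: receive an external request, replay it against an
# internal resource, and relay the answer back. Host name is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

INTERNAL_BASE = "http://intranet.example.local"  # hypothetical internal resource

class Intermediary(BaseHTTPRequestHandler):
    def do_GET(self):
        # Make the request to the internal resource on the user's behalf.
        with urllib.request.urlopen(INTERNAL_BASE + self.path) as upstream:
            body = upstream.read()
            code = upstream.getcode()
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Intermediary).serve_forever()
```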

    Federation of Cyber Ranges

    An essential element of cyber defence capability is highly skilled and well-trained personnel. The awareness and skills of technicians, operators and decision makers can be enhanced through multinational exercises. It is unthinkable to use an operational production environment to train attack and defence of an IT system; to simulate a lifelike environment, a cyber range can be used. There are many emerging and operational cyber ranges in the EU and NATO. To benefit more from the available resources, a federated cyber range environment for multinational cyber defence exercises can be built upon the current facilities. Federation can be achieved only after agreements between nations and an understanding of the technologies and limitations of the different national ranges. This study compares two cyber ranges and looks into possibilities for pooling and sharing national facilities and for establishing a logical federation of interconnected cyber ranges. The thesis gives recommendations on information flow, proof of concept, guidelines and prerequisites to achieve an initial interconnection with pooling and sharing capabilities. Different technologies and operational aspects are discussed and their impact is analysed. To better understand the concepts and assumptions of federation, a test environment spanning the Estonian and Czech national cyber ranges was created. Different aspects of network parameters, virtual machine manipulation, virtualization technologies and open source administration tools were tested. The tests produced some surprising and positive outcomes, making logical federation technologically easier and more achievable than expected. The thesis is in English and contains 42 pages of text, 7 chapters, 12 figures and 4 tables. Keywords: cyber range, NATO, federation, virtualization, multinational cyber defence exercise
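    One of the simplest interconnection checks mentioned above, measuring TCP round-trip time between two range endpoints, could look like the following sketch; the host and port are hypothetical placeholders, not the actual Estonian or Czech lab addresses.

```python
# Sketch: estimate RTT between federated ranges from TCP connect times.
import socket
import statistics
import time

REMOTE = ("range.example.org", 443)  # hypothetical federated-range endpoint

def rtt_ms(samples: int = 5) -> list:
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection(REMOTE, timeout=3):
            pass  # connection established = one TCP handshake round trip
        times.append((time.perf_counter() - start) * 1000)
    return times

t = rtt_ms()
print(f"median RTT {statistics.median(t):.1f} ms over {len(t)} connects")
```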

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advance reservation systems are used. In this scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
    Funded by the European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567 and by the Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project S&NSEC under Grant TEC2013-47960-C4-3-
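    As an illustration of the traffic-splitting technique mentioned above (not taken from the survey itself), the sketch below hashes flows onto multiple paths in proportion to configured weights, keeping each flow on a single path to avoid packet reordering; the path names and weights are invented.

```python
# Weighted hash-based flow splitting across multiple paths.
import hashlib

PATHS = [("path-A", 3), ("path-B", 1)]  # hypothetical 3:1 split

def pick_path(flow_id: str) -> str:
    # Hashing keeps all packets of one flow on one path (no reordering).
    h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    slot = h % sum(w for _, w in PATHS)
    for name, w in PATHS:
        if slot < w:
            return name
        slot -= w

flows = [f"10.0.0.{i}:5000->10.1.0.1:80" for i in range(100)]
counts = {}
for f in flows:
    p = pick_path(f)
    counts[p] = counts.get(p, 0) + 1
print(counts)  # roughly 3:1 between path-A and path-B
```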

    Dinamización de cargas de trabajo HPC/HTC: conciliando modelos “on-premise” y “cloud computing”

    ABSTRACT: Cloud computing has grown tremendously in the last decade, evolving from a mere technological concept into a full business model. Entities such as companies or research groups that need computational power are beginning to consider a full migration of their systems to the cloud. However, following the trend of full migration to the cloud might not be the optimal option; not everything is black and white, and the answer could be found somewhere in between. Although the companies that manage the biggest commercial cloud environments, namely Google, Amazon and Microsoft, are making great efforts in the development and implementation of the so-called hybrid cloud, most of these efforts focus on the creation of software development platforms, as in the case of Azure Stack from Microsoft Azure, which helps to develop hybrid applications that can be executed both locally and in the cloud. Meanwhile, the provisioning of execution environments for HPC/HTC applications seems to be relegated to the background. In part, this could be because there is currently low demand for these environments. This low demand is motivated by many factors, among which it is worth highlighting the need for highly specialised hardware, the overhead introduced by virtualization and, last but not least, the cost usually associated with this kind of customized infrastructure in contrast with more standard configurations. With these limitations in mind, and given that in most cases complete migration to the cloud is constrained by the previous existence of a local infrastructure that provides computing and storage resources, this thesis explores an intermediate path between on-premise (local) and cloud computing. This kind of solution allows an HPC/HTC user to benefit from the cloud model in a transparent way, keeping the familiar on-premise environment while being able to execute jobs in both paradigms. To achieve this, the Hybrid-Infrastructure-as-a-Service Manager (HIaaS-M) framework is created. This framework joins both computing paradigms by automating the interaction between them in a way that is efficient and completely transparent to the user. The framework is especially designed to be integrated into already existing (on-premise) infrastructures; in other words, without the need to change any of the existing software pieces. The framework is standalone software that communicates with the existing systems, minimizing the impact that changes to an entity's base software and/or infrastructure could cause. This document describes the whole development process of this modular and configurable framework, which allows the integration of a previously existing infrastructure with one created in the cloud through a cloud infrastructure provider, adding the option of executing jobs in any of the cloud environments offered by cloud providers thanks to the Apache Libcloud library. The document concludes with a proof of concept carried out on a development cluster (called "cluster2") hosted in the 3MARES Data Processing Center at the Science Faculty of the University of Cantabria. This deployment in a near-real-life environment has made it possible to identify the main advantages of the framework, as well as improvements that could be made, which are expressed in a suggested roadmap for future work.
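    Since the abstract names the Apache Libcloud library, a minimal sketch of the kind of cloud-side provisioning it enables is shown below; the credentials, region and AMI identifier are placeholders, and HIaaS-M's own integration logic is considerably richer than this.

```python
# Sketch: provision one worker node on EC2 through Apache Libcloud.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY_ID", "SECRET_KEY", region="eu-west-1")  # placeholders

sizes = conn.list_sizes()
images = conn.list_images(ex_image_ids=["ami-0abcdef1234567890"])  # placeholder AMI

# The same create_node() call works across providers, which is what lets a
# framework like HIaaS-M stay provider-agnostic.
node = conn.create_node(name="hiaas-worker-1", size=sizes[0], image=images[0])
print(node.id, node.state)
```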