3,736 research outputs found

    The Use of Firewalls in an Academic Environment

    No full text

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    Get PDF
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. The scope of this deliverable is mainly focused on the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements.
It is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced at the same time as the user contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
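The slice abstraction the deliverable describes (a user-visible virtual network mapped onto a logical partition of physical resources) can be illustrated with a minimal sketch. This is not FEDERICA's actual data model; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    name: str            # name the user sees inside the slice
    physical_host: str   # physical infrastructure node it is mapped onto
    kind: str = "router" # e.g. a software router or an end node (VM instance)

@dataclass
class Slice:
    owner: str
    nodes: dict = field(default_factory=dict)

    def add_node(self, node: VirtualNode) -> None:
        self.nodes[node.name] = node

    def mapping(self) -> dict:
        # The user sees a real network; the operator sees this partition
        # of the physical substrate.
        return {n.name: n.physical_host for n in self.nodes.values()}
```

A slice owner would then see only the virtual names, while the operator retains the virtual-to-physical mapping for management.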

    Autonomous space processor for orbital debris

    Get PDF
Advanced designs continue toward the ultimate goal of a Getaway Special payload to demonstrate the economical removal of orbital debris utilizing local resources in orbit. Fundamental technical feasibility was demonstrated in 1988 through theoretical calculations, quantitative computer animation, a solar focal-point cutter, a robotic arm design, and a subscale model. Last year, improvements were made to the solar cutter and the robotic arm; a mission analysis was also performed, which showed the feasibility of retrieving at least four large (greater than 1500 kg) pieces of debris. Advances made during this reporting period are the incorporation of digital control into the existing placement arm, the development of a new robotic manipulator arm, and a study of debris spin attenuation. These advances are discussed.

    Hotspot in UTP

    Get PDF
In today's world it is necessary for organizations, especially educational institutions, to keep up with the latest trends in technology. Technology changes practically every day, and it is important for these organizations to explore it. The latest hit on the market is Wi-Fi (wireless fidelity), and a hotspot is created using this Wi-Fi technology. A hotspot is a specific geographic location in which an access point provides public wireless broadband network service to mobile visitors through a WLAN. The main objective of this project is to research hotspots and establish a wireless environment to support the theories presented in this documentation. The facility is provided for those with laptops and PDAs with a wireless PC Card attached. Their machines will be free from any cables or wires connecting to a network port. They are essentially mobile users, and a wireless LAN is easy to use, as no special terminal or OS is required to access the network. In the methodology chapter, the author discusses the stages of the development cycle used in conducting the study and research for the project. The results and findings present the author's conclusions and research observations regarding the effort to establish a hotspot in UTP. The author describes the success of the research in forming a wireless environment and the factors involved in setting up the hotspot. The conclusion discusses the author's achievement in implementing the hotspot, as well as the future upgrades she suggests to improve the research study.

Access-independent service discovery for heterogeneous networks

    Get PDF
Master's in Computer Engineering and Telematics. The recent proliferation of mobile nodes with multiple wireless interfaces, in addition to the creation of heterogeneous environments, has created complex scenarios where network operators need to provide connectivity for different kinds of access networks. Therefore, the IEEE 802.21 standard has been specified to facilitate and optimize handover procedures between different access technologies in a seamless way. To fulfil its purpose, it provides Media Independent Handover (MIH) services, which allow the control and gathering of information from different links. The static configuration of these services by the mobile node (MN) becomes inefficient due to the number of possible scenarios.
Thus, the MN must discover the network nodes that provide mobility services, and their capabilities, in a dynamic way. In this work, a set of proposed Media Independent Handover discovery procedures is analyzed, implemented, and evaluated in terms of duration and amount of exchanged information. In addition, a novel discovery procedure for local entities is proposed and evaluated, showing that its deployment increases performance and requires less exchanged information.

    Correlating IPv6 addresses for network situational awareness

    Get PDF
The advent of the IPv6 protocol on enterprise networks presents fresh challenges to network incident investigators. Unlike the conventional behavior and implementation of its predecessor, the typical deployment of IPv6 presents issues with address generation (host-based autoconfiguration rather than centralized distribution), address multiplicity (multiple addresses per host simultaneously), and address volatility (randomization and frequent rotation of host identifiers). These factors make it difficult for an investigator, when reviewing a log file or packet capture ex post facto, both to identify the origin of a particular log entry/packet and to identify all log entries/packets related to a specific network entity (since multiple addresses may have been used). I have demonstrated a system, titled IPv6 Address Correlator (IPAC), that allows incident investigators to match both a specific IPv6 address to a network entity (identified by its MAC address and the physical switch port to which it is attached) and a specific entity to the set of IPv6 addresses in use within an organization's networks at any given point in time. The system relies on the normal operation of the Neighbor Discovery Protocol for IPv6 (NDP) and on bridge forwarding table notifications from Ethernet switches to keep a record of IPv6 and MAC address usage over time. With this information, it is possible to pair each IPv6 address to a MAC address and each MAC address to a physical switch port. When the IPAC system is deployed throughout an organization's networks, the aggregated IPv6 and MAC addressing timeline information can be used to identify which host caused an entry in a log file or sent/received a captured packet, as well as to correlate all packets or log entries related to a given host.
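The correlation step the abstract describes (IPv6-to-MAC bindings learned from NDP, MAC-to-port mappings from bridge tables, queried over a timeline) can be sketched as follows. This is an illustrative reconstruction under assumed data shapes, not IPAC's actual implementation; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Binding:
    addr: str    # IPv6 address observed via NDP
    mac: str     # MAC address it was bound to
    start: float # first seen (epoch seconds)
    end: float   # last seen

class Correlator:
    def __init__(self):
        self.bindings = []  # IPv6 <-> MAC pairings over time
        self.ports = {}     # MAC -> switch port (from bridge forwarding tables)

    def record(self, addr, mac, start, end):
        self.bindings.append(Binding(addr, mac, start, end))

    def learn_port(self, mac, port):
        self.ports[mac] = port

    def who_had(self, addr, when):
        # Resolve a logged IPv6 address to (MAC, switch port) at a given time.
        for b in self.bindings:
            if b.addr == addr and b.start <= when <= b.end:
                return b.mac, self.ports.get(b.mac)
        return None

    def addresses_of(self, mac):
        # All IPv6 addresses a host used (address multiplicity and rotation).
        return {b.addr for b in self.bindings if b.mac == mac}
```

Given such a timeline, a log entry's source address resolves to one host even after its temporary addresses have rotated.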

    Linking session based services with transport plane resources in IP multimedia subsystems.

    Get PDF
The massive success and proliferation of Internet technologies has forced network operators to recognise the benefits of an IP-based communications framework. The IP Multimedia Subsystem (IMS) has been proposed as a candidate technology to provide a non-disruptive strategy in the move to all-IP and to facilitate the true convergence of data and real-time multimedia services. Despite the obvious advantages of creating a controlled environment for deploying IP services, and hence increasing the value of the telco bundle, there are several challenges that face IMS deployment. The most critical is that posed by the widespread proliferation of Web 2.0 services. This environment is not seen as robust enough to be used by network operators for revenue generating services. However IMS operators will need to justify charging for services that are typically available free of charge in the Internet space. Reliability and guaranteed transport of multimedia services by the efficient management of resources will be critical to differentiate IMS services. This thesis investigates resource management within the IMS framework. The standardisation of NGN/IMS resource management frameworks has been fragmented, resulting in weak functional and interface specifications. To facilitate more coherent, focused research and address interoperability concerns that could hamper deployment, a Common Policy and Charging Control (PCC) architecture is presented that defines a set of generic terms and functional elements. A review of related literature and standardisation reveals severe shortcomings regarding vertical and horizontal coordination of resources in the IMS framework. The deployment of new services should not require QoS standardisation or network upgrade, though in the current architecture advanced multimedia services are not catered for. It has been found that end-to-end QoS mechanisms in the Common PCC framework are elementary.
To address these challenges and assist network operators when formulating their NGN strategies, this thesis proposes an application-driven policy control architecture that incorporates end-user and service requirements into the QoS negotiation procedure. This architecture facilitates full interaction between the service control and resource control planes, and between application developers and the policies that govern resource control. Furthermore, a novel session-based end-to-end policy control architecture is proposed to support inter-domain coordination across IMS domains. This architecture uses SIP-inherent routing information to discover the routes traversed by the signalling and the associated routes traversed by the media. This mechanism effectively allows applications to issue resource requests from their home domain and enables end-to-end QoS connectivity across all traversed transport segments. Standard interfaces are used, and a transport plane overhaul is not necessary for this functionality. The Common PCC, application-driven, and session-based end-to-end architectures are implemented in a standards-compliant and entirely open-source practical testbed. This demonstrates proof of concept and provides a platform for performance evaluations. It has been found that while there is a cost in delay and traffic overhead when implementing the complete architecture, this cost falls within established criteria and will have an acceptable effect on the end-user experience. The open nature of the practical testbed ensures that all evaluations are fully reproducible, and it provides a convenient point of departure for future work. While it is important to leave room for flexibility and vendor innovation, it is critical that the harmonisation of NGN/IMS resource management frameworks takes place and that the architectures proposed in this thesis be further developed and integrated into a single set of specifications.
The alternative is general interoperability issues that could render end-to-end QoS provisioning for advanced multimedia services almost impossible.

    Robot graphic simulation testbed

    Get PDF
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (a graphical simulation system) were improved and extended by taking advantage of advanced graphics workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high-resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and the proposed control systems. The automation testbed was designed to facilitate studies of Space Station automation concepts.
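Collision detection of the kind the abstract mentions is commonly bootstrapped with a broad-phase test on axis-aligned bounding boxes before any exact geometry check. The following is a generic sketch of that standard technique, not ROBOSIM's actual algorithm.

```python
def aabb_overlap(a, b):
    """Broad-phase collision test: two axis-aligned boxes overlap iff
    their intervals overlap on every axis.

    Each box is (min_x, min_y, min_z, max_x, max_y, max_z)."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))
```

Only pairs that pass this cheap test need the expensive narrow-phase check against the full solid models.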

    AUTOMATED NETWORK SECURITY WITH EXCEPTIONS USING SDN

    Get PDF
Campus networks have recently experienced a proliferation of devices ranging from personal-use devices (e.g. smartphones, laptops, tablets) to special-purpose network equipment (e.g. firewalls, network address translation boxes, network caches, load balancers, virtual private network servers, and authentication servers), as well as special-purpose systems (badge readers, IP phones, cameras, location trackers, etc.). To establish directives and regulations regarding the ways in which these heterogeneous systems are allowed to interact with each other and with the network infrastructure, organizations typically appoint policy writing committees (PWCs) to create acceptable use policy (AUP) documents describing the rules and behavioral guidelines that all campus network interactions must abide by. While users are the audience for the AUP documents produced by an organization's PWC, network administrators are the party responsible for enforcing the contents of such policies, using low-level CLI instructions and configuration files that are typically difficult to understand and make it almost impossible to show that they do, in fact, enforce the AUPs. In other words, mapping the contents of imprecise, unstructured sentences into technical configurations is a challenging task that relies on the interpretation and expertise of the network operator carrying out the policy enforcement. Moreover, there are multiple places where policy enforcement can take place. For example, policies governing servers (e.g., web, mail, and file servers) are often encoded into the server's configuration files. However, from a security perspective, conflating policy enforcement with server configuration is a dangerous practice, because minor server misconfigurations could open up avenues for security exploits.
On the other hand, policies that are enforced in the network tend to change rarely over time and are often based on one-size-fits-all rules that can severely limit the fast-paced dynamics of emerging research workflows found in campus networks. This dissertation addresses the above problems by leveraging recent advances in Software-Defined Networking (SDN) to support systems that enable novel in-network approaches to an organization's network security policies. Namely, we introduce PoLanCO, a human-readable yet technically precise policy language that serves as a middle ground between the imprecise statements found in AUPs and the technical low-level mechanisms used to implement them. Real-world examples show that PoLanCO is capable of implementing a wide range of policies found in campus networks. In addition, we present the concept of Network Security Caps, an enforcement layer that separates server/device functionality from policy enforcement. A Network Security Cap intercepts packets coming from, and going to, servers and ensures policy compliance before allowing network devices to process packets using the traditional forwarding mechanisms. Lastly, we propose the on-demand security exceptions model to cope with the dynamics of emerging research workflows that are not suited to a one-size-fits-all security approach. In the proposed model, network users and providers establish trust relationships that can be used to temporarily bypass the policy compliance checks applied to general-purpose traffic, typically performed by network appliances that carry out Deep Packet Inspection and thereby create network bottlenecks. We describe the components of a prototype exception system, as well as experiments showing that, through short-lived exceptions, researchers can realize significant improvements for their special-purpose traffic.
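The core of the on-demand exception model described above is a time-limited bypass decision per flow: while an exception is live, traffic skips the DPI path; once it expires, traffic reverts to normal compliance checks. A minimal control-plane sketch of that bookkeeping follows; it is not the dissertation's prototype, and the class and method names are hypothetical.

```python
import time

class ExceptionTable:
    """Short-lived security exceptions keyed by a (src, dst) flow."""

    def __init__(self):
        self._table = {}  # (src, dst) -> expiry timestamp (epoch seconds)

    def grant(self, src, dst, ttl, now=None):
        # Record a trust-based exception valid for ttl seconds.
        now = time.time() if now is None else now
        self._table[(src, dst)] = now + ttl

    def bypass(self, src, dst, now=None):
        # True  -> forward directly (exception still live);
        # False -> steer through the DPI appliance as usual.
        now = time.time() if now is None else now
        expiry = self._table.get((src, dst))
        if expiry is None or now > expiry:
            self._table.pop((src, dst), None)  # expired entries are purged
            return False
        return True
```

In an SDN deployment, a controller would consult such a table when installing flow rules, so the expiry directly bounds how long special-purpose traffic avoids the bottleneck path.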