97 research outputs found

    Optical fibre local area networks


    Experience with Fibre Channel in the environment of the ATLAS DAQ prototype "-1" project

    Fibre Channel equipment has been evaluated in the environment of the ATLAS DAQ prototype "-1". Fibre Channel PCI and PMC cards have been tested on PowerPC-based VME processor boards running LynxOS and on Pentium-based personal computers running Windows NT. The performance in terms of overhead and bandwidth has been measured in point-to-point, arbitrated loop and fabric configurations with a Fibre Channel switch. The possible use of the equipment for event building in the ATLAS DAQ prototype "-1" has been studied.
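
    For context, overhead and bandwidth measurements of this kind are commonly summarised with a simple linear cost model in which a message transfer costs a fixed per-message software overhead plus its serialisation time on the link. The sketch below illustrates that model only; the overhead and link-rate values are purely illustrative and are not results from the paper.

```python
# Minimal sketch of the overhead/bandwidth model typically behind
# point-to-point throughput measurements. All numbers are hypothetical.

def transfer_time(msg_bytes, overhead_s, link_bytes_per_s):
    """Point-to-point model: fixed per-message overhead plus
    serialisation time of the payload on the link."""
    return overhead_s + msg_bytes / link_bytes_per_s

def effective_bandwidth(msg_bytes, overhead_s, link_bytes_per_s):
    """Effective (application-level) bandwidth for a given message size."""
    return msg_bytes / transfer_time(msg_bytes, overhead_s, link_bytes_per_s)

if __name__ == "__main__":
    overhead = 50e-6   # hypothetical 50 us per-message software overhead
    link = 100e6       # hypothetical 100 MB/s link rate
    for size in (1_024, 64 * 1_024, 1_024 * 1_024):
        bw = effective_bandwidth(size, overhead, link)
        print(f"{size:>8} B -> {bw / 1e6:6.1f} MB/s effective")
```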

    Design of a Data Communication Network for the Institución Educativa Privada Emilio Soyer Cabero, Located in the District of Chorrillos, Lima, Peru

    This research work, titled "Diseño de Red de Comunicación de Datos para la Institución Educativa Privada Emilio Soyer Cabero Ubicada en el Distrito de Chorrillos, Lima, Perú", was submitted by the student Jhaset Raúl Ortega Cubas for the degree of Electronic and Telecommunications Engineer. It first addresses the observed problem: the importance of, and need for, designing a data communication network that provides the Institución Educativa Privada Emilio Soyer Cabero with an information transmission system interconnecting all of the institution's network devices, for the benefit of its staff, teachers and students. The project is organised into three chapters: the first presents the problem statement, the second develops the theoretical framework, and the third covers the development of the design.

    Service management for multi-domain Active Networks

    The Internet is an example of a multi-agent system. In our context, an agent may be a network operator, an Internet service provider (ISP) or a content provider. ISPs interact with one another for connectivity's sake, but the fact remains that two peering agents are inevitably self-interested. This egoistic behaviour manifests itself in two ways. Firstly, ISPs act in an environment in which different ISPs have different spheres of influence, in the sense that each has control and management responsibilities over a different part of the environment. Secondly, contention occurs when an ISP sells resources to another, which leads to at least two of its customers sharing (and hence contending for) a common transport medium. The multi-agent interaction was analysed with a game-theoretic simulation, and the alignment of dominant strategies adopted by agents with evolving traits was abstracted. In particular, contention for network resources is arbitrated such that a self-policing environment may emerge from a congested bottleneck. Over the past five years, larger ISPs have simply pedalled as fast as they could to meet the growing demand for bandwidth, throwing bandwidth at congestion problems. Today, the dire financial positions of WorldCom and Global Crossing illustrate, to a certain degree, the fallacies of over-provisioning network resources. The framework proposed in this thesis enables subscribers of an ISP to monitor and police each other's traffic in order to establish a well-behaved norm in utilising limited resources. This framework can be extended to other inter-domain bottlenecks within the Internet.
    One of the main objectives of this thesis is also to investigate the impact on multi-domain service management in the future Internet, where active nodes could potentially be located amongst traditional passive routers. The advent of Active Networking technology necessitates node-level allocation of computational resources, in addition to the prevailing resource-reservation approaches for communication bandwidth. Our motivation is to ensure that a service negotiation protocol takes account of these resources so that the response to a specific service deployment request from the end user is consistent and predictable. To promote the acceleration of service deployment by means of Active Networking technology, a pricing model is also evaluated for computational resources (e.g., CPU time and memory); previous work in these areas concentrates only on bandwidth-related (i.e., communication) resources. Our pricing approach takes account of both guaranteed and best-effort service by adapting the arbitrage theorem from financial theory. The central tenet of our approach is to synthesise insights from different disciplines to address problems in data networks. The greater part of the research experience was obtained through direct and indirect participation in the IST-10561 project known as FAIN (Future Active IP Networks) and the ACTS-AC338 project called MIAMI (Mobile Intelligent Agent for Managing the Information Infrastructure). The Inter-domain Manager (IDM) component was integrated as an integral part of the FAIN policy-based network management (PBNM) system. Its monitoring component (developed during the MIAMI project) learns about routing changes that occur within a domain so that the management system and the managed nodes have the same topological view of the network. This enabled our reservation mechanism to reserve resources along the existing route set up by whichever underlying routing protocol is in place.
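
    As background to the pricing approach mentioned above, the following is a generic textbook statement of the arbitrage theorem, independent of the thesis's specific mapping of CPU time and memory onto priced assets.

```latex
% Arbitrage theorem (generic form): for a market with $n$ assets and $m$
% future states, price vector $p \in \mathbb{R}^n$ and payoff matrix
% $R \in \mathbb{R}^{n \times m}$ ($R_{ij}$ = payoff of asset $i$ in state $j$),
% exactly one of the following holds:
\begin{itemize}
  \item there exists a strictly positive state-price vector
        $\psi \in \mathbb{R}^m$, $\psi > 0$, such that $p = R\,\psi$
        (no arbitrage); or
  \item there exists a portfolio $x \in \mathbb{R}^n$ with
        $x^{\top} p \le 0$ and $x^{\top} R \ge 0$, with at least one
        inequality strict (an arbitrage opportunity).
\end{itemize}
```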

    Energy reconstruction on the LHC ATLAS TileCal upgraded front end: feasibility study for a sROD co-processing unit

    Dissertation presented in fulfilment of the requirements for the degree of Master of Science in Physics, 2016. The Phase-II upgrade of the Large Hadron Collider at CERN in the early 2020s will enable an order of magnitude increase in the data produced, unlocking the potential for new physics discoveries. In the ATLAS detector, the upgraded Hadronic Tile Calorimeter (TileCal) Phase-II front end read out system is currently being prototyped to handle a total data throughput of 5.1 TB/s, up from the current 20.4 GB/s. The FPGA based Super Read Out Driver (sROD) prototype must perform an energy reconstruction algorithm on 2.88 GB/s of raw data, or 275 million events per second. Due to the very high level of proficiency required and the time consuming nature of FPGA firmware development, it may be more effective to implement certain complex energy reconstruction and monitoring algorithms on a general purpose, CPU based sROD co-processor. Hence, the feasibility of a general purpose ARM System on Chip based co-processing unit (PU) for the sROD is determined in this work. A PCI-Express test platform was designed and constructed to link two ARM Cortex-A9 SoCs via their PCI-Express Gen-2 x1 interfaces. Test results indicate that the latency of the PCI-Express interface is sufficiently low and the data throughput is superior to that of alternative interfaces such as Ethernet, for use as an interconnect between the SoCs and the sROD. CPU performance benchmarks were performed on five ARM development platforms to determine CPU integer, floating point and memory system performance as well as energy efficiency. To complement the benchmarks, Fast Fourier Transform and Optimal Filtering (OF) applications were also tested. Based on the test results, in order for the PU to process 275 million events per second with OF, within the 6 μs timing budget of the ATLAS triggering system, a cluster of three Tegra-K1, Cortex-A15 SoCs connected to the sROD via a Gen-2 x8 PCI-Express interface would be suitable. A high level design for the PU is proposed which surpasses the requirements for the sROD co-processor and can also be used in a general purpose, high data throughput system, with 80 Gb/s Ethernet and 15 GB/s PCI-Express throughput, using four X-Gene SoCs.
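
    The Optimal Filtering algorithm referred to above reconstructs the energy of each event as a weighted linear combination of the digitised pulse samples. The sketch below shows only that core computation; the weights and samples are illustrative placeholders rather than TileCal calibration constants (real OF weights are derived from the measured pulse shape and noise covariance, and are chosen to sum to zero so that the pedestal cancels).

```python
# Minimal sketch of Optimal Filtering (OF) amplitude reconstruction:
# the energy estimate is a weighted sum of the digitised samples.
import numpy as np

def of_amplitude(samples, weights):
    """Amplitude estimate A = sum_i a_i * s_i for one event."""
    return float(np.dot(weights, samples))

if __name__ == "__main__":
    # Hypothetical weights (sum to zero for pedestal immunity) and 7 ADC samples.
    weights = np.array([-0.38, -0.36, 0.18, 0.82, 0.30, -0.21, -0.35])
    samples = np.array([51.0, 50.0, 380.0, 810.0, 560.0, 240.0, 110.0])
    print(f"reconstructed amplitude ~ {of_amplitude(samples, weights):.1f} ADC counts")
```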

    A study into scalable transport networks for IoT deployment

    The growth of the internet towards the Internet of Things (IoT) has impacted the way we live. Intelligent (smart) devices which can act autonomously have resulted in new applications, for example industrial automation, smart healthcare systems and autonomous transportation, to name just a few. These applications have dramatically improved the way we live as citizens. While the internet continues to grow at an unprecedented rate, this growth has been coupled with growing demands for new services, e.g. machine-to-machine (M2M) communications and smart metering. The Transmission Control Protocol/Internet Protocol (TCP/IP) architecture was developed decades ago and was neither prepared nor designed to meet these exponential demands. This has left the internet complex, inflexible and rigid. The challenges of reliability, scalability, interoperability, inflexibility and vendor lock-in, amongst many others, remain a concern for existing (traditional) networks. In this study, an evolutionary approach to implementing a "Scalable IoT Data Transmission Network" (S-IoT-N) is proposed while leveraging existing transport networks. Most importantly, the proposed evolutionary approach attempts to address the above challenges by using open (existing) standards and by leveraging the (traditional/existing) transport networks. A Proof-of-Concept (PoC) of the proposed S-IoT-N is attempted on a physical network testbed and is demonstrated along with basic network connectivity services over it. Finally, the results are validated by an experimental performance evaluation of the PoC physical network testbed, along with recommendations for improvement and future work.

    A Fully Userspace Remote Storage Access Stack

    As computer networking has evolved and the available throughput has increased, the efficiency of the network software stack has become increasingly important. This is because the latency introduced by software has gone from insignificant, compared to historically poor network performance, to the largest component of latency for a modern local-area network. Currently, the vast majority of code that accesses the hardware is part of the kernel, because the kernel is responsible for ensuring that user applications do not interfere with each other when accessing the hardware. Remote Direct Memory Access (RDMA) provides a solution for applications to perform direct data transfers over the network without requiring context switches into the kernel, but relies instead on specialized hardware interfaces to handle the virtual address mappings and transport protocols. This more intelligent hardware allows for direct control from the userspace application, eliminating the cost of context switches into the kernel. This in turn reduces the overall latency of message transfers. Just like networking, storage is currently undergoing a similar evolution. For most of the recent history of computing, the most common durable storage mechanism has been mechanical hard disk drives, which can only be accessed at block level and have high latency compared to the software drivers used to access the data. However, the introduction of solid state disks (SSDs) based on Flash significantly decreased the latency, as there are no mechanical parts that need to move to access the data. Upcoming non-volatile memory solutions reduce this latency even further, and even allow byte-level access to the storage medium. Thus, just like with networking, software drivers become the bottleneck and we look for solutions to bypass the kernel to improve the efficiency of direct userspace access to storage. This thesis offers two contributions as part of a solution to these problems. The first part introduces urdma, a software RDMA driver which leverages the Data Plane Development Kit (DPDK) to perform network data transfers in userspace without specialized RDMA interface hardware. The second part examines remote locking protocols, which are required for synchronization in distributed storage systems. We define an RDMA locking mechanism referred to as Verbs Offload Locking Technology (VOLT), which allows acquisition of a remote lock object without any CPU usage by the target node. This offloading allows VOLT to be used with disaggregated memory servers that have limited onboard CPU resources, while also lowering the application overhead for remote locking. Finally, we define a bytecode framework using enhanced Berkeley Packet Filter (eBPF) bytecode for extending the capabilities of an RDMA-capable network interface card (NIC) with new operations, and show how this can be used to implement our remote locking operation
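
    The remote locking idea described above rests on the general pattern of acquiring a lock with a remote atomic compare-and-swap, so the target node's CPU never runs the locking logic. The sketch below is a local simulation of that pattern only; the class and function names are hypothetical and do not correspond to the urdma or VOLT interfaces.

```python
# Conceptual sketch of lock acquisition via a remote atomic compare-and-swap.
import time

UNLOCKED = 0

class RemoteLockWord:
    """Stands in for a 64-bit lock word in a remote node's registered memory."""
    def __init__(self):
        self.value = UNLOCKED

    def compare_and_swap(self, expected, desired):
        """Atomic CAS as the remote NIC would execute it; returns the old value."""
        old = self.value
        if old == expected:
            self.value = desired
        return old

def acquire(lock, owner_id, retries=100, backoff_s=0.001):
    """Spin on remote CAS until the lock word flips from UNLOCKED to owner_id."""
    for _ in range(retries):
        if lock.compare_and_swap(UNLOCKED, owner_id) == UNLOCKED:
            return True
        time.sleep(backoff_s)  # back off before re-issuing the CAS
    return False

def release(lock, owner_id):
    """Release by swapping owner_id back to UNLOCKED."""
    return lock.compare_and_swap(owner_id, UNLOCKED) == owner_id

if __name__ == "__main__":
    lock = RemoteLockWord()
    assert acquire(lock, owner_id=42)
    assert not acquire(lock, owner_id=7, retries=3)  # contender fails while held
    assert release(lock, owner_id=42)
```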

    Primary schools building handbook. Section 3a

    "This document provides the general design requirements and room inter-relationships applicable to the design of primary schools." - introduction