
    Challenges Using the Linux Network Stack for Real-Time Communication

    Starting in the early 2000s, human-in-the-loop (HITL) simulation groups at NASA and the Air Force Research Laboratory began using the Linux network stack for some real-time communication. More recently, SpaceX has adopted Ethernet as the primary bus technology for its Falcon launch vehicles and Dragon capsules. As the Linux network stack makes its way from ground facilities to flight-critical systems, it is necessary to recognize that the stack is optimized for communication over the open Internet, which cannot provide latency guarantees. The Internet protocols and their implementation in the Linux network stack embody numerous design decisions that favor throughput over determinism and latency. These decisions often require workarounds in the application, or customization of the stack itself, to maintain a high probability of low latency on closed networks, especially if the network must be fault tolerant to single-event upsets.
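
    A minimal sketch of the kind of application-level workaround the abstract alludes to, assuming a latency-sensitive TCP connection on a Linux host; the buffer sizes and the address are illustrative placeholders, not values from the paper:

        import socket

        # TCP socket for a latency-sensitive connection on a closed network.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

        # Disable Nagle's algorithm so small messages go out immediately
        # instead of being coalesced for throughput.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

        # Shrink the socket buffers so queuing delay inside the stack stays
        # bounded; 16 KiB is an illustrative value, not a recommendation.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16384)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16384)

        # Placeholder peer address (TEST-NET), stands in for a real endpoint.
        sock.connect(("192.0.2.10", 5000))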

    From photons to big-data applications: terminating terabits

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per-system data processing. Despite this, the growth in demand, in both task variety and task complexity, continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. This work was supported by the UK Engineering and Physical Sciences Research Council Internet Project EP/H040536/1, and by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory under contract FA8750-11-C-0249.
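
    To make the terabit benchmark concrete, a back-of-envelope calculation; the 1 Tb/s rate and the full-size Ethernet frame are the usual assumptions for this kind of estimate, not figures from the paper:

        # Per-packet time budget for a system terminating 1 Tb/s of traffic.
        link_rate_bps = 1e12        # 1 Tb/s, the scalability benchmark
        frame_bits = 1500 * 8       # full-size Ethernet payload, in bits

        packets_per_second = link_rate_bps / frame_bits   # ~83 million pkt/s
        budget_ns = 1e9 / packets_per_second              # ~12 ns per packet

        print(f"{packets_per_second:.2e} packets/s, {budget_ns:.1f} ns each")

    At roughly 12 ns per packet, a 3 GHz core has on the order of 36 cycles to handle each frame, which is less than a single main-memory access; that gap is what motivates rethinking the architecture rather than scaling existing end-systems.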

    Virtualization of network I/O on modern operating systems

    Network I/O of modern operating systems is incomplete. In this network age, users and their applications are still unable to control their own traffic, even on their local host. Network I/O is a shared resource of a host machine, and traditionally, to address problems with a shared resource, systems research has virtualized the resource. It is therefore reasonable to ask whether virtualization can provide solutions to problems in the network I/O of modern operating systems, in the same way as it has for other components of computer systems, such as memory and the CPU. With the aim of establishing the virtualization of network I/O as a design principle of operating systems, this dissertation first presents a virtualization model: hierarchical virtualization of the network interface. Systematic evaluation illustrates that the model possesses the properties desirable for virtualization of network I/O, namely flexible control granularity, resource protection, partitioning of resource consumption, proper access control and generality as a control model. The implemented prototype exhibits practical performance with the expected functionality, and allows flexible and dynamic network control by users and applications, unlike existing systems designed solely for system administrators. However, because the implementation was hardcoded in the kernel source code, the prototype was incomplete in its functional coverage and flexibility. Accordingly, this dissertation investigates how to decouple OS kernels and packet-processing code through virtualization, and studies three degrees of code virtualization: limited virtualization, partial virtualization and complete virtualization. In this process, a novel programming model is presented, based on embedded Java technology, and the prototype implementation exhibits the following characteristics, which are desirable for network-code virtualization. First, users program in Java, making packet-processing code safe and simple to write. Second, anyone, even an untrusted application, can inject packet-processing code into the kernel, because code execution is isolated. Third, the prototype empirically demonstrates that such virtualization does not jeopardize system performance. These cases illustrate the advantages of virtualization, and suggest that hierarchical virtualization of network interfaces can be an effective solution to problems in the network I/O of modern operating systems, both as a control model and in implementation.
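
    To picture what "hierarchical virtualization of the network interface" could mean, a toy model; every name here (VirtualInterface, subdivide, attach_filter) is invented for illustration, since the abstract does not give the dissertation's actual interfaces:

        class VirtualInterface:
            """A node in a hierarchy of virtual network interfaces.

            Each child takes a share of its parent's bandwidth, so resource
            consumption is partitioned along the tree, and each node may
            carry its own packet handler, giving flexible control
            granularity, two of the properties the abstract lists.
            """

            def __init__(self, name, bandwidth_bps, owner):
                self.name = name
                self.bandwidth_bps = bandwidth_bps
                self.owner = owner        # basis for access control
                self.children = []
                self.filter = None        # user-supplied packet handler

            def subdivide(self, name, share, owner):
                """Create a child holding `share` of this node's bandwidth."""
                child = VirtualInterface(name, self.bandwidth_bps * share, owner)
                self.children.append(child)
                return child

            def attach_filter(self, owner, fn):
                """Attach packet-processing code; only the owner may do so."""
                if owner != self.owner:
                    raise PermissionError("not the owner of this interface")
                self.filter = fn

        # A physical NIC at the root, subdivided per user and per application.
        nic = VirtualInterface("eth0", 10e9, owner="root")
        alice = nic.subdivide("eth0/alice", share=0.5, owner="alice")
        browser = alice.subdivide("eth0/alice/browser", share=0.2, owner="alice")
        browser.attach_filter("alice", lambda pkt: pkt)   # identity filter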

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage-management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. We map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and methodology, which helps in evaluating their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping give new practitioners an easy way into this complex area of research. Comment: 46 pages, 16 figures, Technical Report
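
    The mapping step can be pictured as classifying each system along the four axes the abstract names; the axis values and the example classification below are placeholders for illustration, not categories or systems taken from the paper:

        # Axes from the abstract; the values on each axis are illustrative.
        taxonomy = {
            "architecture": {"hierarchical", "federated", "hybrid"},
            "data_transport": {"bulk transfer", "streaming"},
            "data_replication": {"static", "dynamic"},
            "scheduling": {"data-aware", "compute-only"},
        }

        # A hypothetical classification of one system against every axis.
        example_grid = {
            "architecture": "hierarchical",
            "data_transport": "bulk transfer",
            "data_replication": "dynamic",
            "scheduling": "data-aware",
        }

        # A system is validated against the taxonomy only if it takes a value
        # on every axis; values no surveyed system occupies mark the "gaps".
        for axis, categories in taxonomy.items():
            assert example_grid[axis] in categories, f"unclassified on {axis}"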