Advanced Message Routing for Scalable Distributed Simulations
The Joint Forces Command (JFCOM) Experimentation Directorate (J9)'s recent Joint Urban Operations (JUO)
experiments have demonstrated the viability of Forces Modeling and Simulation in a distributed environment. The
JSAF application suite, combined with the RTI-s communications system, provides the ability to run distributed
simulations with sites located across the United States, from Norfolk, Virginia to Maui, Hawaii. Interest-aware
routers are essential for communications in these large, distributed environments, and the current RTI-s
framework provides such routers connected in a straightforward tree topology. This approach is successful for small
to medium-sized simulations, but faces a number of significant limitations for very large simulations over
high-latency, wide-area networks. In particular, traffic is forced through a single site, drastically increasing the
distances messages must
travel to sites not near the top of the tree. Aggregate bandwidth is limited to the bandwidth of the site hosting the
top router, and failures in the upper levels of the router tree can result in widespread communications losses
throughout the system.
To resolve these issues, this work extends the RTI-s software router infrastructure to accommodate more
sophisticated, general router topologies, including both the existing tree framework and a new generalization of the
fully connected mesh topologies used in the SF Express ModSAF simulations of 100K fully interacting vehicles.
The new software router objects incorporate the scalable features of the SF Express design, while optionally using
low-level RTI-s objects to perform actual site-to-site communications. The (substantial) limitations of the original
mesh router formalism have been eliminated, allowing fully dynamic operations. The mesh topology capabilities
allow aggregate bandwidth and site-to-site latencies to match actual network performance. The heavy resource load at
the root node can now be distributed across routers at the participating sites.
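To make the topology trade-off concrete, here is a minimal Python sketch contrasting message paths in the two router graphs; the Router class and site names are invented for illustration and are not the RTI-s objects themselves:

```python
from collections import deque

class Router:
    """Illustrative software router node (not the RTI-s API)."""
    def __init__(self, name):
        self.name = name
        self.links = {}    # neighbor name -> Router

    def connect(self, other):
        self.links[other.name] = other
        other.links[self.name] = self

def route(src, dst_name):
    """Breadth-first search for the forwarding path. In a tree, every
    cross-site message climbs toward the root; in a mesh it goes direct."""
    seen, queue = {src.name}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1].name == dst_name:
            return [router.name for router in path]
        for neighbor in path[-1].links.values():
            if neighbor.name not in seen:
                seen.add(neighbor.name)
                queue.append(path + [neighbor])

# Tree topology: all sites hang off a root router at one hosting site.
root, norfolk, maui = Router("root"), Router("norfolk"), Router("maui")
root.connect(norfolk); root.connect(maui)
print(route(norfolk, "maui"))   # ['norfolk', 'root', 'maui']

# Mesh topology: sites peer directly, so latency tracks the real network.
norfolk2, maui2 = Router("norfolk"), Router("maui")
norfolk2.connect(maui2)
print(route(norfolk2, "maui"))  # ['norfolk', 'maui']
```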
Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results
Fixed and mobile telecom operators, enterprise network operators and cloud
providers strive to face the challenging demands coming from the evolution of
IP networks (e.g. huge bandwidth requirements, integration of billions of
devices and millions of services in the cloud). Proposed in the early 2010s,
the Segment Routing (SR) architecture helps meet these demands, and it
is currently being adopted and deployed. SR architecture is based on the
concept of source routing and has interesting scalability properties, as it
dramatically reduces the amount of state information to be configured in the
core nodes to support complex services. SR architecture was first implemented
with the MPLS dataplane and then, quite recently, with the IPv6 dataplane
(SRv6). The SRv6 architecture has been extended from the simple steering
of packets across nodes to a general network programming approach, making it
very suitable for use cases such as Service Function Chaining and Network
Function Virtualization. In this paper we present a tutorial and a
comprehensive survey on SR technology, analyzing standardization efforts,
patents, research activities and implementation results. We start with an
introduction on the motivations for Segment Routing and an overview of its
evolution and standardization. Then, we provide a tutorial on Segment Routing
technology, with a focus on the novel SRv6 solution. We discuss the
standardization efforts and the patents providing details on the most important
documents and mentioning other ongoing activities. We then thoroughly analyze
research activities according to a taxonomy. We have identified 8 main
categories during our analysis of the current state of play: Monitoring,
Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path
Encoding, Network Programming, Performance Evaluation, and Miscellaneous.
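To ground the segment-steering concept the survey covers, the toy Python model below mimics SRv6's basic "End" behavior from RFC 8754 (decrement Segments Left, rewrite the destination address); the addresses and field layout are simplified for illustration:

```python
from dataclasses import dataclass

@dataclass
class SRv6Packet:
    """Toy IPv6 packet with a Segment Routing Header. As in RFC 8754,
    segment_list[0] holds the LAST segment of the path and
    segments_left indexes the active segment."""
    segment_list: list
    segments_left: int
    dst: str                 # IPv6 destination address

def srv6_end(pkt):
    """The basic SRv6 'End' behavior at a segment endpoint: decrement
    Segments Left and rewrite the destination to the next segment."""
    assert pkt.segments_left > 0, "final segment reached; no SRH processing"
    pkt.segments_left -= 1
    pkt.dst = pkt.segment_list[pkt.segments_left]

# Steer a packet through fc00::1 -> fc00::2 -> fc00::3 (made-up addresses).
pkt = SRv6Packet(segment_list=["fc00::3", "fc00::2", "fc00::1"],
                 segments_left=2, dst="fc00::1")
srv6_end(pkt); print(pkt.dst)   # fc00::2
srv6_end(pkt); print(pkt.dst)   # fc00::3
```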
STATMOND: A Peer-To-Peer Status And Performance Monitor For Dynamic Resource Allocation On Parallel Computers
This thesis presents STATMOND, a decentralized tool to monitor the status of a peer-to-peer network. STATMOND provides an accurate measurement scheme for parameters such as CPU load and memory utilization on Linux clusters. The services of STATMOND are ubiquitous in that each computer measures and forwards its data over the network and also maintains the data of other nodes in memory. The data are periodically updated, and users on any node can 'see' the status and performance of the network based on these parameters. This thesis describes the problems confronting cluster computing, the necessity of monitoring tools, and how STATMOND can be a step towards better allocation of resources for dynamic computing.
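As a rough illustration of the measurements STATMOND gathers, the sketch below samples CPU load and memory utilization from the Linux /proc filesystem and pushes the result to peers over UDP; the JSON message format and port number are assumptions for the example, not STATMOND's actual protocol:

```python
import json, socket, time

def local_status():
    """Sample CPU load and memory utilization from the Linux /proc
    filesystem (illustrative; not STATMOND's actual record format)."""
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])       # values in kB
    used = meminfo["MemTotal"] - meminfo["MemAvailable"]
    return {"host": socket.gethostname(),
            "time": time.time(),
            "load1": load1,
            "mem_used_pct": 100.0 * used / meminfo["MemTotal"]}

def broadcast_status(peers, port=9099):
    """Push the local sample to every known peer over UDP; each node
    keeps the latest sample per host, so any node can 'see' the whole
    network's status. The port and JSON encoding are assumptions."""
    payload = json.dumps(local_status()).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for peer in peers:
            sock.sendto(payload, (peer, port))
```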
Network emulation focusing on QoS-Oriented satellite communication
This chapter presents the basics of network emulation and a complete case study of QoS-oriented satellite communication.
Data Movement Challenges and Solutions with Software Defined Networking
With the recent rise in cloud computing, applications routinely access and interact with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users often report poor experiences with their devices and applications, which can frequently be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is lauded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet expressive framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network. There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently.

First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users frequently access such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is frequently visible to the user through diminished quality of service (e.g. rebuffering in video streaming applications). Many intra-domain handover solutions exist for handovers in WiFi and cellular networks, such as mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces.
An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow-enabled network. Evaluation revealed users can maintain existing application connections without breaking the sockets and requiring the application to recover.

Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, often combined with their locations being far from the applications, the well known impact of lower Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based, to production networks, to scaled-up, high performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS agents to the deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve nearly forty times the throughput of the same transfer without SOS assistance.
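The core throughput idea, stripped of the OpenFlow machinery, is to carry one logical transfer over several TCP connections in parallel. The simplified sketch below shows that idea at the application layer; the real SOS instead intercepts an unmodified client's single connection transparently inside the network, and the host name here is invented:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def send_chunk(host, port, index, chunk):
    """Carry one slice of the transfer on its own TCP connection, so
    each connection's window only has to cover a fraction of the data."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(index.to_bytes(4, "big") + chunk)

def parallel_send(host, port, data, streams=8):
    """Split one payload across several TCP connections in flight at
    once; a receiver would reassemble the indexed chunks in order."""
    size = -(-len(data) // streams)              # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        for index, chunk in enumerate(chunks):
            pool.submit(send_chunk, host, port, index, chunk)

# parallel_send("data.example.org", 5001, payload) would move `payload`
# over eight concurrent connections instead of one.
```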
Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state of the art solutions consist of vast content distribution networks (CDNs) where content is often hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on demand production and distribution of live, streaming content. IP multicast is a popular solution to scalable video content distribution; however, it is seldom used due to deployment and operational complexity. Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the distribution of live video content at scale. GC allows for the efficient management and distribution of live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams with improved switching latency over cable, satellite, and other live video providers. The real-world deployments and evaluation of the proposed solutions show how SDN can be used as a novel way to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.
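As a toy illustration of the GC channel-change idea described above: every stream keeps flowing in the network core, and tuning a viewer to a different stream reduces to a single forwarding-table update. The class and method names below are invented for the example, not taken from the GC codebase:

```python
class CinemaSwitch:
    """Toy model of the GC idea: every live stream is always flowing
    in the network core, and a viewer changes channel via one
    forwarding-table update instead of tearing down and
    re-establishing a video connection."""

    def __init__(self, streams):
        self.streams = set(streams)
        self.flow_table = {}          # viewer port -> selected stream

    def tune(self, viewer_port, stream):
        assert stream in self.streams, "unknown stream"
        self.flow_table[viewer_port] = stream   # one O(1) "flow-mod"

switch = CinemaSwitch({"keynote", "classroom", "stadium"})
switch.tune(viewer_port=1, stream="keynote")
switch.tune(viewer_port=1, stream="stadium")   # instant channel change
print(switch.flow_table)                       # {1: 'stadium'}
```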
A SECURITY-CENTRIC APPLICATION OF PRECISION TIME PROTOCOL WITHIN ICS/SCADA SYSTEMS
Industrial Control System and Supervisory Control and Data Acquisition (ICS/SCADA) systems are key pieces of larger infrastructure that are responsible for safely operating transportation, industrial operations, and military equipment, among many other applications. ICS/SCADA systems rely on precise timing and clear communication paths between control elements and sensors. Because ICS/SCADA system designs place a premium on timeliness and availability of data, security ended up as an afterthought, stacked on top of existing (insecure) protocols. As precise timing is already resident and inherent in most ICS/SCADA systems, a unique opportunity is presented to leverage existing technology to potentially enhance the security of these systems. This research seeks to evaluate the utility of timing as a mechanism to mitigate certain types of malicious cyber-based operations such as a man-on-the-side (MotS) attack. By building a functioning ICS/SCADA system and communication loop that incorporates precise timing strategies in the reporting and control loop, specifically the precision time protocol (PTP), it was shown that certain kinds of MotS attacks can be mitigated by leveraging precise timing.
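A minimal sketch of the kind of timing check this enables, assuming PTP-synchronized clocks and an empirically learned transit-time window (the constants below are invented): a message whose one-way delay falls outside the window is flagged as a possible injection.

```python
import time

# With PTP, sensor and controller clocks agree to microsecond-level
# precision, so one-way transit time becomes a usable integrity signal.
EXPECTED_DELAY = 0.002   # seconds; would be learned from the healthy loop
TOLERANCE = 0.0005       # allowed jitter; both constants are invented

def check_message(sent_ts, received_ts=None):
    """Flag a reading whose transit time falls outside the expected
    window, as an injected man-on-the-side message likely would."""
    if received_ts is None:
        received_ts = time.time()
    delay = received_ts - sent_ts
    if abs(delay - EXPECTED_DELAY) > TOLERANCE:
        raise ValueError(f"suspect transit time: {delay * 1e3:.3f} ms")
    return delay

# A legitimate sample arrives ~2 ms after it was timestamped at the sensor.
now = time.time()
check_message(sent_ts=now - 0.002, received_ts=now)    # passes
# check_message(sent_ts=now - 0.050, received_ts=now)  # would raise
```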
Heterogeneous Cloud Systems Based on Broadband Embedded Computing
Computing systems continue to evolve from homogeneous systems of commodity-based servers within a single data-center towards modern Cloud systems that consist of numerous data-center clusters virtualized at the infrastructure and application layers to provide scalable, cost-effective and elastic services to devices connected over the Internet. There is an emerging trend towards heterogeneous Cloud systems driven from growth in wired as well as wireless devices that incorporate the potential of millions, and soon billions, of embedded devices enabling new forms of computation and service delivery. Service providers such as broadband cable operators continue to contribute towards this expansion with growing Cloud system infrastructures combined with deployments of increasingly powerful embedded devices across broadband networks. Broadband networks enable access to service provider Cloud data-centers and the Internet from numerous devices. These include home computers, smartphones, tablets, game consoles, sensor networks, and set-top box devices. With these trends in mind, I propose the concept of broadband embedded computing as the utilization of a broadband network of embedded devices for collective computation in conjunction with centralized Cloud infrastructures. I claim that this form of distributed computing results in a new class of heterogeneous Cloud systems, service delivery and application enablement. To support these claims, I present a collection of research contributions in adapting distributed software platforms that include MPI and MapReduce to support simultaneous application execution across centralized data-center blade servers and resource-constrained embedded devices. Leveraging these contributions, I develop two complete prototype system implementations to demonstrate an architecture for heterogeneous Cloud systems based on broadband embedded computing. Each system is validated by executing experiments with applications taken from bioinformatics and image processing as well as communication and computational benchmarks. This vision, however, is not without challenges. The questions on how to adapt standard distributed computing paradigms such as MPI and MapReduce for implementation on potentially resource-constrained embedded devices, and how to adapt cluster computing runtime environments to enable heterogeneous process execution across millions of devices remain open-ended. This dissertation presents methods to begin addressing these open-ended questions through the development and testing of both experimental broadband embedded computing systems and in-depth characterization of broadband network behavior. I present experimental results and comparative analysis that offer potential solutions for optimal scalability and performance for constructing broadband embedded computing systems. I also present a number of contributions enabling practical implementation of both heterogeneous Cloud systems and novel application services based on broadband embedded computing.
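To give a flavor of what adapting MapReduce to such heterogeneous workers involves, here is a self-contained word-count sketch; the process pool merely stands in for a mix of blade servers and embedded devices and is not the dissertation's actual runtime:

```python
from collections import defaultdict
from multiprocessing import Pool

def map_phase(chunk):
    """Map task kept small enough to fit a set-top-box class device."""
    counts = defaultdict(int)
    for word in chunk.split():
        counts[word] += 1
    return counts

def reduce_phase(partials):
    """Merge the partial counts produced by all workers."""
    total = defaultdict(int)
    for part in partials:
        for word, n in part.items():
            total[word] += n
    return dict(total)

def mapreduce(texts, workers=2):
    """Fan the map phase out across a worker pool; a real deployment
    would size chunks to each device's memory and CPU budget."""
    with Pool(workers) as pool:
        return reduce_phase(pool.map(map_phase, texts))

if __name__ == "__main__":
    print(mapreduce(["to be or not to be", "be here now"]))
```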
Real-time stream processing in radio astronomy
A major challenge in modern radio astronomy is dealing with the massive data
volumes generated by wide-bandwidth receivers. Such massive data rates are
often too great for a single device to cope with, and so processing must be split
across multiple devices working in parallel. These devices must work in unison
to process incoming data in real time, reduce the data volume to a manageable
size, and output a science-ready data product. The aim of this chapter is to
give a broad overview of how digital systems for radio telescopes are commonly
implemented, with a focus on real-time stream processing over multiple compute
devices.
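As a minimal sketch of the first stage such systems implement, the snippet below channelizes a wide-band stream with blockwise FFTs and hands each compute device a slice of the band; real backends use polyphase filterbanks and process continuously rather than over an array held in memory:

```python
import numpy as np

def channelize(samples, n_channels=1024):
    """First stage of a telescope's digital backend: slice the stream
    into blocks and FFT each block into frequency channels (simplified;
    polyphase filterbanks give better channel isolation)."""
    n_blocks = len(samples) // n_channels
    blocks = samples[:n_blocks * n_channels].reshape(n_blocks, n_channels)
    return np.fft.rfft(blocks, axis=1)

def split_across_devices(spectra, n_devices=4):
    """Give each compute device a contiguous slice of the band, so the
    devices can reduce their subbands in parallel and in real time."""
    return np.array_split(spectra, n_devices, axis=1)

stream = np.random.randn(1 << 20)        # stand-in for raw ADC samples
subbands = split_across_devices(channelize(stream))
print([subband.shape for subband in subbands])
```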
Design and implementation of a belief-propagation scheduler for multicast traffic in input-queued switches
Scheduling multicast traffic in input-queued switches to maximize throughput requires solving a hard combinatorial optimization problem in a very short time. This motivates the design of algorithms that are simple to implement and efficient in terms of performance. We propose a new scheduling algorithm, based on message passing and inspired by the belief propagation paradigm, meant to approximate the provably-optimal scheduling policy for multicast traffic. We design and implement both a software and a hardware version of the algorithm, the latter running on a NetFPGA. We compare the performance and the power consumption of the two versions when integrated in a software router. Our main findings are that our algorithm outperforms other centralized greedy scheduling policies, achieving a better tradeoff between complexity and performance, and that it is amenable to practical high-performance implementations.
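A heavily simplified sketch of max-product message passing for the unicast matching problem at the heart of input-queued scheduling (in the spirit of Bayati, Shah and Sharma); it is not the paper's multicast scheduler or its NetFPGA implementation, and the weight matrix is invented:

```python
import numpy as np

def bp_schedule(weights, iters=32):
    """Max-product message passing for maximum-weight matching between
    switch inputs and outputs; weights[i, j] stands for the urgency of
    serving the queue at input i destined for output j."""
    n = weights.shape[0]
    to_out = np.zeros((n, n))   # to_out[i, j]: message input i -> output j
    to_in = np.zeros((n, n))    # to_in[j, i]:  message output j -> input i
    for _ in range(iters):
        new_to_out = np.empty_like(to_out)
        new_to_in = np.empty_like(to_in)
        for i in range(n):
            for j in range(n):
                # A node bids for a partner net of its best other offer.
                new_to_out[i, j] = weights[i, j] - max(
                    to_in[k, i] for k in range(n) if k != j)
                new_to_in[j, i] = weights[i, j] - max(
                    to_out[k, j] for k in range(n) if k != i)
        to_out, to_in = new_to_out, new_to_in
    # Each input follows its strongest incoming message; a complete
    # scheduler must still detect and resolve conflicting picks.
    return {i: int(np.argmax(to_in[:, i])) for i in range(n)}

queue_weights = np.array([[4., 1., 0.],
                          [1., 2., 4.],
                          [2., 0., 1.]])
print(bp_schedule(queue_weights))   # {0: 0, 1: 2, 2: 1} for these weights
```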