    Test, Control and Monitor System (TCMS) operations plan

    The purpose of this plan is to provide a clear understanding of the Test, Control and Monitor System (TCMS) operating environment and to describe the method of operations for TCMS. TCMS is a complex and sophisticated checkout system focused on supporting the Space Station Freedom Program (SSFP) and related activities. The plan describes the TCMS operating environment and defines operational responsibilities. NASA and the Payload Ground Operations Contractor (PGOC) will use it as a guide to manage the operation of the TCMS computer systems and associated networks and workstations. All TCMS operational functions are examined. Other plans and detailed operating procedures relating to individual operational functions are referenced within this plan. This plan augments existing Technical Support Management Directives (TSMDs), Standard Practices, and other management documentation, which will be followed where applicable.

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable transport such as TCP. To improve performance, they either let the sender select a subset of the data to transmit to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications over a lossy transport such as UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol, the Approximate Transmission Protocol (ATP), for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate-control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared with a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared with UDP.
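    The abstract describes ATP's rate control only at a high level, so the following is a minimal, hypothetical sketch of a generic loss-based rate controller in that spirit: grow the sending rate while observed loss stays under a target, and back off multiplicatively when it exceeds it. The class name, constants, and AIMD-style structure are illustrative assumptions, not the algorithm from the paper.

    ```python
    # Illustrative sketch only: a generic loss-based rate controller in
    # the spirit of ATP's description. All constants are assumptions.
    class LossBasedRateController:
        def __init__(self, rate_mbps=100.0, min_rate=1.0, max_rate=10000.0):
            self.rate = rate_mbps        # current sending rate (Mbps)
            self.min_rate = min_rate
            self.max_rate = max_rate

        def on_interval(self, sent_pkts, lost_pkts,
                        loss_target=0.02, step=10.0, backoff=0.8):
            """Adjust the rate once per control interval.

            loss_target: loss fraction tolerable for approximate traffic
            step:        additive increase (Mbps) while under the target
            backoff:     multiplicative decrease factor above the target
            """
            loss = lost_pkts / max(sent_pkts, 1)
            if loss <= loss_target:
                self.rate = min(self.rate + step, self.max_rate)     # probe up
            else:
                self.rate = max(self.rate * backoff, self.min_rate)  # back off
            return self.rate
    ```

    Backing off on observed loss rather than retransmitting is what lets an approximate flow trade a bounded amount of accuracy for bandwidth, which matches the controlled-loss goal stated in the abstract.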

    Real-Time Energy Price-Aware Anycast RWA for Scheduled Lightpath Demands in Optical Data Center Networks

    The energy consumed by data center networks, and the power consumed in transporting data to users, is considerable and constitutes a significant portion of data center costs, so the development of energy-efficient schemes is crucial. Our research considers the fixed-window traffic allocation model and an anycast routing scheme to select the best destination node. Proper routing and an appropriate choice among content replicas can reduce energy utilization and, at the same time, help lower costs for the data centers. We also consider a real-time pricing model, in which the electricity price changes every hour, when selecting routes for the lightpaths. Hence, we propose an ILP to handle the energy-aware routing and wavelength assignment (RWA) problem for the fixed-window scheduled traffic model, with the objective of minimizing the overall electricity cost of a datacenter network by reducing actual power consumption and using low-cost resources whenever possible.
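    As a rough illustration of the kind of formulation the abstract describes, the sketch below states a toy energy-aware anycast assignment as an ILP in PuLP: each scheduled demand picks one (replica, route) option, and the objective charges each option its route power times the summed hourly electricity price over the demand's fixed window. The sets, parameters, and names (options, power_kw, window_price) are invented for illustration; the paper's full ILP, with its RWA constraints, is not reproduced here.

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # Candidate (replica, route) options per scheduled demand; all
    # numbers are made up for illustration.
    options = {
        "d1": ["dcA_r1", "dcB_r2"],
        "d2": ["dcA_r3"],
    }
    power_kw = {"dcA_r1": 0.12, "dcB_r2": 0.09, "dcA_r3": 0.15}
    window_price = {   # sum of hourly prices ($/kWh) over each window
        ("d1", "dcA_r1"): 0.30, ("d1", "dcB_r2"): 0.42,
        ("d2", "dcA_r3"): 0.25,
    }

    prob = LpProblem("energy_aware_anycast_rwa", LpMinimize)
    x = {(d, o): LpVariable(f"x_{d}_{o}", cat=LpBinary)
         for d in options for o in options[d]}

    # Objective: total electricity cost = route power * summed price
    prob += lpSum(x[d, o] * power_kw[o] * window_price[d, o]
                  for d in options for o in options[d])

    # Each demand is served by exactly one replica/route; wavelength-
    # continuity and capacity constraints are omitted in this sketch.
    for d in options:
        prob += lpSum(x[d, o] for o in options[d]) == 1

    prob.solve()
    print({k: v.value() for k, v in x.items()})
    ```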

    Survivability with Adaptive Routing and Reactive Defragmentation in IP-over-EON after a Router Outage

    A router outage in the IP layer can cause survivability problems in IP-over-elastic-optical networks, affecting the existing connections that transit the failed router. Recovery usually applies a path that reroutes the affected traffic over the spare capacity of the unaffected lightpaths on each link. However, the spare capacity on some links is sometimes insufficient and must be spectrally expanded, and when expansion is impossible a new lightpath must be established. Both processes normally lead to a large number of lightpath reconfigurations when applied across different unaffected lightpaths. This study therefore proposes an adaptive routing strategy that generates the best path by maximizing the reuse of unaffected lightpaths during reconfiguration and minimizing the free spectrum added during expansion. A reactive defragmentation strategy is applied when spectrum expansion is blocked by the neighboring spectrum. The proposed strategy, called lightpath reconfiguration and spectrum expansion with reactive defragmentation (LRSE+RD), was compared against the first shortest path (1SP) benchmark, which uses no reactive defragmentation. Simulations on two topologies under two traffic conditions show that LRSE+RD reduces the number of lightpath reconfigurations, the number of new lightpaths, and the additional power consumption, including the additional operational expense, compared with 1SP.
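    A minimal sketch of the adaptive-routing idea, under assumed data structures: weight each link so that paths reusing spare capacity on unaffected lightpaths are preferred, while links that would force spectrum expansion (or a brand-new lightpath) are penalized, steering recovery toward minimal reconfiguration. The edge attribute spare_slots and the cost constants are illustrative, not the paper's exact metric.

    ```python
    import networkx as nx

    def recovery_path(G, src, dst, demand_slots):
        """Choose a recovery path after a router outage.

        Each edge carries 'spare_slots': free spectrum slots on the
        surviving lightpath over that link. Links with enough spare
        capacity are cheap to reuse; links needing expansion cost more.
        """
        def link_cost(u, v, attrs):
            spare = attrs.get("spare_slots", 0)
            if spare >= demand_slots:
                return 1                               # pure reuse
            return 1 + 10 * (demand_slots - spare)     # expansion penalty
        return nx.shortest_path(G, src, dst, weight=link_cost)

    # Toy example: the direct link lacks spare slots, so the recovery
    # route detours over links whose lightpaths can absorb the traffic.
    G = nx.Graph()
    G.add_edge("a", "b", spare_slots=1)
    G.add_edge("a", "c", spare_slots=4)
    G.add_edge("c", "b", spare_slots=4)
    print(recovery_path(G, "a", "b", demand_slots=3))  # ['a', 'c', 'b']
    ```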

    A Server Consolidation Solution

    Advances in server architecture have enabled corporations to strategically redesign their data centers and realign the system infrastructure with business needs. Physically and logically consolidating servers onto fewer and smaller hardware platforms can reduce data center overhead costs while improving quality of service. To take advantage of this architectural opportunity, a server consolidation project was proposed that coupled blade technology with server virtualization. Physical consolidation reduced the data center facility requirements, while server virtualization reduced the number of required hardware platforms. With the constant threat of outsourcing, coupled with the explosive growth of the organization, the IT managers were challenged to provide increased system services and functionality to a larger user community while maintaining the same head count. One means of reducing the overhead costs associated with the in-house data center was to reduce the required facility and hardware resources. Shrinking the data center footprint required less real estate, electricity, fire-suppression infrastructure, and HVAC capacity. In addition, since the numerous stand-alone servers were consolidated onto a standard platform, system administration became more responsive to business opportunities.