
    The 30/20 GHz flight experiment system, phase 2. Volume 2: Experiment system description

    A detailed technical description of the 30/20 GHz flight experiment system is presented. The overall communication system is described with performance analyses, communication operations, and experiment plans. Hardware descriptions of the payload are given, along with the tradeoff studies that led to the final design. The spacecraft bus which carries the payload is discussed, and its interface with the launch vehicle system is described. Finally, the hardware and operations of the terrestrial segment are presented.

    Framework for a space shuttle main engine health monitoring system

    A framework developed for a health management system (HMS) directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near-term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision-making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real-time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable, low-cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.
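    The abstract above mentions time-series fault detection with an extremely low false alarm rate. As a loose illustration of that idea (not the report's actual algorithms), a rolling-statistics redline check can flag a sensor sample that strays several standard deviations from its recent history; the function name, window size, and threshold below are all hypothetical:

```python
import statistics

def detect_faults(readings, window=20, k=4.0):
    """Flag indices where a reading deviates from the rolling mean of the
    previous `window` samples by more than k standard deviations.
    Hypothetical sketch of a time-series redline check, not the SSME HMS."""
    faults = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu = statistics.fmean(recent)
        sigma = statistics.pstdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            faults.append(i)
    return faults

# Nominal periodic signal with one injected spike at index 45
signal = [100.0 + 0.1 * (i % 5) for i in range(60)]
signal[45] = 140.0
print(detect_faults(signal))  # → [45]
```

    A real engine health monitor would combine several such detectors (the abstract also names nonlinear regression and clustering) in a hierarchical decision strategy rather than rely on a single threshold.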

    Impact of network interconnection in cloud computing environments for high-performance computing applications

    The availability of computational resources has changed significantly due to the use of the cloud computing paradigm. Aiming at potential advantages, such as cost savings through the pay-per-use method and scalable/elastic resource allocation, we have witnessed efforts to execute high-performance computing (HPC) applications in the cloud. Due to the distributed nature of these environments, performance is highly dependent on two primary components of the system: processing power and network interconnection. While allocating more powerful hardware theoretically increases performance, it also increases the allocation cost. Allocation exclusivity guarantees space for memory, storage, and CPU. This is not the case for the network interconnection, since several simultaneous instances (multi-tenant) share the same communication channel, making the network a bottleneck. Therefore, this dissertation aims to analyze the impact of network interconnection on the execution of workloads from the HPC domain. We carried out two different assessments. The first concentrates on different network interconnections (GbE and InfiniBand) in the Microsoft Azure public cloud and the costs related to their use. The second focuses on different network configurations using NIC aggregation methodologies in a private cloud-controlled environment. The results obtained showed that network interconnection is a crucial aspect and can significantly impact the performance of HPC applications executed in the cloud. In the Azure public cloud, the accelerated networking approach, which allows the instance to have a high-performance interconnection without additional charges, enables significant performance improvements for HPC applications with better cost efficiency. Finally, in the private cloud environment, the NIC aggregation approach outperformed the baseline in up to ≈98% of the executions with applications that make intensive use of the network. Also, the Balance Round-Robin aggregation mode performed better than the 802.3ad aggregation mode in the majority of the executions.
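    The NIC aggregation result above can be illustrated with a toy model (an assumption-laden sketch, not the dissertation's experiments) of why Balance Round-Robin can outperform 802.3ad for network-intensive workloads: balance-rr stripes the packets of every flow across all bonded links, while 802.3ad hashes each flow onto exactly one link, so a single flow can never exceed one link's rate:

```python
import math

def per_flow_bandwidth(n_flows, n_links, mode, link_bw=1.0):
    """Toy model (not a measurement) of per-flow bandwidth under two
    Linux bonding modes, assuming identical links and even sharing."""
    if mode == "balance-rr":
        # packets striped across all links: aggregate shared by all flows
        return n_links * link_bw / n_flows
    if mode == "802.3ad":
        # each flow pinned to one link by hash; assume an even spread,
        # so the busiest link carries ceil(n_flows / n_links) flows
        flows_on_busiest = math.ceil(n_flows / n_links)
        return link_bw / flows_on_busiest
    raise ValueError(f"unknown mode: {mode}")

# A single network-intensive flow over two bonded links:
print(per_flow_bandwidth(1, 2, "balance-rr"))  # → 2.0 (aggregate rate)
print(per_flow_bandwidth(1, 2, "802.3ad"))     # → 1.0 (one link's rate)
```

    In practice balance-rr can also reorder packets across links, which is one reason 802.3ad is often preferred for general traffic despite the single-flow cap modeled here.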

    Large Telescope Experiment Program (LTEP) Executive summary

    Feasibility study of a two-meter system for a large experimental telescope program.

    Advanced Simulation and Computing FY12-13 Implementation Plan, Volume 2, Revision 0.5


    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview

    Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles may be characterized by high computational throughput and input/output bandwidth; hard real-time response; high reliability and availability; and maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprising conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development. Given here is an introduction to the AFTA program, its objectives, and key elements of its technical approach. A format is designed for representing mission requirements in a manner suitable for first-order AFTA sizing and analysis, followed by a discussion of the current state of mission requirements acquisition for the targeted Army missions. An overview is given of AFTA's architectural theory of operation.

    Construction of Prototype Lightweight Mirrors

    This contract and the work described were in support of a Seven Segment Demonstrator (SSD) and a demonstration of a different technology for construction of lightweight mirrors. The objectives of the SSD were to demonstrate the functionality and performance of a seven-segment prototype array of hexagonal mirrors and supporting electromechanical components, which address design issues critical to space optics deployed in large space-based telescopes for astronomy and to optics used in space-based optical communications systems. The SSD was intended to demonstrate technologies which can support the following capabilities: transportation in dense packaging within existing launcher payload envelopes, followed by deployment on orbit to form a space telescope with a large aperture; provision of very large (less than 10 meters) primary reflectors of low mass and cost; formation of a segmented primary or quaternary mirror into a quasi-continuous surface, with individual subapertures phased so that near diffraction-limited imaging in the visible wavelength region is achieved; continuous compensation of the optical wavefront for perturbations caused by imperfections, natural disturbances, and equipment-induced vibrations and deflections, to provide near diffraction-limited imaging performance in the visible wavelength region; and demonstration of the feasibility of fabricating such systems with reduced mass and cost compared to past approaches. While the SSD could not be expected to satisfy all of the above capabilities, the intent was to start identifying and understanding new technologies that might be applicable to these goals.

    Orbit Transfer Rocket Engine Technology Program: Advanced engine study, task D.1/D.3

    Concepts for space maintainability of OTV engines were examined. An engine design was developed which was driven by space maintenance requirements and by a failure mode and effects (FME) analysis. Modularity within the engine was shown to offer cost benefits and improved space maintenance capabilities. Space-operable disconnects were conceptualized for both engine change-out and module replacement. Guided by the FME analysis, the modules were designed to contain the least reliable and most frequently replaced engine components. A preliminary space maintenance plan was developed around a controls and condition monitoring system using advanced sensors, controls, and condition monitoring concepts. A complete engine layout was prepared satisfying current vehicle requirements and utilizing projected advanced component technologies. A technology plan for developing the required technology was assembled.

    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization.
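    The load-balancing claim above can be sketched with a toy deterministic queuing model (illustrative only, not the dissertation's simulator): when every job targets one popular site the jobs queue behind each other, while spreading them round-robin across sites reduces the mean sojourn time (queue wait plus service time):

```python
def mean_sojourn_time(job_lengths, n_sites, load_balance):
    """Toy model of job sojourn time on a grid. Without balancing, all
    jobs go to one popular site; with VOC-style balancing they are
    assigned round-robin across sites. Hypothetical illustration only."""
    queues = [0.0] * n_sites      # time at which each site becomes free
    total = 0.0
    for i, length in enumerate(job_lengths):
        site = i % n_sites if load_balance else 0
        start = queues[site]       # wait until the chosen site is free
        queues[site] = start + length
        total += start + length    # sojourn = wait + service
    return total / len(job_lengths)

jobs = [1.0] * 8  # eight unit-length jobs
print(mean_sojourn_time(jobs, 4, load_balance=False))  # → 4.5
print(mean_sojourn_time(jobs, 4, load_balance=True))   # → 1.5
```

    The gap between the two figures is the queuing headroom that, per the abstract, can absorb the performance overhead introduced by virtualization.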