
    Systems study for an Integrated Digital-Electric Aircraft (IDEA)

    The results of the Integrated Digital/Electric Aircraft (IDEA) Study are presented. Airplanes with advanced systems were defined and evaluated as a means of identifying potential high-payoff research tasks. A baseline airplane, typical of a 1990s airplane with advanced active controls, propulsion, aerodynamics, and structures technology, was defined for comparison. Trade studies led to the definition of an IDEA airplane with extensive digital systems and electric secondary power distribution. This airplane showed an improvement of 3% in fuel use and 1.8% in DOC relative to the baseline configuration. An alternate configuration, an advanced-technology turboprop, was also evaluated and showed greater improvement supported by digital electric systems. Recommended research programs were defined for high-risk, high-payoff areas appropriate for implementation under NASA leadership.

    The aerodynamic challenges of SRB recovery

    Recovery and reuse of the Space Shuttle solid rocket boosters was baselined to support the primary goal of developing a low-cost space transportation system. The recovery system was required for the 170,000-lb boosters, the largest and heaviest objects yet to be retrieved from exoatmospheric conditions. State-of-the-art design procedures were ground-ruled and development testing was minimized to produce a system that was both reliable and cost-effective. The ability to utilize the inherent drag of the boosters during the initial phase of reentry was a key factor in minimizing parachute loads, size, and weight. A wind tunnel test program was devised to enable accurate prediction of booster aerodynamic characteristics. Concurrently, wind tunnel, rocket sled, and air drop tests were performed to develop and verify the performance of the parachute decelerator subsystem. Aerodynamic problems encountered during the overall recovery system development, and their respective solutions, are emphasized.

    vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers researchers' flexibility to study different machine learning algorithms, forcing them either to use a less desirable network architecture or to parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs so that both GPU and CPU memory can be utilized simultaneously for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in the memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the memory efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with an 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN.
    Comment: Published as a conference paper at the 49th IEEE/ACM International Symposium on Microarchitecture (MICRO-49), 2016.
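
    The offload-to-host idea the vDNN abstract describes can be illustrated in a few lines. The sketch below is not the paper's implementation (vDNN manages memory inside the framework's runtime and overlaps transfers with asynchronous prefetch); it approximates the same behavior with PyTorch's saved-tensor hooks, and the model, tensor sizes, and hook names are illustrative placeholders.

```python
# Minimal sketch of vDNN-style activation offloading, assuming a CUDA
# device is available. Not the paper's runtime: a real vDNN also
# prefetches asynchronously and synchronizes its copy streams.
import torch
import torch.nn as nn

def pack_to_cpu(t):
    # Forward pass: park each saved activation in pinned host memory so
    # GPU DRAM mostly holds tensors for the layer currently executing.
    packed = torch.empty(t.size(), dtype=t.dtype, pin_memory=True)
    packed.copy_(t, non_blocking=True)
    return packed

def unpack_to_gpu(packed):
    # Backward pass: bring the activation back on demand (vDNN instead
    # prefetches it ahead of the layer that will consume it).
    return packed.to("cuda", non_blocking=True)

model = nn.Sequential(  # stand-in for a deep CNN such as VGG-16
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 10),
).cuda()

x = torch.randn(256, 3, 32, 32, device="cuda")
with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_to_gpu):
    loss = model(x).sum()
loss.backward()  # activations stream back from host memory as needed
```

    Under these assumptions, peak GPU usage tracks the working set of the current layer rather than the whole network, which is the effect behind the 89-95% reductions the abstract reports.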