
    A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times but produces very limited and sporadic results. The full potential code provides results that compare favorably with the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very long turnaround time. The full potential code, TRANAIR, is used for the additional analyses because of the superior results it provides over empirical and semi-empirical methods and because of its automated grid generation. The TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

    Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the way theory relates to practice, and performance measurement. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for the space station, EOS, and the Great Observatories era.

    Feasibility study of an Integrated Program for Aerospace-vehicle Design (IPAD) system. Volume 2: Characterization of the IPAD system, phase 1, task 1

    The aircraft design process is discussed, along with the degree of participation of the various engineering disciplines considered in this feasibility study.

    3rd EGEE User Forum

    We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions coming from the Forum on that topic. The first chapter gathers all the plenary session keynote addresses, and following this there is a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with the flavour of Computer Science and Grid Technology. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of Science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.

    Transferring big data across the globe

    Transmitting data via the Internet is a routine and common task for users today. The amount of data transmitted by the average user has increased dramatically over the past few years. Transferring a gigabyte of data over an entire day was once normal; users now transmit multiple gigabytes in a single hour. With the influx of big data and massive scientific data sets measured in tens of petabytes, users have the propensity to transfer even larger amounts of data. When data sets of this magnitude are transferred over public or shared networks, the performance of all workloads in the system is impacted. This dissertation addresses the issues and challenges inherent in transferring big data over shared networks. A survey of current transfer techniques is provided, and these techniques are evaluated in simulated, experimental, and live environments. The main contribution of this dissertation is the development of Nice, a new model for big data transfers based on a store-and-forward methodology instead of an end-to-end approach. The Nice model ensures that big data transfers occur only when there is idle bandwidth that can be repurposed for these large transfers. The model improves overall performance and significantly reduces the transmission time for big data transfers, and it allows for efficient transfers regardless of time zone differences or variations in bandwidth between sender and receiver. Nice is the first model that addresses the challenges of transferring big data across the globe.
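
    A minimal sketch of the store-and-forward, idle-bandwidth idea described in this abstract; the relay names, threshold, backoff, and utilization probe below are illustrative assumptions, not the dissertation's implementation:

        import random
        import time

        # Hypothetical parameters for illustration only.
        IDLE_THRESHOLD = 0.8   # forward only while link utilization is below 80%
        BACKOFF_SECONDS = 0.1  # wait before re-probing a busy link

        class Relay:
            """A store-and-forward node: chunks are persisted here before the next hop sends them on."""
            def __init__(self, name):
                self.name = name
                self.stored = []

            def utilization(self):
                # Stand-in for a real bandwidth probe on the link toward this relay.
                return random.random()

            def store(self, chunk):
                self.stored.append(chunk)

        def nice_transfer(chunks, relays):
            """Move chunks hop by hop, sending on each hop only when idle bandwidth exists."""
            for hop in relays:
                for chunk in chunks:
                    while hop.utilization() > IDLE_THRESHOLD:
                        time.sleep(BACKOFF_SECONDS)  # back off; foreground traffic keeps priority
                    hop.store(chunk)
                chunks = hop.stored  # the relay now holds the data destined for the next hop

        path = [Relay("relay-us"), Relay("relay-eu"), Relay("relay-asia")]
        nice_transfer(["chunk-%d" % i for i in range(4)], path)
        print(path[-1].stored)  # all chunks have reached the final hop

    Unlike an end-to-end transfer, each intermediate node holds the data until its outgoing link is idle, which is how the model avoids competing with foreground traffic.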

    Grid technology in tissue-based diagnosis: fundamentals and potential developments

    Tissue-based diagnosis still remains the most reliable and specific diagnostic medical procedure. It is involved in all technological developments in medicine and biology and incorporates tools of quite different applications, ranging from molecular genetics to image acquisition and recognition algorithms (for image analysis), or from tissue culture to electronic communication services. Grid technology seems to possess all the features needed to efficiently target the specific constellation of an individual patient and obtain a detailed and accurate diagnosis by providing all relevant information and references. Grid technology can be briefly explained as so-called nodes that are linked together and share certain communication rules using open standards. The number of nodes can vary, as can their functionality, depending on the needs of a specific user at a given point in time. In the beginning of grid technology, the nodes were used as supercomputers, combining and enhancing computation power. At present, at least five different Grid functions can be distinguished, comprising 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. The general structures and functions of a Grid are described, and their potential implementation in virtual tissue-based diagnosis is analyzed. As a result, Grid technology offers a new dimension for accessing distributed information and knowledge and for improving the quality of tissue-based diagnosis, and therefore medical quality.
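
    A rough, illustrative sketch (not taken from the article) of how the five Grid functions listed above might be modelled: each node offers some subset of services and is linked to peer nodes.

        from dataclasses import dataclass, field
        from enum import Enum, auto

        class GridService(Enum):
            # The five Grid functions distinguished in the abstract above.
            COMPUTATION = auto()
            DATA = auto()
            APPLICATION = auto()
            INFORMATION = auto()
            KNOWLEDGE = auto()

        @dataclass
        class GridNode:
            # A node offers some subset of services; linked nodes form the grid.
            name: str
            services: set = field(default_factory=set)
            peers: list = field(default_factory=list)

        def nodes_offering(grid, service):
            # Find every node in the grid that can provide a given function.
            return [node for node in grid if service in node.services]

        # Hypothetical example: a pathology archive node linked to a compute node.
        archive = GridNode("pathology-archive", {GridService.DATA, GridService.INFORMATION})
        cluster = GridNode("compute-cluster", {GridService.COMPUTATION})
        archive.peers.append(cluster)
        print([n.name for n in nodes_offering([archive, cluster], GridService.COMPUTATION)])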

    Validation of AnnAGNPS at the field and farm-scale using an integrated AGNPS/GIS system

    Non-Point Source (NPS) pollution models are effective watershed-scale predictors of NPS loadings and useful evaluators of agricultural Best Management Practices (BMPs) and water quality Total Maximum Daily Loads (TMDLs). The work reported in this thesis examined two applications of the AGricultural Non-Point-Source (AGNPS) pollution model: 1) predicting surface runoff, nutrient loading, and sediment yield for an artificially delineated farm-scale watershed; and 2) evaluating the relative benefits of different BMPs in reducing sediment accumulation in a lake surrounded by agricultural land. A procedure for identifying, extracting, and processing critical area data using an ArcView Geographic Information System (GIS) was used in both applications. In the first, 30 years of synthetic climate data were used to generate event and source accounting predictions for a multi-use 600-acre research farm in South Louisiana. Runoff water quality predictions for hydrologic cells in standard and artificially delineated watershed simulations were compared. Estimates for sediment, N, and P loading in paired watershed cells agreed well, indicating that an integrated AGNPS/GIS system can reliably simulate runoff and NPS loadings for artificially delineated watersheds. Thus, successful implementation of AGNPS for an extracted small-scale region eliminated the processing of extraneous data, reducing simulation time and the work required. This approach could allow land operators to initiate and/or evaluate nutrient and site management plans. The second application used AGNPS to evaluate the benefits of different BMPs in reducing sedimentation in a small lake. Extensive land clearing in the 1970s for row crop production in Avoyelles Parish accelerated sediment deposition in local waterbodies. Data on the depth of the original bottom of an approximately 2 ha lake below recent (< 30 years) sediment (estimated from 137Cs, Pb, clay, and organic matter profiles), together with sediment bulk density and texture, were used to calibrate the AGNPS water quality model for representative hydrologic cells discharging into this lake. Upland erosion and sediment discharge rates predicted under alternative, conservation management practices indicate that sediment accumulation in this lake could have been substantially reduced.

    1992 NASA/ASEE Summer Faculty Fellowship Program

    For the 28th consecutive year, a NASA/ASEE Summer Faculty Fellowship Program was conducted at the Marshall Space Flight Center (MSFC). The program was conducted by the University of Alabama and MSFC during the period June 1, 1992 through August 7, 1992. Operated under the auspices of the American Society for Engineering Education, the MSFC program, as well as those at other centers, was sponsored by the Office of Educational Affairs, NASA Headquarters, Washington, DC. The basic objectives of the programs, which are in the 29th year of operation nationally, are (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA centers.