153 research outputs found

    In situ exhaust cloud measurements

    Airborne in situ exhaust cloud measurements were conducted to obtain definitions of the cloud particle size range, Cl2 content, and HCl partitioning. Particle size distribution data and Cl2 measurements were made during the May, August, and September 1977 Titan launches. The measurements of three basic effluents (HCl, NOx, and particles) are plotted against minutes after launch. The maximum observed HCl concentration is compared with the maximum Cl2 concentration, and the ratio of Cl2 to HCl is calculated.

    Leading edge curvature based on convective heating Patent

    Construction of leading edges of surfaces for aerial vehicles operating at speeds from subsonic to above transonic.

    Application of two-point difference schemes to the conservative Euler equations for one-dimensional flows

    An implicit finite difference method is presented for obtaining steady state solutions to the time dependent, conservative Euler equations for flows containing shocks. The method uses the two-point differencing approach of Keller, with dissipation added at supersonic points via the retarded density concept. Application of the method to the one-dimensional nozzle flow equations for various combinations of subsonic and supersonic boundary conditions shows the method to be very efficient. Residuals are typically reduced to machine zero in approximately 35 time steps for 50 mesh points. It is shown that the scheme offers certain advantages over the more widely used three-point schemes, especially in regard to the application of boundary conditions.
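    The retarded density concept mentioned above can be sketched as follows: at supersonic points, the density used in the flux is biased toward its upstream value, adding dissipation only where the flow is supersonic. This is a generic illustration, not code from the paper; the switching function nu = max(0, 1 - 1/M^2) is one common choice and is assumed here.

    ```python
    def retarded_density(rho, mach):
        """Bias density toward its upstream value at supersonic points.
        rho and mach are lists over the 1-D mesh, ordered in the flow
        direction. The switch nu vanishes for M < 1, so no dissipation
        is added at subsonic points."""
        rho_tilde = [rho[0]]  # first point has no upstream neighbor
        for i in range(1, len(rho)):
            nu = max(0.0, 1.0 - 1.0 / mach[i] ** 2)  # 0 when subsonic
            rho_tilde.append(rho[i] - nu * (rho[i] - rho[i - 1]))
        return rho_tilde
    ```

    For example, at a supersonic point with M = 2 the switch is nu = 0.75, so the retarded density lies three-quarters of the way back toward the upstream value, while subsonic points are left untouched.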

    Critical study of higher order numerical methods for solving the boundary-layer equations

    A fourth-order box method is presented for calculating numerical solutions to parabolic partial differential equations in two variables or to ordinary differential equations. The method, which is the natural extension of the second-order box scheme to fourth order, is demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two-point and three-point higher-order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three-point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than the other higher-order methods for both laminar and turbulent flows.
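    Richardson extrapolation, one of the higher-order methods compared above, can be illustrated on a simple quadrature problem rather than the boundary-layer equations themselves: combining a second-order result on step sizes h and h/2 as (4F(h/2) - F(h))/3 cancels the leading O(h^2) error term, yielding fourth-order accuracy. This is a generic sketch, not the solver from the paper.

    ```python
    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule: second-order accurate in h = (b-a)/n."""
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    def richardson(f, a, b, n):
        """Combine coarse (h) and fine (h/2) results to cancel the O(h^2)
        error term: (4*F(h/2) - F(h)) / 3 is fourth-order accurate."""
        return (4 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3

    # Integrate sin(x) over [0, pi]; the exact value is 2.
    coarse = trapezoid(math.sin, 0.0, math.pi, 8)   # error ~ h^2
    extrap = richardson(math.sin, 0.0, math.pi, 8)  # error ~ h^4
    ```

    The extrapolated value is several orders of magnitude more accurate than the plain second-order result at essentially the cost of one extra fine-grid solve, which is the trade-off the efficiency comparison above examines.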

    Optimizing a CFD Fortran code for GRID Computing

    Computations on clusters and computational GRIDS encounter similar situations in which the processors used have different speeds and different amounts of local RAM. For efficient computation with such processors, load balancing is necessary: faster processors are given more work, i.e. larger domains to compute, than the slower processors, so that all processors finish their work at the same time and faster processors avoid waiting for the slower ones to finish. In addition, the programming language must permit dynamic memory allocation so that the executable size is proportional to the size of the partitions. The present version of the AERO code uses the F77 programming language, which does not have dynamic memory allocation; the size of the executable is therefore the same for all processors, which leads to situations where the RAM of some processors is too small to run the executable. In this report, we extend the parallel F77 AERO code to F90, which has dynamic memory allocation. The F90 version of the AERO code is mesh independent, and because memory is allocated at runtime, and only for the code options actually used, the F90 executable is much smaller than the F77 version; as a consequence, many test cases that cannot be run on clusters and computational GRIDS with the F77 version can easily be run with the F90 version. Numerical results for a mesh containing 252K vertices, using 8 nina and 8 pf processors running on the MecaGRID under GLOBUS with heterogeneous partitions, show a speedup of 1.45 relative to the same run using homogeneous partitioning, which compares well with the theoretical speedup of 1.5. This validates the efficiency of the F90 version of the AERO code.
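    The origin of the theoretical speedup of 1.5 can be sketched with a back-of-the-envelope model: with equal-size (homogeneous) partitions the run time is set by the slowest processor, while speed-proportional (heterogeneous) partitions let all processors finish together. The 2:1 nina/pf speed ratio below is an assumption consistent with the reported figure, not a value stated in the abstract.

    ```python
    def heterogeneous_speedup(n_fast, s_fast, n_slow, s_slow):
        """Estimated speedup of speed-proportional partitioning over
        equal-size partitioning, for a unit of total work shared by
        n_fast processors of speed s_fast and n_slow of speed s_slow."""
        n_procs = n_fast + n_slow
        # Homogeneous: every processor gets 1/n_procs of the work, so the
        # run time is the slow processors' time on that share.
        t_homogeneous = (1.0 / n_procs) / s_slow
        # Heterogeneous: work split in proportion to speed, so all
        # processors finish at the same time.
        t_heterogeneous = 1.0 / (n_fast * s_fast + n_slow * s_slow)
        return t_homogeneous / t_heterogeneous

    # 8 fast + 8 slow processors, assuming the fast ones are twice the speed
    speedup = heterogeneous_speedup(8, 2.0, 8, 1.0)  # ~ 1.5
    ```

    The measured 1.45 falls slightly below this idealized bound, as expected once communication and partition-boundary overheads are included.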

    Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
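    A standard way to quantify the effect of grid refinement is the observed order of accuracy, estimated from solutions on three systematically refined grids. This generic Richardson-based sketch is an illustration of that kind of accuracy standard, not code from either TLNS solver.

    ```python
    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of accuracy p from a scalar result computed on
        three grids with constant refinement ratio r, using the standard
        estimate p = ln((f_c - f_m) / (f_m - f_f)) / ln(r)."""
        return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

    # Manufactured data f = 1 + 0.1 * h**2 on h = 1, 1/2, 1/4 (r = 2):
    p = observed_order(1.1, 1.025, 1.00625, r=2)  # ~ 2.0, as constructed
    ```

    If the observed p matches the scheme's formal order as the grids are refined, the solutions are in the asymptotic range and grid-refinement effects are behaving as expected.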

    Dynamic Load balancing and CFD Simulations on the MecaGRID and GRID5000

    CFD simulations on clusters and GRIDS having mixed processor speeds present several challenges for efficient load balancing. If the fast and slow processors are given the same amount of work, the faster processors will finish their computations first and wait for the slower processors to finish. To achieve load balancing, more work must be given to the faster processors so that all processors finish their computations at the same time (work is proportional to a processor's mesh partition size). Another complication is that, for current clusters and for GRIDS in the near future, users will not know in advance the mixture of fast and slow processors that will be assigned to their computation, and thus cannot partition the mesh in advance of the CFD simulation. This difficulty is compounded because the mesh partitioning step is usually performed on a workstation and is therefore not directly linked to the parallel CFD code. For mesh partitioners executing on parallel computers, the complication is that the mesh partitioning code and the CFD code are separate MPI codes designed to run independently of each other. As a result, the two codes cannot simply be run back-to-back, as each code may be assigned different mixtures of fast and slow processors, resulting in a partitioned mesh that is not optimal for the CFD run. In this study, in order to overcome the problems of computing with arbitrary mixtures of fast and slow processors, the mesh partitioner has been integrated into the CFD code, so that optimally sized partitions are created automatically for different mixtures of fast and slow processors. The efficiency of this approach is demonstrated for clusters, the MecaGRID, and GRID5000. Validation tests using GRID5000, the MecaGRID, and the INRIA nina-pf cluster produced speedups on the order of 1.32 to 1.52 relative to the same runs using homogeneous partitioning, which compares well with the theoretical speedup of 1.5.
    Finally, we use the dynamic computing capability of the new CFD code to compute a 20-million-vertex mesh using 256 processors at five different GRID5000 sites.
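    The speed-proportional partition sizing described above can be sketched as follows. This is an illustrative calculation, not the partitioner actually integrated into the CFD code; the rounding policy (handing leftover vertices to the fastest processors) is an assumption.

    ```python
    def partition_sizes(n_vertices, speeds):
        """Split a mesh of n_vertices among processors so that each
        partition size is proportional to its processor's speed, then
        distribute the rounding remainder to the fastest processors so
        the sizes sum exactly to n_vertices."""
        total_speed = sum(speeds)
        sizes = [int(n_vertices * s / total_speed) for s in speeds]
        leftover = n_vertices - sum(sizes)
        fastest_first = sorted(range(len(speeds)), key=lambda i: -speeds[i])
        for i in fastest_first[:leftover]:
            sizes[i] += 1
        return sizes

    # Two fast (2x) and two slow processors sharing a 1000-vertex mesh:
    sizes = partition_sizes(1000, [2.0, 2.0, 1.0, 1.0])
    ```

    With these sizes, each processor's work divided by its speed is (nearly) equal, so all four finish at about the same time, which is the load-balancing condition stated above.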

    Parallel computations of one-phase and two-phase flows using the MecaGRID

    The present report examines the application to Grid Computing of two fluid simulation codes. The first code, AERO-F, simulates external and internal non-reacting aerodynamic flows. The second code, AEDIF, simulates two-phase compressible flows. The work examines the execution of these codes on parallel processors using the Message Passing Interface (MPI) on the MecaGRID, which connects the clusters of INRIA Sophia Antipolis, Ecole des Mines de Paris at Sophia Antipolis, and the IUSTI in Marseille. Methods to optimize the MecaGRID applications, accounting for the different processor speeds and RAM sizes at the different sites, are presented. The Globus Alliance software is used for the Grid applications. Suggestions for future research are given.

    Distance Education in Tidewater Virginia

    The following research goals were established to guide this study: 1. Determine whether Tidewater, Virginia business leaders were currently utilizing distance education/distance training concepts in their organizations; 2. Determine whether distance education/distance training programs would be beneficial for Tidewater businesses; 3. Determine what types of distance education/distance training programs should be developed and offered to Tidewater businesses.