
    User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    This User's Manual contains a complete description of the computer codes known as the Axisymmetric Diffuser Duct (ADD) code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.

    Langley Mach 4 scramjet test facility

    An engine test facility was constructed at the NASA Langley Research Center in support of a supersonic combustion ramjet (scramjet) technology development program. Hydrogen combustion in air with oxygen replenishment provides simulated air at Mach 4 flight velocity, pressure, and true total temperature for an altitude range from 57,000 to 86,000 feet. A facility nozzle with a 13-inch-square exit produces a Mach 3.5 free-jet flow for engine propulsion tests. The facility is described, and calibration results are presented which demonstrate the suitability of the test flow for conducting scramjet engine research.

    Performance of Landfills Under Seismic Loading

    The record of performance of landfills in earthquakes is excellent. However, the advent of geosynthetic liner and cover systems has increased the susceptibility of modern landfills to seismically induced instability and deformations. Analyses used to assess the performance of landfills in earthquakes include site response, limit-equilibrium stability, and Newmark deformation analyses. Well-documented case histories of the behavior of landfills subject to seismic loading are necessary to improve knowledge of the parameters required for these analyses and thereby enhance the reliability of seismic performance evaluations for landfills.
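    The Newmark deformation analysis mentioned above can be sketched as a rigid sliding-block calculation: permanent displacement accumulates whenever ground acceleration exceeds a yield acceleration. The following is a minimal illustrative sketch, not the authors' procedure; the input motion is a synthetic pulse and the yield acceleration is an invented placeholder value.

    ```python
    import numpy as np

    def newmark_displacement(accel, dt, ky):
        """Permanent displacement (m) from a Newmark sliding-block analysis.

        accel : ground acceleration time history (m/s^2)
        dt    : time step (s)
        ky    : yield acceleration (m/s^2)
        """
        v = 0.0   # relative sliding velocity of the block
        d = 0.0   # accumulated permanent displacement
        for a in accel:
            if v > 0.0 or a > ky:
                v += (a - ky) * dt    # block slides, decelerated by ky
                v = max(v, 0.0)       # sliding stops when velocity returns to zero
                d += v * dt
        return d

    # Synthetic decaying sine pulse standing in for a real ground-motion record.
    dt = 0.01
    t = np.arange(0.0, 4.0, dt)
    accel = 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.5 * t)
    disp = newmark_displacement(accel, dt, ky=1.0)
    ```

    A real evaluation would use recorded or site-response-derived acceleration histories and a yield acceleration from the limit-equilibrium stability analysis of the liner interface.
    
    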

    User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 3: ADD code coordinate generator

    This User's Manual contains a complete description of the computer codes known as the Axisymmetric Diffuser Duct (ADD) code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.

    A Constrained Path Quantum Monte Carlo Method for Fermion Ground States

    We propose a new quantum Monte Carlo algorithm to compute fermion ground-state properties. The ground state is projected from an initial wavefunction by a branching random walk in an over-complete basis space of Slater determinants. By constraining the determinants according to a trial wavefunction |\Psi_T\rangle, we remove the exponential decay of the signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |\Psi_T\rangle is exact. We report results on the two-dimensional Hubbard model up to size 16×16, for various electron fillings and interaction strengths.
    Comment: uuencoded compressed PostScript file; 5 pages with 1 figure; accepted by PRL
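    The projection-by-branching-random-walk idea can be illustrated on a much smaller problem than the Slater-determinant spaces used in the paper. The toy below, with an invented 3-state Hamiltonian and a uniform trial state, propagates walkers with the short-time projector (1 − dt·H) under importance sampling, clipping the sampled propagator at zero, which is the path constraint (inactive here, since this toy has no sign problem). It is a sketch of the mechanism only, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy symmetric Hamiltonian; exact ground state via diagonalization for comparison.
    H = np.array([[ 0.0, -1.0,  0.0],
                  [-1.0,  0.5, -1.0],
                  [ 0.0, -1.0,  1.0]])
    exact = np.linalg.eigh(H)[1][:, 0]

    psi_T = np.ones(3) / np.sqrt(3.0)     # crude trial wavefunction
    dt = 0.05
    P = np.eye(3) - dt * H                # short-time projector e^{-dt H} ~ 1 - dt H

    # Importance-sampled propagator; clipping at zero enforces the constraint.
    Ptil = np.clip(P * psi_T[:, None] / psi_T[None, :], 0.0, None)
    norms = Ptil.sum(axis=0)              # per-state branching weight factors
    cum = np.cumsum(Ptil / norms, axis=0) # column-wise sampling CDFs

    n_walkers = 2000
    states = rng.integers(0, 3, size=n_walkers)
    weights = np.ones(n_walkers)

    for _ in range(400):                  # total imaginary time 20
        weights *= norms[states]
        weights /= weights.mean()         # population control
        u = rng.random(n_walkers)
        states = (cum[:, states] >= u).argmax(axis=0)

    # Weighted walker distribution ~ psi_T * psi_0; divide out psi_T to recover psi_0.
    hist = np.bincount(states, weights=weights, minlength=3) / psi_T
    est = hist / np.linalg.norm(hist)
    ```

    For this sign-problem-free toy the estimated vector agrees closely with the exact ground state; the paper's method performs the analogous constrained walk over Slater determinants, where the clipping step is what removes the sign problem.
    
    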

    Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration

    Radio metric data from the Pioneer 10/11, Galileo, and Ulysses spacecraft indicate an apparent anomalous, constant acceleration acting on the spacecraft with a magnitude of ~8.5×10⁻⁸ cm/s², directed towards the Sun. Two independent codes and physical strategies have been used to analyze the data. A number of potential causes have been ruled out. We discuss future kinematic tests and possible origins of the signal.
    Comment: RevTeX, 4 pages and 1 figure; minor changes for publication

    The Near-Infrared and Optical Spectra of Methane Dwarfs and Brown Dwarfs

    We identify the pressure-broadened red wings of the saturated potassium resonance lines at 7700 Å as the source of anomalous absorption seen in the near-infrared spectra of Gliese 229B and, by extension, of methane dwarfs in general. This conclusion is supported by the recent work of Tsuji et al. 1999, though unlike them we find that dust need not be invoked to explain the spectra of methane dwarfs shortward of 1 micron. We find that a combination of enhanced alkali abundances due to rainout and a more realistic non-Lorentzian theory of resonant line shapes may be all that is needed to properly account for these spectra from 0.5 to 1.0 microns. The WFPC2 I-band measurement of Gliese 229B is also consistent with this theory. Furthermore, a combination of the blue wings of this K I resonance doublet, the red wings of the Na D lines at 5890 Å, and, perhaps, the Li I line at 6708 Å can explain in a natural way the observed WFPC2 R-band flux of Gliese 229B. Hence, we conclude that the neutral alkali metals play a central role in the near-infrared and optical spectra of methane dwarfs and that their lines have the potential to provide crucial diagnostics of brown dwarfs. We speculate on the systematics of the near-infrared and optical spectra of methane dwarfs, for a given mass and composition, that stem from the progressive burial, with decreasing T_eff, of the alkali metal atoms to larger pressures and depths.
    Comment: Revised and accepted to Ap.J. volume 531, March 1, 2000; also available at http://jupiter.as.arizona.edu/~burrows/papers/BMS.p

    Fast Ensemble Smoothing

    Smoothing is essential to many oceanographic, meteorological, and hydrological applications. The fixed-interval smoothing problem updates all desired states within a time interval using all available observations. The fixed-lag smoothing problem updates only a fixed number of states prior to the observation at the current time. Fixed-lag smoothing is, in general, thought to be computationally faster than fixed-interval smoothing and can be an appropriate approximation for long-interval smoothing problems. In this paper, we use an ensemble-based approach to fixed-interval and fixed-lag smoothing and synthesize two algorithms. The first algorithm produces a linear-time solution to the fixed-interval smoothing problem with a fixed factor, and the second produces a fixed-lag solution whose cost is independent of the lag length. Identical-twin experiments conducted with the Lorenz-95 model show that, for lag lengths approximately equal to the error-doubling time or for long intervals, the proposed methods can provide significant computational savings. These results suggest that ensemble methods yield both fixed-interval and fixed-lag smoothing solutions that cost little additional effort over filtering and model propagation, in the sense that in practical ensemble applications the additional increment is a small fraction of either filtering or model propagation costs. We also show that fixed-interval smoothing can perform as fast as fixed-lag smoothing and may be advantageous when memory is not an issue.
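    The fixed-lag idea, updating a sliding window of past states with each new observation via ensemble cross-covariances, can be sketched on a scalar AR(1) model rather than the Lorenz-95 system used in the paper. This is a generic ensemble Kalman smoother with perturbed observations, not the paper's specific algorithms; all model parameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    phi, q, r = 0.95, 0.5, 1.0        # AR(1) coefficient, model and obs noise std
    T, N, L = 300, 200, 5             # steps, ensemble size, smoother lag

    # Synthetic truth and observations (identical-twin setup).
    truth = np.zeros(T); obs = np.zeros(T); x = 0.0
    for t in range(T):
        x = phi * x + q * rng.standard_normal()
        truth[t] = x
        obs[t] = x + r * rng.standard_normal()

    ens = rng.standard_normal(N)      # initial ensemble
    window = []                       # analysis ensembles for the last L times
    filt = np.zeros(T); smooth = np.zeros(T)

    for t in range(T):
        ens = phi * ens + q * rng.standard_normal(N)     # forecast step
        yp = obs[t] + r * rng.standard_normal(N)         # perturbed observations
        Pf = np.var(ens, ddof=1)
        innov = yp - ens                                 # ensemble innovations
        # Fixed-lag step: update each lagged state via its cross-covariance
        # with the current forecast ensemble.
        for k, e in enumerate(window):
            Cxy = np.cov(e, ens)[0, 1]
            window[k] = e + (Cxy / (Pf + r**2)) * innov
        ens = ens + (Pf / (Pf + r**2)) * innov           # EnKF analysis
        filt[t] = ens.mean()
        window.append(ens.copy())
        if len(window) > L:
            smooth[t - L] = window.pop(0).mean()         # state leaves the lag window

    for k, e in enumerate(window):                       # flush the final window
        smooth[T - len(window) + k] = e.mean()

    rmse_filter = float(np.sqrt(np.mean((filt - truth) ** 2)))
    rmse_smooth = float(np.sqrt(np.mean((smooth - truth) ** 2)))
    ```

    Because each smoothed state sees up to L future observations, its RMSE comes out below the filter's; the per-step cost added over filtering is the small cross-covariance loop, which is the kind of modest increment the abstract describes.
    
    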

    Measuring Accuracy of Automated Parsing and Categorization Tools and Processes in Digital Investigations

    This work presents a method for measuring the accuracy of evidential artifact extraction and categorization tasks in digital forensic investigations. Instead of focusing on the measurement of accuracy and errors in the functions of digital forensic tools, this work proposes the application of information retrieval measurement techniques that allow the incorporation of errors introduced by tools and analysis processes. The method uses a 'gold standard': the collection of evidential objects determined by a digital investigator from suspect data with an unknown ground truth. This work proposes that the accuracy of tools and investigation processes can be evaluated against the derived gold standard using common precision and recall values. Two example case studies are presented showing the measurement of the accuracy of automated analysis tools as compared to an in-depth analysis by an expert. It is shown that such measurement can allow investigators to determine changes in the accuracy of their processes over time, and to determine whether such a change is caused by their tools or their knowledge.
    Comment: 17 pages, 2 appendices, 1 figure; 5th International Conference on Digital Forensics and Cyber Crime; Digital Forensics and Cyber Crime, pp. 147-169, 201
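    The evaluation described above reduces to set-based precision and recall: the tool's extracted artifacts are scored against the investigator-derived gold standard. A minimal sketch of that comparison follows; the artifact names are invented placeholders, not examples from the paper's case studies.

    ```python
    def precision_recall(retrieved, gold):
        """Precision and recall of a retrieved artifact set against a gold standard."""
        retrieved, gold = set(retrieved), set(gold)
        tp = len(retrieved & gold)                        # correctly extracted artifacts
        precision = tp / len(retrieved) if retrieved else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

    # Hypothetical gold standard from an expert's in-depth analysis,
    # versus what an automated tool extracted from the same suspect data.
    gold = {"chat.db", "cache/img01.jpg", "usb-history.reg", "mail.pst"}
    tool = {"chat.db", "cache/img01.jpg", "tempfile.tmp"}

    p, r = precision_recall(tool, gold)
    # p = 2/3  (two of the three extracted items are evidential)
    # r = 1/2  (the tool found half of the gold-standard artifacts)
    ```

    Tracking these two values across cases over time is what lets an investigator see whether accuracy changes stem from the tools (precision drift) or from the analysis itself (recall drift).
    
    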