
    Math modeling for helicopter simulation of low speed, low altitude and steeply descending flight

    A math model was formulated to represent some of the aerodynamic effects of low speed, low altitude, and steeply descending flight. The formulation is intended to be consistent with the single-rotor real-time simulation model at NASA Ames Research Center. The effect of low speed, low altitude flight on main rotor downwash was obtained by assuming a uniform plus first harmonic inflow model and then using wind tunnel data, in the form of hub loads, to solve for the inflow coefficients. The result was a set of tables of steady and first harmonic inflow coefficients as functions of ground proximity, angle of attack, and airspeed. The aerodynamics associated with steeply descending flight in the vortex ring state were modeled by replacing the steady induced downwash derived from momentum theory with an experimentally derived value and by including a thrust-fluctuation effect due to vortex shedding. Tables of the induced downwash and the magnitude of the thrust fluctuations were created as functions of angle of attack and airspeed.
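    The uniform-plus-first-harmonic inflow structure described above can be sketched as follows; the function and argument names are illustrative, and in the simulation the three coefficients would come from table lookups in ground proximity, angle of attack, and airspeed rather than being passed in directly.

```python
import numpy as np

def harmonic_inflow(r_frac, psi, lam0, lam1c, lam1s):
    """Uniform-plus-first-harmonic induced inflow at a blade station.

    r_frac : radial station r/R (0..1)
    psi    : blade azimuth angle, radians
    lam0, lam1c, lam1s : steady and first-harmonic inflow coefficients,
        which in the model above are interpolated from tables indexed by
        ground proximity, angle of attack, and airspeed.
    """
    return lam0 + r_frac * (lam1c * np.cos(psi) + lam1s * np.sin(psi))
```

    At the hub (r_frac = 0) only the steady term survives; the harmonic terms grow linearly toward the blade tip.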

    Weak Lensing as a Calibrator of the Cluster Mass-Temperature Relation

    The abundance of clusters at the present epoch and weak gravitational lensing shear both constrain roughly the same combination of the power spectrum normalization sigma_8 and the matter energy density Omega_M. The cluster constraint further depends on the normalization of the mass-temperature relation. Combining the weak lensing and cluster abundance data can therefore be used to accurately calibrate the mass-temperature relation. We discuss this approach and illustrate it using data from recent surveys.
    Comment: Matches the version in ApJL. Equation 4 corrected. Improvements in the analysis move the cluster contours in Fig. 1 slightly upwards. No changes in the conclusion.
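    As a rough illustration of the calibration idea (not the paper's actual fitting), one can dial the mass-temperature normalization until the sigma_8 implied by the cluster abundance matches the weak lensing value; all numerical scalings below are placeholders, not the fitted relations.

```python
import numpy as np

def sigma8_from_clusters(T_star, omega_m):
    """sigma_8 implied by the cluster abundance for an assumed
    mass-temperature normalization T_star (keV).  The prefactor and
    exponents here are illustrative only."""
    return 0.5 * T_star**0.6 * (omega_m / 0.3) ** -0.5

def calibrate_Tstar(sigma8_lensing, omega_m,
                    grid=np.linspace(0.5, 3.0, 2501)):
    """Pick the normalization whose implied sigma_8 matches lensing."""
    resid = np.abs(sigma8_from_clusters(grid, omega_m) - sigma8_lensing)
    return grid[np.argmin(resid)]
```

    Because both probes constrain nearly the same sigma_8-Omega_M combination, the mismatch between them isolates the normalization.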

    An experimental and theoretical evaluation of increased thermal diffusivity phase change devices

    This study experimentally evaluated and mathematically modeled the performance of phase change thermal control devices containing high thermal conductivity metal matrices. Three aluminum honeycomb filters were evaluated at five different heat flux levels using n-octadecane as the test material. The system was mathematically modeled by approximating the partial differential equations with a three-dimensional implicit alternating direction technique. The mathematical model predicts the system quite well. All of the phase change times are predicted. The heating of the solid phase is predicted exactly, while there is some variation between theoretical and experimental results in the liquid phase. This variation in the liquid phase could be accounted for by the fact that there are some heat losses in the cell and there could be some convection in the experimental system.
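    A one-dimensional backward-Euler step illustrates the kind of implicit finite-difference update used in each sweep of an alternating-direction scheme; this sketch omits the phase-change (latent heat) terms and the other two spatial directions.

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt):
    """One backward-Euler step of 1-D heat conduction dT/dt = alpha d2T/dx2.

    A stand-in for a single directional sweep of the 3-D implicit
    alternating-direction technique; boundary nodes are held fixed
    (Dirichlet conditions).
    """
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n)                      # identity rows pin the boundaries
    for i in range(1, n - 1):          # tridiagonal interior rows
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, T)       # dense solve; fine for a sketch
```

    In practice each sweep would use a tridiagonal (Thomas) solver, which is what makes the alternating-direction method cheap in three dimensions.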

    Assessment of Neuropsychological Trajectories in Longitudinal Population-Based Studies of Children

    This paper provides a strategy for the assessment of brain function in longitudinal cohort studies of children. The proposed strategy invokes both domain-specific and omnibus intelligence test approaches. In order to minimise testing burden and practice effects, the cohort is divided into four groups with one-quarter tested at 6-monthly intervals in the 0–2-year age range (at ages 6 months, 1.0, 1.5 and 2.0 years) and at annual intervals from ages 3–20 (one-quarter of the children at age 3, another at age 4, etc.). This strategy allows investigation of cognitive development and of the relationship between environmental influences and development at each age. It also allows introduction of new domains of function when age-appropriate. As far as possible, tests are used that will provide a rich source of both longitudinal and cross-sectional data. The testing strategy allows the introduction of novel tests and new domains as well as piloting of tests when the test burden is relatively light. In addition to the recommended tests for each age and domain, alternative tests are described. Assessment methodology and knowledge about child cognitive development will change over the next 20 years, and strategies are suggested for altering the proposed test schedule as appropriate.

    The Growth in Size and Mass of Cluster Galaxies since z=2

    We study the formation and evolution of Brightest Cluster Galaxies (BCGs) starting from a z=2 population of quiescent ellipticals and following them to z=0. To this end, we use a suite of nine high-resolution dark-matter-only simulations of galaxy clusters in a ΛCDM universe. We develop a scheme in which simulation particles are weighted to generate realistic and dynamically stable stellar density profiles at z=2. Our initial conditions assign a stellar mass to every identified dark halo as expected from abundance matching, assuming there exists a one-to-one relation between the visible properties of galaxies and their host haloes. We set the sizes of the luminous components according to the observed relations for z~2 massive quiescent galaxies. We study the evolution of the mass-size relation, the fate of satellite galaxies, and the mass aggregation of the cluster central. From z=2, these galaxies grow on average in size by a factor of 5 to 10 and in mass by a factor of 2 to 3. The stellar mass growth rate of the simulated BCGs in our sample is a factor of 1.9 in the range 0.3<z<1.0, consistent with observations, and a factor of 1.5 in the range 0.0<z<0.3. Furthermore, the satellite galaxies evolve to the present-day mass-size relation by z=0. Assuming passively evolving stellar populations, we present surface brightness profiles for our cluster centrals which resemble those observed for the cDs in similar-mass clusters both at z=0 and at z=1. This demonstrates that the ΛCDM cosmology does indeed predict minor and major mergers to occur in galaxy clusters with the frequency and mass-ratio distribution required to explain the observed growth in size of passive galaxies since z=2. Our experiment shows that Brightest Cluster Galaxies can form through dissipationless mergers of quiescent massive z=2 galaxies, without substantial additional star formation.
    Comment: submitted to MNRAS, 10 pages, 8 figures, 2 tables.
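    The one-to-one halo-to-stellar-mass assignment can be illustrated with rank-order abundance matching; the catalogues below are placeholders, not the simulation data.

```python
import numpy as np

def abundance_match(halo_masses, stellar_masses):
    """Rank-order abundance matching: the i-th most massive halo receives
    the i-th largest stellar mass, enforcing a monotonic one-to-one
    relation between haloes and their luminous components.

    Both arrays are assumed to describe the same comoving volume and to
    have equal length (a simplification for this sketch).
    """
    order = np.argsort(halo_masses)[::-1]          # haloes, descending
    stars_sorted = np.sort(stellar_masses)[::-1]   # stellar masses, descending
    assigned = np.empty(len(stellar_masses), dtype=float)
    assigned[order] = stars_sorted                 # match by rank
    return assigned
```

    The monotonic matching is what the abstract's "one-to-one relation between the visible properties of galaxies and their host haloes" assumption amounts to in practice.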

    Is Cosmology Solved?

    We have fossil evidence from the thermal background radiation that our universe expanded from a considerably hotter, denser state. We have a well defined and testable description of the expansion, the relativistic Friedmann-Lemaître model. Its observational successes are impressive but I think hardly enough for a convincing scientific case. The lists of observational constraints and free hypotheses within the model have similar lengths. The scorecard on the search for concordant measures of the mass density parameter and the cosmological constant shows that the high density Einstein-de Sitter model is challenged, but that we cannot choose between low density models with and without a cosmological constant. That is, the relativistic model is not strongly overconstrained, the usual test of a mature theory. Work in progress will greatly improve the situation and may at last yield a compelling test. If so, and the relativistic model survives, it will close one line of research in cosmology: we will know the outlines of what happened as our universe expanded and cooled from high density. It will not end research: some of us will occupy ourselves with the details of how galaxies and other large-scale structures came to be the way they are, others with the issue of what our universe was doing before it was expanding. The former is being driven by rapid observational advances. The latter is being driven mainly by theory, but there are hints of observational guidance.
    Comment: 13 pages, 3 figures. To be published in PASP as part of the proceedings of the Smithsonian debate, Is Cosmology Solved?

    Minimally Entangled Typical Thermal State Algorithms

    We discuss a method based on sampling minimally entangled typical thermal states (METTS) that can simulate finite temperature quantum systems with a computational cost comparable to ground state DMRG. Detailed implementations of each step of the method are presented, along with efficient algorithms for working with matrix product states and matrix product operators. We furthermore explore how properties of METTS can reveal characteristic order and excitations of systems and discuss why METTS form an efficient basis for sampling. Finally, we explore the extent to which the average entanglement of a METTS ensemble is minimal.
    Comment: 18 pages, 14 figures.
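    A dense-matrix toy version of the METTS sampling loop (exact imaginary-time evolution standing in for the matrix-product-state machinery the paper actually develops) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def metts_estimate(H, A, beta, n_samples=2000):
    """Estimate the thermal average <A>_beta by METTS sampling.

    Loop: evolve a basis state |i> in imaginary time to the METTS
    |phi(i)> = e^{-beta H/2}|i> / norm, measure A, then collapse back
    onto a basis state with probability |<j|phi(i)>|^2.  Exact dense
    version; in DMRG practice the evolution acts on matrix product states.
    """
    w, V = np.linalg.eigh(H)
    expmh = (V * np.exp(-beta * w / 2)) @ V.conj().T   # e^{-beta H/2}
    dim = H.shape[0]
    i = rng.integers(dim)                    # random initial basis state
    total = 0.0
    for _ in range(n_samples):
        phi = expmh[:, i]                    # e^{-beta H/2} |i>
        phi = phi / np.linalg.norm(phi)      # the METTS |phi(i)>
        total += np.real(phi.conj() @ A @ phi)
        p = np.abs(phi) ** 2                 # collapse probabilities
        i = rng.choice(dim, p=p / p.sum())   # next basis state
    return total / n_samples
```

    The collapse step is what keeps each sampled state minimally entangled, which is why the method stays close to ground-state DMRG in cost.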

    A New Algorithm for Computing Statistics of Weak Lensing by Large-Scale Structure

    We describe an efficient algorithm for calculating the statistics of weak lensing by large-scale structure based on a tiled set of independent particle-mesh N-body simulations which telescope in resolution along the line of sight. This efficiency allows us to predict not only the mean properties of lensing observables such as the power spectrum, skewness and kurtosis of the convergence, but also their sampling errors for finite fields of view, which are themselves crucial for assessing the cosmological significance of observations. We find that the non-Gaussianity of the distribution substantially increases the sampling errors for the skewness and kurtosis in the several to tens of arcminutes regime, whereas those for the power spectrum are only fractionally increased even out to wavenumbers where shot noise from the intrinsic ellipticities of the galaxies will likely dominate the errors.
    Comment: 12 pages, 13 figures; minor changes reflect accepted version.
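    The Monte Carlo estimation of sampling errors over finite fields of view can be sketched as follows, with simple random maps standing in for convergence fields ray-traced through the tiled simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def field_stats(kappa):
    """Skewness and excess kurtosis of one convergence map."""
    x = kappa.ravel()
    x = x - x.mean()
    s2 = (x**2).mean()
    skew = (x**3).mean() / s2**1.5
    kurt = (x**4).mean() / s2**2 - 3.0
    return skew, kurt

def sampling_errors(make_map, n_fields=200):
    """Scatter of skewness/kurtosis over independent mock fields of view.

    make_map() returns one mock convergence map; here a placeholder for a
    map ray-traced through an independent realization of the simulations.
    """
    stats = np.array([field_stats(make_map()) for _ in range(n_fields)])
    return stats.std(axis=0)   # (sigma_skew, sigma_kurt)
```

    Comparing the scatter for Gaussian versus skewed mock maps reproduces the qualitative point of the abstract: non-Gaussian fields inflate the field-to-field errors on the higher moments.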