
    Vortices in Thin, Compressible, Unmagnetized Disks

    We consider the formation and evolution of vortices in a hydrodynamic shearing-sheet model. The evolution is computed numerically using a version of the ZEUS code. Consistent with earlier results, an injected vorticity field evolves into a set of long-lived vortices, each of which has a radial extent comparable to the local scale height. But we also find that the resulting velocity field has a positive shear stress. This effect appears only at high resolution. The transport, which decays with time as t^(-1/2), arises primarily because the vortices drive compressive motions. This result suggests a possible mechanism for angular momentum transport in low-ionization disks, with two important caveats: a mechanism must be found to inject vorticity into the disk, and the vortices must not decay rapidly due to three-dimensional instabilities.
    Comment: 8 pages, 10 figures (high-resolution figures available in the ApJ electronic edition)
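
    A minimal sketch of the stress diagnostic described above, assuming 2D snapshot arrays of density, velocity perturbations, and pressure (all variable names hypothetical, not the paper's own diagnostics); normalizing by the mean pressure gives a dimensionless, alpha-like number whose decay can then be checked against the reported t^(-1/2) trend:

        import numpy as np

        def shear_stress(rho, dvx, dvy, pressure):
            """Volume-averaged Reynolds stress, normalized by mean pressure.

            rho, dvx, dvy, pressure: 2D shearing-sheet snapshot arrays,
            with dvx, dvy the velocity perturbations about the mean
            orbital shear flow. A positive value corresponds to outward
            angular momentum transport.
            """
            return np.mean(rho * dvx * dvy) / np.mean(pressure)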

    Pterodactyl: The Development and Performance of Guidance Algorithms for a Mechanically Deployed Entry Vehicle

    Pterodactyl is a NASA Space Technology Mission Directorate (STMD) project focused on developing a design capability for optimal, scalable Guidance and Control (G&C) solutions that enable precision targeting for Deployable Entry Vehicles (DEVs). This feasibility study is unique in that it focuses on the rapid integration of targeting performance analysis with structural & packaging analysis, which is especially challenging for new vehicle and mission designs. This paper details the guidance development and trajectory design process for a lunar return mission, selected to stress the vehicle designs and encourage future scalability. Of the five G&C configurations considered, the Fully Numerical Predictor-Corrector Entry Guidance (FNPEG) was selected for configurations requiring bank angle guidance, and FNPEG with Uncoupled Range Control (URC) was developed for configurations requiring angle of attack and sideslip angle guidance. Successful G&C configurations are defined as those that can deliver payloads to the intended descent and landing initiation point while abiding by trajectory constraints for nominal and dispersed trajectories.
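
    The predictor-corrector idea behind FNPEG-style guidance can be illustrated with a toy sketch: on each guidance cycle, a predictor propagates the remaining trajectory for a trial bank-angle magnitude, and a corrector (here a secant iteration) adjusts that magnitude to null the predicted range error. Everything below, including the analytic range surrogate, is a hypothetical illustration rather than the project's implementation:

        import math

        def predicted_range_error(bank_deg, target_range_km):
            """Toy stand-in for the numeric predictor: return (predicted
            downrange - target) for a constant bank-angle magnitude.
            A real predictor integrates the 3-DOF entry dynamics; here a
            monotonic analytic surrogate is used purely for illustration
            (shallower bank -> more lift-up -> longer downrange)."""
            return 9000.0 * math.cos(math.radians(bank_deg)) + 1500.0 - target_range_km

        def correct_bank(target_range_km, b0=30.0, b1=70.0, tol_km=1.0, max_iter=20):
            """Secant-method corrector: find the bank-angle magnitude that
            nulls the predicted range error, as an FNPEG-like guidance
            step would on each guidance cycle."""
            f0 = predicted_range_error(b0, target_range_km)
            f1 = predicted_range_error(b1, target_range_km)
            for _ in range(max_iter):
                if abs(f1) < tol_km or f1 == f0:
                    break
                b0, b1, f0 = b1, b1 - f1 * (b1 - b0) / (f1 - f0), f1
                b1 = min(max(b1, 0.0), 180.0)   # keep the command physical
                f1 = predicted_range_error(b1, target_range_km)
            return b1

        print(correct_bank(6000.0))  # commanded bank-angle magnitude, degrees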

    MaxEnt power spectrum estimation using the Fourier transform for irregularly sampled data applied to a record of stellar luminosity

    The principle of maximum entropy is applied to the spectral analysis of a data signal with a general variance matrix and gaps in the record. The role of the entropic regularizer is to prevent one from overestimating structure in the spectrum when faced with imperfect data. Several arguments are presented suggesting that an arbitrary prefactor should not be introduced to the entropy term; no such factor is required when a continuous Poisson distribution is used for the amplitude coefficients. We compare the formalism for the case in which the variance of the data is known explicitly to that in which the variance is known only to lie in some finite range. The result of including the entropic measure factor is to suggest a spectrum, consistent with the variance of the data, that has less structure than that given by the forward transform. An application of the methodology to example data is demonstrated.
    Comment: 15 pages, 13 figures, 1 table, major revision, final version, accepted for publication in Astrophysics & Space Science
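
    A compact sketch of entropy-regularized spectral estimation for gappy, irregularly sampled data, in the spirit described above. The trial-frequency grid, the regularization weight lam, and the default level m are illustrative knobs (the paper argues the prefactor is fixed rather than arbitrary, but it is left explicit here); the model is a sum of sines and cosines fit by penalized least squares:

        import numpy as np
        from scipy.optimize import minimize

        def maxent_spectrum(t, d, sigma, freqs, lam=1.0, m=1e-3):
            """Entropy-regularized spectrum for irregular samples t with
            data d and per-point noise sigma. Minimizes chi^2/2 - lam*S
            over sine/cosine amplitudes, with the entropy
            S = sum(p - m - p*log(p/m)) acting on the power p_k at each
            trial frequency, pulling the spectrum toward the flat default
            level m unless the data demand structure."""
            C = np.cos(2 * np.pi * np.outer(t, freqs))   # design matrices
            S_ = np.sin(2 * np.pi * np.outer(t, freqs))

            def objective(x):
                a, b = np.split(x, 2)
                resid = (d - C @ a - S_ @ b) / sigma
                p = a**2 + b**2 + 1e-30                  # power per frequency
                entropy = np.sum(p - m - p * np.log(p / m))
                return 0.5 * np.sum(resid**2) - lam * entropy

            x0 = np.full(2 * len(freqs), 1e-2)
            res = minimize(objective, x0, method="L-BFGS-B")
            a, b = np.split(res.x, 2)
            return a**2 + b**2                           # estimated power spectrum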

    Kinetic and Structural Analysis of Substrate Specificity in Two Copper Amine Oxidases from Hansenula polymorpha

    The structural underpinnings of enzyme substrate specificity are investigated in a pair of copper amine oxidases (CAOs) from Hansenula polymorpha (HPAO-1 and HPAO-2). The X-ray crystal structure (to 2.0 Å resolution) and steady-state kinetic data of the second copper amine oxidase (HPAO-2) are presented for comparison to HPAO-1. Despite 34% sequence identity and superimposable active-site residues implicated in catalysis, the enzymes vary considerably in their substrate entry channel. The previously studied CAO, HPAO-1, has a narrow substrate channel. In contrast, HPAO-2 has a wide, funnel-shaped substrate channel that also contains a side chamber. In addition, a number of amino acid changes within the channels of HPAO-2 and HPAO-1 may sterically impact the ability of substrates to form covalent Schiff base catalytic intermediates and to initiate chemistry. These differences can partially explain the greatly different substrate specificities, as characterized by kcat/Km differences: in HPAO-1, the kcat/Km for methylamine is 330-fold greater than for benzylamine, whereas in HPAO-2 benzylamine is the better substrate by 750-fold. In HPAO-2, an inflated D(kcat/Km) for methylamine relative to D(kcat/Km) for benzylamine indicates that proton abstraction has been impeded more than substrate release. In HPAO-1, D(kcat/Km) changes little with the slow substrate, indicating a similar increase in the energy barriers that control both substrate binding and subsequent catalysis. In neither case is kcat/Km for the second substrate, O2, significantly altered. These results reinforce the modular nature of the active sites of CAOs and show that multiple factors contribute to substrate specificity and catalytic efficiency. In HPAO-1, the enzyme with the smaller substrate binding pocket, both initial substrate binding and proton loss are affected by an increase in substrate size, while in HPAO-2, the enzyme with the larger substrate binding pocket, the rate of proton loss is differentially affected when a phenyl substituent in the substrate is reduced to the size of a methyl group.
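
    The specificity and isotope-effect comparisons above reduce to ratios of kcat/Km values. A minimal sketch, with purely hypothetical rate constants rather than the measured values from this study:

        def fold_preference(kcat_km_a, kcat_km_b):
            """How many-fold substrate A is preferred over substrate B."""
            return kcat_km_a / kcat_km_b

        def isotope_effect(kcat_km_h, kcat_km_d):
            """D(kcat/Km): protiated over deuterated substrate values; an
            inflated ratio means proton abstraction limits the rate more
            than substrate binding/release does."""
            return kcat_km_h / kcat_km_d

        # hypothetical HPAO-1-like numbers: methylamine favored 330-fold
        print(fold_preference(3.3e5, 1.0e3))  # -> 330.0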

    Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result, the time step is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks, an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves the divergence-free constraint to machine precision. The main restriction is that the magnetic field must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
    Comment: 32 pages, 18 figures, accepted for publication in The Astrophysical Journal. Contains an additional appendix providing more details for some of the test problems (to be published as an addendum in the ApJS December 2008, v179n2 issue).
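
    The core of the orbital-advection idea is easy to sketch: each grid row is shifted by its mean orbital displacement per step, split into an exact integer-cell shift plus a fractional-cell interpolation, so the Courant limit applies only to the peculiar velocity. The sketch below covers the transport substep for a scalar field only; the staggered-mesh induction step that keeps div(B) = 0, which is this paper's main contribution, is not shown, and linear interpolation stands in for a higher-order reconstruction:

        import numpy as np

        def orbital_advect(q, vy_mean, dt, dy):
            """Shift each x-row of field q by its mean orbital displacement.

            q: array of shape (nx, ny); vy_mean[i]: mean orbital speed of
            row i. The displacement is split into an integer number of
            cells (handled exactly by a circular shift) plus a fractional
            cell handled by upwind linear interpolation."""
            out = np.empty_like(q)
            for i in range(q.shape[0]):
                shift = vy_mean[i] * dt / dy          # displacement in cells
                n = int(np.floor(shift))              # integer part: exact shift
                f = shift - n                         # fractional part in [0, 1)
                row = np.roll(q[i], n)
                # linear interpolation for the fractional remainder
                out[i] = (1.0 - f) * row + f * np.roll(row, 1)
            return out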

    STATUS OF SOIL ELECTRICAL CONDUCTIVITY STUDIES BY CENTRAL STATE RESEARCHERS

    Practical tools are needed to identify and advance sustainable management practices that optimize economic return, conserve soil, and minimize negative off-site environmental effects. The objective of this article is to review current research in non-saline soils of the central U.S. that considers bulk soil electrical conductivity (ECa) as an assessment tool for: (1) tracking N dynamics, (2) identifying management zones, (3) monitoring soil quality trends, and (4) designing and evaluating field-scale experiments. The interpretation and utility of ECa are highly location and soil specific; the soil properties contributing to measured ECa must be clearly understood. In soils where ECa is driven by NO3-N, ECa has been used to track spatial and temporal variations in crop-available N (manure, compost, commercial fertilizer, and cover crop treatments) and to rapidly assess N mineralization early in the growing season to calculate fertilizer rates for site-specific management (SSM). Selection of appropriate ECa sensors (direct contact, electromagnetic induction, or time domain reflectometry) may improve sensitivity to N fluctuations at specific soil depths. In a dryland cropping system where clay content dominates measured ECa, ECa-based management zones delineated soil productivity characteristics and crop yields. These results provide an effective framework for SSM, for monitoring management-induced trends in soil quality, and for designing and statistically evaluating field-scale experiments. Use of ECa may foster a large-scale systems approach to research that encourages farmer involvement. Additional research is needed to investigate the interactive effects of soil, weather, and management on ECa as an assessment tool, and the geographic extent to which specific applications of this technology can be applied.
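
    As an illustration of the management-zone use case, ECa readings can be clustered into a small number of zones. The sketch below assumes a gridded ECa map and uses plain k-means with three zones; both choices are hypothetical, and as the review stresses, the agronomic meaning of the zones remains site-specific (e.g., clay- vs. nitrate-driven ECa):

        import numpy as np
        from sklearn.cluster import KMeans

        def eca_zones(eca_map, n_zones=3, random_state=0):
            """Delineate management zones by clustering bulk soil
            electrical conductivity readings. eca_map is a 2D grid of
            ECa values; returns an integer zone label per grid cell."""
            x = eca_map.reshape(-1, 1)
            labels = KMeans(n_clusters=n_zones, n_init=10,
                            random_state=random_state).fit_predict(x)
            return labels.reshape(eca_map.shape)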

    GENERALIZED LINEAR MIXED MODEL ESTIMATION USING PROC GLIMMIX: RESULTS FROM SIMULATIONS WHEN THE DATA AND MODEL MATCH, AND WHEN THE MODEL IS MISSPECIFIED

    A simulation study was conducted to determine how well SAS® PROC GLIMMIX (SAS Institute, Cary, NC), statistical software for fitting generalized linear mixed models (GLMMs), performed for a simple GLMM using its default settings, as a naïve user would do. Data were generated from a wide variety of distributions with the same sets of linear predictors, and under several conditions. The data sets were then analyzed using the correct model (the generating and estimating models were the same) and, subsequently, by misspecifying the estimating model, all using default settings. The data-generation model was a randomized complete block design in which the model parameters and sample sizes were adjusted to yield 80% power for the F-test on treatment means, given a 30-block experiment with block-by-treatment interaction and additional treatment replications within each block. Convergence rates were low for the exponential and Poisson distributions, even when the generating and estimating models matched. The normal and lognormal distributions converged 100% of the time; convergence rates for other distributions varied. As expected, reducing the number of blocks from 30 to five and increasing replications within blocks to keep total N the same reduced power to 40% or less. Except for the exponential distribution, estimates of treatment means and variance parameters were accurate, with only slight biases. Misspecifying the estimating model by omitting the block-by-treatment random effect made F-tests too liberal. Since omitting that term from the model, effectively ignoring a process involved in generating the data, produces symptoms of over-dispersion, several potential remedies were investigated. For all distributions, the historically recommended variance-stabilizing transformation was applied, and the transformed data were then fit using a linear mixed model. For one-parameter members of the exponential family, an over-dispersion parameter was included in the estimating model. The negative binomial distribution was also examined as the estimating-model distribution. None of these remedial steps corrected the over-dispersion problem created by misspecifying the linear predictor, although using a variance-stabilizing transformation did improve convergence rates for most distributions investigated.
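
    A Python analogue of the misspecification experiment (the study itself used SAS PROC GLIMMIX): simulate a Poisson randomized-complete-block data set with block and block-by-treatment random effects, then fit a GLM that omits the block-by-treatment term. All variance components and sizes below are illustrative; a Pearson chi-square per degree of freedom well above 1 is the over-dispersion symptom described above:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_blocks, n_trt, n_rep = 30, 2, 2
        blocks = np.repeat(np.arange(n_blocks), n_trt * n_rep)
        trts = np.tile(np.repeat(np.arange(n_trt), n_rep), n_blocks)

        # linear predictor with block and block-by-treatment random effects
        b = rng.normal(0, 0.5, n_blocks)                 # block effects
        bt = rng.normal(0, 0.5, (n_blocks, n_trt))       # block x treatment
        eta = 1.0 + 0.3 * trts + b[blocks] + bt[blocks, trts]
        y = rng.poisson(np.exp(eta))

        # misspecified fit: fixed block + treatment only, omitting the
        # block-by-treatment term (all variance values are illustrative)
        X = np.column_stack([np.ones_like(trts), trts,
                             np.eye(n_blocks)[blocks][:, 1:]])
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(fit.pearson_chi2 / fit.df_resid)  # >> 1 signals over-dispersion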

    Primary care consultations and costs among HIV-positive individuals in UK primary care 1995-2005: a cohort study

    Objectives: To investigate the role of primary care in the management of HIV and to estimate primary care-associated costs at a time of rising prevalence. Methods: Retrospective cohort study between 1995 and 2005, using data from general practices contributing to the UK General Practice Research Database. Patterns of consultation and morbidity and associated consultation costs were analysed among all practice-registered patients for whom HIV-positive status was recorded in the general practice record. Results: 348 practices yielded 5504 person-years (py) of follow-up for known HIV-positive patients, who consult in general practice frequently (4.2 consultations/py by men, 5.2 consultations/py by women, in 2005) for a range of conditions. Consultation rates declined in the late 1990s from 5.0 and 7.3 consultations/py in 1995 in men and women, respectively, converging to rates similar to those of the wider population. Costs of consultation (general practitioner and nurse, combined) reflect these changes, at £100.27 for male patients and £117.08 for female patients in 2005. Approximately one in six medications prescribed in primary care for HIV-positive individuals has the potential for major interaction with antiretroviral medications. Conclusion: HIV-positive individuals known in general practice now consult on a similar scale to the wider population. Further research should explore how primary care can best contribute to improving the health outcomes of this group with chronic illness. Their substantial use of primary care suggests there may be potential to develop effective integrated care pathways.
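
    The cost figures above are, in essence, consultation rates multiplied by per-consultation unit costs and summed over staff type. A minimal sketch with hypothetical unit costs and rate splits (this abstract does not give the study's actual unit-cost inputs):

        def annual_cost(gp_rate, nurse_rate, gp_unit=24.0, nurse_unit=10.0):
            """GBP per person-year from GP and nurse consultation rates
            (consultations/person-year) and hypothetical per-consultation
            unit costs."""
            return gp_rate * gp_unit + nurse_rate * nurse_unit

        print(annual_cost(3.5, 0.7))  # e.g. ~ GBP 91 per person-year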

    A Titanium Nitride Absorber for Controlling Optical Crosstalk in Horn-Coupled Aluminum LEKID Arrays for Millimeter Wavelengths

    We discuss the design and measured performance of a titanium nitride (TiN) mesh absorber we are developing for controlling optical crosstalk in horn-coupled lumped-element kinetic inductance detector (LEKID) arrays for millimeter wavelengths. This absorber was added to the fused-silica anti-reflection coating attached to previously characterized, 20-element prototype arrays of LEKIDs fabricated from thin-film aluminum on silicon substrates. To test the TiN crosstalk absorber, we compared the measured response and noise properties of LEKID arrays with and without the TiN mesh. For this test, the LEKIDs were illuminated with an adjustable, incoherent electronic millimeter-wave source. Our measurements show that the optical crosstalk in the LEKID array with the TiN absorber is reduced by 66% on average, so the approach is effective and a viable candidate for future kilo-pixel arrays.
    Comment: 7 pages, 5 figures, accepted for publication in the Journal of Low Temperature Physics
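
    A sketch of how an average crosstalk figure of this kind can be computed from illumination data, assuming a response matrix in which entry (i, j) is detector j's response when only element i is illuminated (this measurement protocol is a hypothetical simplification, not the paper's procedure):

        import numpy as np

        def mean_crosstalk(response):
            """Average off-diagonal response relative to the on-diagonal
            (illuminated) response, over all array elements."""
            r = np.asarray(response, dtype=float)
            diag = np.diag(r)
            off = (r.sum(axis=1) - diag) / (r.shape[1] - 1)
            return np.mean(off / diag)

        # fractional reduction as quoted in the abstract:
        # 1 - mean_crosstalk(with_absorber) / mean_crosstalk(without_absorber)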