
    Influence of shear-thinning blood rheology on the laminar-turbulent transition over a backward facing step

    Cardiovascular diseases are the leading cause of death globally, and there is an unmet need for effective, safer blood-contacting devices, including valves, stents and artificial hearts. In these devices, recirculation regions promote thrombosis, triggering mechanical failure, neurological dysfunction and infarctions. Transitional flow over a backward-facing step is an idealised model of these flow conditions; the aim was to understand the impact of non-Newtonian blood rheology on modelling this flow. Flow simulations of shear-thinning and Newtonian fluids were compared for Reynolds numbers (Re) covering the full range of laminar, transitional and turbulent flow for the first time. Both unsteady Reynolds-Averaged Navier–Stokes (k–ω SST) and Smagorinsky Large Eddy Simulations (LES) were assessed; only LES correctly predicted trends in the recirculation zone length for all Re. The transition to turbulence was assessed by several criteria, revealing a complex picture. Instantaneous turbulent parameters, such as velocity, indicated delayed transition: Re = 1600 versus Re = 2000 for the Newtonian and shear-thinning transitions, respectively. Conversely, when using a Re defined on the spatially averaged viscosity, the shear-thinning model transitioned below the Newtonian one. However, recirculation zone length, a mean-flow parameter, did not indicate any difference in transitional Re between the two. This work shows that a shear-thinning rheology can explain the delayed transition for whole blood seen in published experimental data, but this delay is not the full story. The results show that, to accurately model transitional blood flow, and so enable the design of advanced cardiovascular devices, it is essential to incorporate the shear-thinning rheology and to explicitly model the turbulent eddies.
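    To make the rheology concrete, the sketch below evaluates a Carreau–Yasuda shear-thinning viscosity and forms a Reynolds number based on a spatially averaged viscosity, one of the transition criteria mentioned above. The model parameters, density, bulk velocity and step height are illustrative whole-blood-style values assumed for the example, not values taken from the paper.

```python
import numpy as np

def carreau_yasuda_viscosity(shear_rate, mu_inf=3.5e-3, mu_0=5.6e-2,
                             lam=3.313, a=2.0, n=0.3568):
    """Shear-thinning viscosity (Pa.s) as a function of shear rate (1/s).

    Carreau-Yasuda form; the parameter values are illustrative whole-blood
    fits from the literature, not the paper's own."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * shear_rate)**a)**((n - 1.0) / a)

def reynolds_from_mean_viscosity(rho, u_bulk, step_height, shear_rates):
    """Re defined on a spatially averaged viscosity, in the spirit of the
    transition criterion discussed in the abstract."""
    mu_mean = np.mean(carreau_yasuda_viscosity(np.asarray(shear_rates)))
    return rho * u_bulk * step_height / mu_mean

# Example with nominal blood density, bulk velocity, step height,
# and a log-spaced sample of shear rates standing in for the flow field.
shear_samples = np.logspace(-1, 3, 200)   # 0.1 to 1000 1/s
Re = reynolds_from_mean_viscosity(rho=1060.0, u_bulk=0.3,
                                  step_height=0.01, shear_rates=shear_samples)
print(f"Re based on mean viscosity: {Re:.0f}")
```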

    Leveraging U.S. Army Administrative Data for Individual and Team Performance

    The Army possesses vast amounts of administrative (archival) data about Soldiers. These data sources include screening tests, personnel action codes, training scores, global assessments, physical fitness scores, and more. However, the Army has yet to integrate these data to create a holistic operating picture. Our research focuses on repurposing Army administrative data to (1) operationalize social constructs of interest to the Army (e.g., Army Values, Warrior Ethos) and (2) model the predictive relationship between these constructs and individual (i.e., Soldier) and team (i.e., unit) performance and readiness. The goal of the project is to provide people analytics models to Army leadership for the purposes of optimizing human capital management decisions. Our talk will describe the theoretical underpinnings of our human performance model, drawing on disciplines such as social and industrial/organizational psychology, as well as our experience gaining access to and working with Army administrative data sources. Access to the archival administrative data is provided through the Army Analytics Group (AAG) Person-event Data Environment (PDE). The PDE is a business intelligence platform that has two central functions: (1) to provide a secure repository for data sources on U.S. military personnel; and (2) to provide a secure collaborative work environment where researchers can access unclassified but sensitive military data.

    Optimal Time-Series Selection of Quasars

    We present a novel method for the optimal selection of quasars using time-series observations in a single photometric bandpass. Utilizing the damped random walk model of Kelly et al. (2009), we parameterize the ensemble quasar structure function in Sloan Stripe 82 as a function of observed brightness. The ensemble model fit can then be evaluated rigorously for, and calibrated with, individual light curves with no parameter fitting. This yields a classification in two statistics --- one describing the fit confidence and one describing the probability of a false alarm --- which can be tuned, a priori, to achieve high quasar detection fractions (99% completeness with default cuts), given an acceptable rate of false alarms. We establish the typical rate of false alarms due to known variable stars as <3% (high purity). Applying the classification, we increase the sample of potential quasars relative to those known in Stripe 82 by as much as 29%, and by nearly a factor of two in the redshift range 2.5<z<3, where selection by color is extremely inefficient. This represents 1875 new quasars in a 290 deg^2 field. The observed rates of both quasars and stars agree well with the model predictions, with >99% of quasars exhibiting the expected variability profile. We discuss the utility of the method at high redshift and in the regime of noisy and sparse data. Our time-series selection complements well independent selection based on quasar colors and has strong potential for identifying high-redshift quasars for BAO and other cosmology studies in the LSST era. Comment: 28 pages, 8 figures, 3 tables; Accepted to A
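    As a rough illustration of the selection statistic, the sketch below evaluates the Gaussian likelihood of a single light curve under a fixed damped-random-walk model, the kind of no-fit evaluation the abstract describes. The DRW amplitude and timescale would come from the ensemble structure-function calibration; the values in the usage example are placeholders.

```python
import numpy as np

def drw_log_likelihood(times, mags, mag_errs, sigma, tau):
    """Gaussian log-likelihood of a light curve under a fixed (ensemble-
    calibrated) damped random walk: C_ij = sigma^2 * exp(-|t_i - t_j| / tau),
    plus photometric noise on the diagonal. No per-object fitting."""
    times, mags, mag_errs = map(np.asarray, (times, mags, mag_errs))
    C = sigma**2 * np.exp(-np.abs(times[:, None] - times[None, :]) / tau)
    C += np.diag(mag_errs**2)
    resid = mags - np.mean(mags)
    alpha = np.linalg.solve(C, resid)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (resid @ alpha + logdet + len(mags) * np.log(2 * np.pi))

# Placeholder ensemble parameters (sigma in mag, tau in days) and a toy light curve.
print(drw_log_likelihood(times=[0.0, 30.0, 400.0],
                         mags=[19.02, 19.10, 18.85],
                         mag_errs=[0.02, 0.02, 0.03],
                         sigma=0.2, tau=300.0))
```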

    Improved Standardization of Type II-P Supernovae: Application to an Expanded Sample

    In the epoch of precise and accurate cosmology, cross-confirmation using a variety of cosmographic methods is paramount to circumvent systematic uncertainties. Owing to progenitor histories and explosion physics differing from those of Type Ia supernovae (SNe Ia), Type II-plateau supernovae (SNe II-P) are unlikely to be affected by evolution in the same way. Based on a new analysis of 17 SNe II-P, and on an improved methodology, we find that SNe II-P are good standardizable candles, almost comparable to SNe Ia. We derive a tight Hubble diagram with a dispersion of 10% in distance, using the simple correlation between luminosity and photospheric velocity introduced by Hamuy & Pinto (2002). We show that the descendent method of Nugent et al. (2006) can be further simplified and that the correction for dust extinction has low statistical impact. We find that our SN sample favors, on average, a very steep dust law with total-to-selective extinction ratio R_V < 2. Such an extinction law has been recently inferred for many SNe Ia. Our results indicate that a distance measurement can be obtained with a single spectrum of a SN II-P during the plateau phase combined with sparse photometric measurements. Comment: ApJ accepted version. Minor change
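    A minimal sketch of the luminosity–velocity standardization idea: correct the plateau magnitude for extinction and photospheric velocity, then read off a distance modulus. The slope alpha and the absolute-magnitude zero point M0 below are placeholder constants chosen for illustration, not the calibration fitted in this work.

```python
import numpy as np

def snIIp_distance_modulus(m_plateau, extinction, v_phot_kms,
                           alpha=6.0, M0=-17.5):
    """Hamuy & Pinto-style standardization sketch: correct the plateau
    magnitude using the photospheric velocity, then form a distance
    modulus. alpha and M0 are placeholder calibration constants."""
    m_corr = m_plateau - extinction + alpha * np.log10(v_phot_kms / 5000.0)
    return m_corr - M0

# Hypothetical SN II-P: plateau mag 17.2, A = 0.3 mag, v_phot = 4200 km/s.
mu = snIIp_distance_modulus(17.2, 0.3, 4200.0)
d_mpc = 10 ** (mu / 5.0 - 5.0)
print(f"mu = {mu:.2f} mag, distance ~ {d_mpc:.0f} Mpc")
```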

    Reverberation Mapping of the Kepler-Field AGN KA1858+4850

    KA1858+4850 is a narrow-line Seyfert 1 galaxy at redshift 0.078 and is among the brightest active galaxies monitored by the Kepler mission. We have carried out a reverberation mapping campaign designed to measure the broad-line region size and estimate the mass of the black hole in this galaxy. We obtained 74 epochs of spectroscopic data using the Kast Spectrograph at the Lick 3-m telescope from February to November of 2012, and obtained complementary V-band images from five other ground-based telescopes. We measured the H-beta light curve lag with respect to the V-band continuum light curve using both cross-correlation techniques (CCF) and continuum light curve variability modeling with the JAVELIN method, and found rest-frame lags of lag_CCF = 13.53 (+2.03, -2.32) days and lag_JAVELIN = 13.15 (+1.08, -1.00) days. The H-beta root-mean-square line profile has a width of sigma_line = 770 +/- 49 km/s. Combining these two results and assuming a virial scale factor of f = 5.13, we obtained a virial estimate of M_BH = 8.06 (+1.59, -1.72) x 10^6 M_sun for the mass of the central black hole and an Eddington ratio of L/L_Edd ~ 0.2. We also obtained consistent but slightly shorter emission-line lags with respect to the Kepler light curve. Thanks to the Kepler mission, the light curve of KA1858+4850 has among the highest cadences and signal-to-noise ratios ever measured for an active galactic nucleus; thus, our black hole mass measurement will serve as a reference point for relations between black hole mass and continuum variability characteristics in active galactic nuclei.
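    The virial mass quoted above follows from M_BH = f c τ σ_line² / G. The short calculation below plugs in the CCF lag, line width and scale factor given in the abstract and recovers a mass of roughly 8 × 10^6 M_sun.

```python
# Virial black-hole mass from the lag and line width quoted in the abstract:
# M_BH = f * c * tau * sigma_line^2 / G.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

f = 5.13                       # virial scale factor (as in the abstract)
tau = 13.53 * 86400.0          # CCF rest-frame lag: 13.53 days in seconds
sigma_line = 770.0e3           # H-beta rms line width: 770 km/s in m/s

M_bh = f * c * tau * sigma_line**2 / G
print(f"M_BH ~ {M_bh / M_sun:.2e} M_sun")   # ~8e6 M_sun, matching the abstract
```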

    Evidence synthesis as the basis for decision analysis: a method of selecting the best agricultural practices for multiple ecosystem services

    Agricultural management practices have impacts not only on crops and livestock, but also on soil, water, wildlife, and ecosystem services. Agricultural research provides evidence about these impacts, but it is unclear how this evidence should be used to make decisions. Two methods are widely used in decision making: evidence synthesis and decision analysis. However, a system of evidence-based decision making that integrates these two methods has not yet been established. Moreover, the standard methods of evidence synthesis have a narrow focus (e.g., the effects of one management practice), but the standard methods of decision analysis have a wide focus (e.g., the comparative effectiveness of multiple management practices). Thus, there is a mismatch between the outputs from evidence synthesis and the inputs that are needed for decision analysis. We show how evidence for a wide range of agricultural practices can be reviewed and summarized simultaneously (“subject-wide evidence synthesis”), and how this evidence can be assessed by experts and used for decision making (“multiple-criteria decision analysis”). We show how these methods could be used by The Nature Conservancy (TNC) in California to select the best management practices for multiple ecosystem services in Mediterranean-type farmland and rangeland, based on a subject-wide evidence synthesis that was published by Conservation Evidence (www.conservationevidence.com). This method of “evidence-based decision analysis” could be used at different scales, from the local scale (farmers deciding which practices to adopt) to the national or international scale (policy makers deciding which practices to support through agricultural subsidies or other payments for ecosystem services). We discuss the strengths and weaknesses of this method, and we suggest some general principles for improving evidence synthesis as the basis for multi-criteria decision analysis.
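    As an illustration of the multi-criteria decision analysis step, the sketch below ranks a few hypothetical management practices by a weighted sum of criterion scores. The practices, scores and weights are invented placeholders, not values from the Conservation Evidence synthesis or TNC's assessment.

```python
import numpy as np

# Weighted-sum sketch of multi-criteria decision analysis:
# rows = candidate management practices, columns = ecosystem-service criteria.
practices = ["cover crops", "hedgerow planting", "reduced tillage"]
criteria = ["crop yield", "soil health", "wildlife", "water quality"]

scores = np.array([            # expert-assessed effectiveness, 0-100 (illustrative)
    [60, 80, 40, 70],
    [30, 50, 90, 60],
    [50, 70, 50, 40],
])
weights = np.array([0.4, 0.2, 0.2, 0.2])   # stakeholder priorities, sum to 1

overall = scores @ weights                 # one overall score per practice
for name, s in sorted(zip(practices, overall), key=lambda x: -x[1]):
    print(f"{name}: {s:.1f}")
```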

    Inherent limits of light-level geolocation may lead to over-interpretation

    In their 2015 Current Biology paper, Streby et al. [1] reported that Golden-winged Warblers (Vermivora chrysoptera), which had just migrated to their breeding location in eastern Tennessee, performed a facultative and up to “>1,500 km roundtrip” to the Gulf of Mexico to avoid a severe tornadic storm. From light-level geolocator data, wherein geographical locations are estimated via the timing of sunrise and sunset, Streby et al. [1] concluded that the warblers had evacuated their breeding area approximately 24 hours before the storm and returned about five days later. The authors presented this finding as evidence that migratory birds avoid severe storms by temporarily moving long distances. However, the tracking method employed by Streby et al. [1] is prone to considerable error and uncertainty. Here, we argue that this interpretation of the data oversteps the limits of the tracking technique used. By calculating the expected geographical error range for the tracked birds, we demonstrate that the hypothesized movements fell well within the geolocators’ inherent error range for this species and that such deviations in latitude occur frequently even if individuals remain stationary.
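    To see why light-level estimates are so error-prone, the sketch below inverts the idealized sunrise equation, cos(H0) = -tan(φ) tan(δ), to recover latitude from day length, and shows that a 10-minute error in the inferred day length near the equinox shifts the latitude estimate by several degrees, i.e. hundreds of kilometres. It ignores refraction, twilight thresholds and shading, so it understates the real uncertainty; the numbers are illustrative.

```python
import numpy as np

def latitude_from_daylength(day_length_hours, declination_deg):
    """Idealized inversion of the sunrise equation cos(H0) = -tan(phi)*tan(dec):
    recover latitude (deg) from observed day length and solar declination.
    Ignores refraction, twilight thresholds and shading; a simplification of
    what light-level geolocation actually infers."""
    H0 = np.radians(day_length_hours / 2.0 * 15.0)   # half day length -> hour angle (rad)
    dec = np.radians(declination_deg)
    return np.degrees(np.arctan(-np.cos(H0) / np.tan(dec)))

# Sensitivity example: shortly after the equinox (small declination), a small
# error in the inferred sunrise/sunset times moves the latitude estimate a lot.
dec = 5.0                                                      # degrees
base = latitude_from_daylength(12.5, dec)
perturbed = latitude_from_daylength(12.5 + 10.0 / 60.0, dec)   # +10 min of day length
print(f"latitude shift for a 10 min timing error: {perturbed - base:.1f} deg")
```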

    Communications Biophysics

    Contains research objectives, summary of research and reports on three research projects. National Institutes of Health (Grant 5 PO1 GM14940-04); National Institutes of Health (Grant 5 TOl GM01555-04); National Aeronautics and Space Administration (Grant NGL 22-009-304).