23 research outputs found

    Relation between hospital orthopaedic specialisation and outcomes in patients aged 65 and older: retrospective analysis of US Medicare data

    Objective: To explore the relation between hospital orthopaedic specialisation and postoperative outcomes after total hip or knee replacement surgery.

    Interatomic potentials for atomistic simulations of the Ti-Al system

    Semi-empirical interatomic potentials have been developed for Al, alpha-Ti, and gamma-TiAl within the embedded atom method (EAM) by fitting to a large database of experimental as well as ab-initio data. The ab-initio calculations were performed by the linear augmented plane wave (LAPW) method within density functional theory to obtain the equations of state for a number of crystal structures of the Ti-Al system. Some of the calculated LAPW energies were used for fitting the potentials, while others were used to examine their quality. The potentials correctly predict the equilibrium crystal structures of the phases and accurately reproduce their basic lattice properties. The potentials are applied to calculate the energies of point defects, surfaces, and planar faults in the equilibrium structures. Unlike earlier EAM potentials for the Ti-Al system, the proposed potentials provide a reasonable description of the lattice thermal expansion, demonstrating their usefulness for molecular dynamics or Monte Carlo studies at high temperatures. The energy along the tetragonal deformation path (Bain transformation) in gamma-TiAl calculated with the EAM potential is in fairly good agreement with LAPW calculations. Equilibrium point defect concentrations in gamma-TiAl are studied using the EAM potential. It is found that antisite defects strongly dominate over vacancies at all compositions around stoichiometry, indicating that gamma-TiAl is an antisite-disorder compound, in agreement with experimental data. Comment: 46 pages, 6 figures (Physical Review B, in press).
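    The EAM total energy has a simple functional form: an embedding energy evaluated at the host electron density at each atom, plus a pairwise repulsion. The sketch below evaluates it for a toy configuration; the embedding, density, and pair functions are hypothetical stand-ins, not the fitted Ti-Al parameterisation developed in the paper.

```python
# Minimal sketch of an EAM total-energy evaluation (illustrative only).
# F, rho, and phi below are hypothetical stand-ins, not the fitted Ti-Al forms.
import numpy as np

def rho(r):                      # electron density contributed by a neighbour
    return np.exp(-2.0 * r)

def embed(host_density):         # embedding energy F(rho)
    return -np.sqrt(host_density)

def pair(r):                     # pairwise repulsion phi(r)
    return np.exp(-3.0 * r) / r

def eam_energy(positions):
    """E = sum_i F(sum_{j!=i} rho(r_ij)) + 1/2 sum_{i!=j} phi(r_ij)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        host = 0.0
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            host += rho(r)
            energy += 0.5 * pair(r)
        energy += embed(host)
    return energy

# Toy configuration: four atoms of an fcc-like cell (arbitrary units).
atoms = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]], float)
print(eam_energy(atoms))
```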

    A Bayesian active learning strategy for sequential experimental design in systems biology

    Borrowing ideas from Bayesian experimental design and active learning, we propose a new strategy for optimal experimental design in the context of kinetic parameter estimation in systems biology. We describe algorithmic choices that make it possible to implement this method in a computationally tractable way and to make it fully automatic. Based on simulations, we show that it outperforms alternative baseline strategies and demonstrate the benefit of considering multiple posterior modes of the likelihood landscape, as opposed to traditional schemes based on local and Gaussian approximations. An R package is provided to reproduce all experimental simulations.
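    To make the general idea concrete, the sketch below runs a sequential Bayesian active-learning loop on a toy kinetic model, at each step choosing the measurement time expected to reduce posterior entropy the most. The exponential-decay model, the grid posterior, and the candidate time points are illustrative assumptions, not the algorithm of the paper or its R package.

```python
# Sketch of Bayesian active learning for sequential experimental design.
# Toy kinetic model x(t) = exp(-k * t) with Gaussian measurement noise.
import numpy as np

rng = np.random.default_rng(0)
NOISE = 0.05
k_grid = np.linspace(0.1, 3.0, 300)        # candidate parameter values
log_post = np.zeros_like(k_grid)           # flat prior (log-weights)

def model(k, t):
    return np.exp(-k * t)

def posterior_entropy(log_w):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return -np.sum(w * np.log(w + 1e-300))

def expected_entropy_after(t, log_w, n_sim=200):
    """Average posterior entropy after a hypothetical measurement at time t,
    with the outcome simulated from the current posterior predictive."""
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    total = 0.0
    for _ in range(n_sim):
        k_sim = rng.choice(k_grid, p=w)
        y = model(k_sim, t) + rng.normal(0.0, NOISE)
        log_w_new = log_w - 0.5 * ((y - model(k_grid, t)) / NOISE) ** 2
        total += posterior_entropy(log_w_new)
    return total / n_sim

true_k = 1.3
times = np.linspace(0.1, 5.0, 25)          # candidate experiments
for step in range(5):
    gains = [posterior_entropy(log_post) - expected_entropy_after(t, log_post)
             for t in times]
    t_best = times[int(np.argmax(gains))]   # most informative experiment
    y_obs = model(true_k, t_best) + rng.normal(0.0, NOISE)
    log_post -= 0.5 * ((y_obs - model(k_grid, t_best)) / NOISE) ** 2
    w = np.exp(log_post - log_post.max()); w /= w.sum()
    print(f"step {step}: measure at t={t_best:.2f}, "
          f"posterior mean k={np.sum(w * k_grid):.3f}")
```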

    Spectroscopic Needs for Imaging Dark Energy Experiments

    White paper for the "Dark Energy and CMB" working group for the American Physical Society's Division of Particles and Fields long-term planning exercise ("Snowmass").

    Ongoing and near-future imaging-based dark energy experiments are critically dependent upon photometric redshifts (a.k.a. photo-z's): i.e., estimates of the redshifts of objects based only on flux information obtained through broad filters. Higher-quality, lower-scatter photo-z's will result in smaller random errors on cosmological parameters; while systematic errors in photometric redshift estimates, if not constrained, may dominate all other uncertainties from these experiments. The desired optimization and calibration is dependent upon spectroscopic measurements for secure redshift information; this is the key application of galaxy spectroscopy for imaging-based dark energy experiments. Hence, to achieve their full potential, imaging-based experiments will require large sets of objects with spectroscopically determined redshifts, for two purposes:

    * Training: Objects with known redshift are needed to map out the relationship between object color and z (or, equivalently, to determine empirically calibrated templates describing the rest-frame spectra of the full range of galaxies, which may be used to predict the color-z relation). The ultimate goal of training is to minimize each moment of the distribution of differences between photometric redshift estimates and the true redshifts of objects, making the relationship between them as tight as possible. The larger and more complete our "training set" of spectroscopic redshifts is, the smaller the RMS photo-z errors should be, increasing the constraining power of imaging experiments.

    Requirements: Spectroscopic redshift measurements for ∼30,000 objects over >∼15 widely separated regions, each at least ∼20 arcmin in diameter, and reaching the faintest objects used in a given experiment, will likely be necessary if photometric redshifts are to be trained and calibrated with conventional techniques. Larger, more complete samples (i.e., with longer exposure times) can improve photo-z algorithms and reduce scatter further, enhancing the science return from planned experiments greatly (increasing the Dark Energy Task Force figure of merit by up to ∼50%).

    Options: This spectroscopy will most efficiently be done by covering as much of the optical and near-infrared spectrum as possible at modestly high spectral resolution (λ/Δλ >∼3000), while maximizing the telescope collecting area, field of view on the sky, and multiplexing of simultaneous spectra. The most efficient instrument for this would likely be either the proposed GMACS/MANIFEST spectrograph for the Giant Magellan Telescope or the OPTIMOS spectrograph for the European Extremely Large Telescope, depending on actual properties when built. The PFS spectrograph at Subaru would be next best and available considerably earlier, c. 2018; the proposed ngCFHT and SSST telescopes would have similar capabilities but start later. Other key options, in order of increasing total time required, are the WFOS spectrograph at TMT, MOONS at the VLT, and DESI at the Mayall 4 m telescope (or the similar 4MOST and WEAVE projects); of these, only DESI, MOONS, and PFS are expected to be available before 2020. Table 3 of this white paper summarizes the observation time required at each facility for strawman training samples.
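    As a concrete illustration of the training step described above, the sketch below fits an empirical color-to-redshift relation on a synthetic spectroscopic training set and measures the resulting photo-z scatter. The nearest-neighbour regressor and the toy data are assumptions chosen for brevity, not the photo-z methods evaluated in the white paper.

```python
# Toy illustration of photo-z "training": learn an empirical color -> z
# mapping from a spectroscopic training set and measure the scatter of
# (z_phot - z_spec) / (1 + z_spec).  Synthetic data and the k-NN regressor
# are stand-ins for the real training algorithms.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

def synthetic_galaxies(n):
    z = rng.uniform(0.0, 2.0, n)                     # true (spectroscopic) redshifts
    colors = np.column_stack([                        # toy colors that depend on z
        0.8 * z + rng.normal(0, 0.05, n),             # e.g. g - r
        0.4 * z**2 + rng.normal(0, 0.05, n),          # e.g. r - i
    ])
    return colors, z

colors_train, z_train = synthetic_galaxies(30_000)    # "training set" with spec-z's
colors_test, z_test = synthetic_galaxies(10_000)      # photometric sample to estimate

photoz = KNeighborsRegressor(n_neighbors=20).fit(colors_train, z_train)
z_phot = photoz.predict(colors_test)

dz = (z_phot - z_test) / (1 + z_test)
print(f"bias  = {dz.mean():+.4f}")                    # first moment
print(f"sigma = {dz.std():.4f}")                      # RMS photo-z scatter
```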
    To attain secure redshift measurements for a high fraction of targeted objects and cover the full redshift span of future experiments, additional near-infrared spectroscopy will also be required; this is best done from space, particularly with WFIRST-2.4 and JWST.

    * Calibration: The first several moments of redshift distributions (the mean, RMS redshift dispersion, etc.) must be known to high accuracy for cosmological constraints not to be systematics-dominated (equivalently, the moments of the distribution of differences between photometric and true redshifts could be determined instead). The ultimate goal of calibration is to characterize these moments for every subsample used in analyses - i.e., to minimize the uncertainty in their mean redshift, RMS dispersion, etc. - rather than to make the moments themselves small. Calibration may be done with the same spectroscopic dataset used for training if that dataset is extremely high in redshift completeness (i.e., no populations of galaxies to be used in analyses are systematically missed). Accurate photo-z calibration is necessary for all imaging experiments.

    Requirements: If extremely low levels of systematic incompleteness (<∼0.1%) are attained in training samples, the same datasets described above should be sufficient for calibration. However, existing deep spectroscopic surveys have failed to yield secure redshifts for 30-60% of targets, so that would require very large improvements over past experience. This incompleteness would be a limiting factor for training, but catastrophic for calibration. If <∼0.1% incompleteness is not attainable, the best known option for calibration of photometric redshifts is to utilize cross-correlation statistics in some form. The most direct method uses cross-correlations between the positions on the sky of bright objects of known spectroscopic redshift and the sample of objects whose redshift distribution we wish to calibrate, measured as a function of spectroscopic z. For such a calibration, redshifts of ∼100,000 objects over at least several hundred square degrees, spanning the full redshift range of the samples used for dark energy, would be necessary.

    Options: The proposed BAO experiment eBOSS would provide sufficient spectroscopy for basic calibrations, particularly for ongoing and near-future imaging experiments. The planned DESI experiment would provide excellent calibration with redundant cross-checks, but will start after the conclusion of some imaging projects. An extension of DESI to the Southern hemisphere would provide the best possible calibration from cross-correlation methods for DES and LSST.

    We thus anticipate that our two primary needs for spectroscopy - training and calibration of photometric redshifts - will require two separate solutions. For ongoing and future projects to reach their full potential, new spectroscopic samples of faint objects will be needed for training; those new samples may be suitable for calibration, but the latter possibility is uncertain. In contrast, wide-area samples of bright objects are poorly suited for training, but can provide high-precision calibrations via cross-correlation techniques. Additional training/calibration redshifts and/or host galaxy spectroscopy would enhance the use of supernovae and galaxy clusters for cosmology. We also summarize additional work on photometric redshift techniques that will be needed to prepare for data from ongoing and future dark energy experiments.
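    The cross-correlation calibration idea also lends itself to a compact toy demonstration: the sketch below recovers the redshift distribution of a "photometric" sample purely from its excess pair counts with spectroscopic tracers of known redshift. The box geometry, clustering model, and estimator normalisation are simplifying assumptions made for illustration, not the estimators discussed in the white paper.

```python
# Toy "clustering-redshift" calibration: recover the redshift distribution of a
# photometric sample from its cross-correlation with spectroscopic tracers.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
BOX, R_AP = 100.0, 1.0                      # box size and cross-correlation aperture
z_slices = np.linspace(0.1, 1.9, 10)        # spectroscopic redshift slices

# Each redshift slice has its own set of "structures" (cluster centres).
centres = {z: rng.uniform(0, BOX, size=(60, 2)) for z in z_slices}

def place_near(centres_z, n, scale=0.5):
    """Scatter n galaxies around randomly chosen structures of one z slice."""
    picks = centres_z[rng.integers(0, len(centres_z), n)]
    return (picks + rng.normal(0, scale, size=(n, 2))) % BOX

# Spectroscopic tracers: equal numbers in every slice, with known redshift.
spec = {z: place_near(centres[z], 2000) for z in z_slices}

# Photometric sample: unknown dN/dz (here a Gaussian centred at z = 1.0).
true_dndz = np.exp(-0.5 * ((z_slices - 1.0) / 0.3) ** 2)
true_dndz /= true_dndz.sum()
photo = np.vstack([place_near(centres[z], int(20000 * f))
                   for z, f in zip(z_slices, true_dndz)])

# Cross-correlate: excess pair counts within R_AP relative to a random
# expectation, measured against each spectroscopic slice in turn.
tree = cKDTree(photo, boxsize=BOX)
random_rate = len(photo) * np.pi * R_AP**2 / BOX**2   # pairs/spec galaxy if unclustered
estimate = []
for z in z_slices:
    pairs = sum(len(tree.query_ball_point(p, R_AP)) for p in spec[z])
    estimate.append(pairs / len(spec[z]) / random_rate - 1.0)  # excess ~ dN/dz
estimate = np.clip(np.array(estimate), 0, None)
estimate /= estimate.sum()

for z, t, e in zip(z_slices, true_dndz, estimate):
    print(f"z={z:.2f}  true={t:.3f}  recovered={e:.3f}")
```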