
    Distances from Surface Brightness Fluctuations

    The practice of measuring galaxy distances from their spatial fluctuations in surface brightness is now a decade old. While several past articles have included some review material, this is the first intended as a comprehensive review of the surface brightness fluctuation (SBF) method. The method is conceptually quite simple, the basic idea being that nearby (but unresolved) star clusters and galaxies appear "bumpy", while more distant ones appear smooth. This is quantified via a measurement of the amplitude of the Poisson fluctuations in the number of unresolved stars encompassed by a CCD pixel (usually in an image of an elliptical galaxy). Here, we describe the technical details and difficulties involved in making SBF measurements, discuss theoretical and empirical calibrations of the method, and review the numerous applications of the method from the ground and space, in the optical and near-infrared. We include discussions of stellar population effects and the "universality" of the SBF standard candle. A final section considers the future of the method. Comment: Invited review article to appear in: 'Post-Hipparcos Cosmic Candles', A. Heck & F. Caputo (Eds), Kluwer Academic Publ., Dordrecht, in press. 22 pages, including 3 postscript figures; uses Kluwer's crckapb.sty LaTeX macro file, enclosed.
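    A minimal numerical sketch (ours, not from the review) of the core SBF idea: if each pixel samples a Poisson number of unresolved stars, and that number grows as the square of the distance for a fixed angular pixel size, then the fractional pixel-to-pixel fluctuation falls as 1/d. The stellar density and pixel scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbf_fluctuation(distance_mpc, stars_per_pixel_at_10mpc=50.0, n_pixels=100_000):
    """Fractional surface-brightness fluctuation for a toy galaxy image.

    The expected number of unresolved stars per pixel scales as d^2 for a
    fixed angular pixel size, so the Poisson fractional scatter scales as 1/d.
    """
    mean_stars = stars_per_pixel_at_10mpc * (distance_mpc / 10.0) ** 2
    counts = rng.poisson(mean_stars, size=n_pixels)
    return counts.std() / counts.mean()

for d in (10, 20, 40, 80):
    print(f"d = {d:3d} Mpc  fractional fluctuation ~ {sbf_fluctuation(d):.4f}")
# The printed fluctuation roughly halves each time the distance doubles.
```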

    Risk prediction models with incomplete data with application to prediction of estrogen receptor-positive breast cancer: prospective data from the Nurses' Health Study

    Introduction: A number of breast cancer risk prediction models have been developed to provide insight into a woman's individual breast cancer risk. Although circulating levels of estradiol in postmenopausal women predict subsequent breast cancer risk, whether the addition of estradiol levels adds significantly to a model's predictive power has not previously been evaluated. Methods: Using linear regression, the authors developed an imputed estradiol score using measured estradiol levels (the outcome) and both case status and risk factor data (for example, body mass index) from a nested case-control study conducted within a large prospective cohort study, and used multiple imputation methods to develop an overall risk model including both risk factor data from the main cohort and estradiol levels from the nested case-control study. Results: The authors evaluated the addition of imputed estradiol level to the previously published Rosner and Colditz log-incidence model for breast cancer risk prediction within the larger Nurses' Health Study cohort. The follow-up was from 1980 to 2000; during this time, 1,559 invasive estrogen receptor-positive breast cancer cases were confirmed. The addition of imputed estradiol levels significantly improved risk prediction; the age-specific concordance statistic increased from 0.635 ± 0.007 to 0.645 ± 0.007 (P < 0.001) after the addition of imputed estradiol. Conclusion: Circulating estradiol levels in postmenopausal women appear to add to other lifestyle factors in predicting a woman's individual risk of breast cancer.
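    A hedged sketch (not the authors' code) of the general approach described above: fit a regression of the biomarker on risk factors in the subsample where it was measured, impute it for everyone else, and compare discrimination of risk models with and without the imputed score. The simulated variables and the logistic stand-in for the log-incidence model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# Simulated cohort: two risk factors, a biomarker measured only in a subsample,
# and a binary outcome influenced by all three.
bmi = rng.normal(27, 4, n)
age = rng.normal(60, 6, n)
estradiol = 0.08 * bmi + 0.02 * age + rng.normal(0, 1, n)
logit = -5 + 0.03 * bmi + 0.04 * age + 0.5 * estradiol
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

measured = rng.random(n) < 0.2          # biomarker available in ~20% (nested study)
X_risk = np.column_stack([bmi, age])

# Step 1: regression-based imputation of the biomarker from risk factors.
imputer = LinearRegression().fit(X_risk[measured], estradiol[measured])
estradiol_hat = np.where(measured, estradiol, imputer.predict(X_risk))

# Step 2: compare discrimination with and without the imputed biomarker.
X_full = np.column_stack([X_risk, estradiol_hat])
base = LogisticRegression().fit(X_risk, case)
full = LogisticRegression().fit(X_full, case)
print("C-statistic, risk factors only:",
      round(roc_auc_score(case, base.predict_proba(X_risk)[:, 1]), 3))
print("C-statistic, plus imputed biomarker:",
      round(roc_auc_score(case, full.predict_proba(X_full)[:, 1]), 3))
```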

    Holographic Wilsonian flows and emergent fermions in extremal charged black holes

    We study holographic Wilsonian RG in a general class of asymptotically AdS backgrounds with a U(1) gauge field. We consider free charged Dirac fermions in such a background, and integrate them up to an intermediate radial distance, yielding an equivalent low energy dual field theory. The new ingredient, compared to scalars, involves a 'generalized' basis of coherent states which labels a particular half of the fermion components as coordinates or momenta, depending on the choice of quantization (standard or alternative). We apply this technology to explicitly compute RG flows of charged fermionic operators and their composites (double trace operators) in field theories dual to (a) pure AdS and (b) extremal charged black hole geometries. The flow diagrams and fixed points are determined explicitly. In the case of the extremal black hole, the RG flows connect two fixed points at the UV AdS boundary to two fixed points in the IR AdS_2 region. The double trace flow is shown, both numerically and analytically, to develop a pole singularity in the AdS_2 region at low frequency and near the Fermi momentum, which can be traced to the appearance of massless fermion modes on the low energy cut-off surface. The low energy field theory action we derive exactly agrees with the semi-holographic action proposed by Faulkner and Polchinski in arXiv:1001.5049 [hep-th]. In terms of field theory, the holographic version of Wilsonian RG leads to a quantum theory with random sources. In the extremal black hole background the random sources become 'light' in the AdS_2 region near the Fermi surface and emerge as new dynamical degrees of freedom. Comment: 37 pages (including 8 pages of appendix), 10 figures and 2 tables.
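    For readers unfamiliar with holographic Wilsonian RG, the following schematic equations (our illustration, not equations quoted from the paper) show why a double-trace coupling can develop a pole: integrating the bulk down to a cutoff surface at radius r generically yields a first-order Riccati-type flow for the double-trace coupling f(r), and solutions of Riccati equations can blow up at finite r, which is the kind of pole singularity described above. The coefficient functions alpha, beta, gamma depend on the background and are left unspecified.

```latex
% Schematic Riccati-type flow for a double-trace coupling f(r) on the cutoff surface.
\begin{equation}
  \partial_r f(r) \;=\; \alpha(r) \;+\; \beta(r)\, f(r) \;+\; \gamma(r)\, f(r)^2 .
\end{equation}
% The standard substitution below linearizes the flow; zeros of u(r) then appear
% as poles of f(r), i.e. the pole singularity of the double-trace coupling.
\begin{equation}
  f(r) \;=\; -\,\frac{1}{\gamma(r)}\,\frac{\partial_r u(r)}{u(r)} .
\end{equation}
```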

    SN 2005hj: Evidence for Two Classes of Normal-Bright SNe Ia and Implications for Cosmology

    HET optical spectra covering the evolution from about 6 days before to about 5 weeks after maximum light, and the ROTSE-IIIb unfiltered light curve, of the "Branch-normal" Type Ia Supernova SN 2005hj are presented. The host galaxy shows HII region lines at a redshift of z=0.0574, which puts the peak unfiltered absolute magnitude at a somewhat over-luminous -19.6. The spectra show weak and narrow SiII lines, and for a period of at least 10 days beginning around maximum light these profiles do not change in width or depth and they indicate a constant expansion velocity of ~10,600 km/s. We analyzed the observations based on detailed radiation dynamical models in the literature. Whereas delayed detonation and deflagration models have been used to explain the majority of SNe Ia, they do not predict a long velocity plateau in the SiII minimum with an unvarying line profile. Pulsating delayed detonations and merger scenarios form shell-like density structures with properties mostly related to the mass of the shell, M_shell, and we discuss how these models may explain the observed SiII line evolution; however, these models are based on spherical calculations and other possibilities may exist. SN 2005hj is consistent with respect to the onset, duration, and velocity of the plateau, the peak luminosity and, within the uncertainties, with the intrinsic colors for models with M_shell=0.2 M_sun. Our analysis suggests a distinct class of events hidden within the Branch-normal SNe Ia. If the predicted relations between observables are confirmed, they may provide a way to separate these two groups. We discuss the implications of two distinct progenitor classes on cosmological studies employing SNe Ia, including possible differences in the peak luminosity to light curve width relation. Comment: ApJ accepted, 31 pages.
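    As a hedged illustration of the velocity measurement mentioned above (not the authors' pipeline), an expansion velocity is typically estimated from the blueshift of the Si II 6355 Å absorption minimum after correcting the spectrum to the host-galaxy rest frame; the observed wavelength below is a made-up example value.

```python
# Toy estimate of a Si II 6355 expansion velocity from an absorption-line minimum.
C_KM_S = 299_792.458          # speed of light in km/s
LAMBDA_REST = 6355.0          # Si II rest wavelength in Angstrom
Z_HOST = 0.0574               # host-galaxy redshift quoted in the abstract

lambda_observed = 6482.0      # hypothetical observed wavelength of the line minimum (Angstrom)
lambda_restframe = lambda_observed / (1.0 + Z_HOST)   # correct to the supernova rest frame

# Non-relativistic Doppler velocity; negative = blueshifted absorption (ejecta moving toward us).
velocity = C_KM_S * (lambda_restframe - LAMBDA_REST) / LAMBDA_REST
print(f"Si II 6355 velocity ~ {velocity:.0f} km/s")   # ~ -10,600 km/s for these numbers
```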

    The Hubble Constant

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc and typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods. Comment: Extensively revised and updated since the 2007 version; accepted by Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 2007.
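    As a toy illustration (ours, not from the review) of the object-based approach: for objects in the Hubble flow, recession velocity and distance are related by v = H_0 d, so H_0 can be estimated as the slope of a velocity-distance fit. The data below are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated "calibrated distance indicator" sample: distances in Mpc,
# recession velocities in km/s generated with a true H0 of 72 plus scatter.
TRUE_H0 = 72.0
distances = rng.uniform(20, 300, 40)                       # Mpc
velocities = TRUE_H0 * distances + rng.normal(0, 300, 40)  # km/s, peculiar-velocity scatter

# Least-squares slope through the origin: H0 = sum(v*d) / sum(d^2).
h0_estimate = np.sum(velocities * distances) / np.sum(distances ** 2)
print(f"Estimated H0 ~ {h0_estimate:.1f} km/s/Mpc")
```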

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity. Comment: major revision.
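    A minimal sketch of a confidence measure (our illustration, assuming the textbook normal-mean setting rather than anything specific from the paper): for a normal mean with known variance, the confidence distribution C(theta) = Phi(sqrt(n) * (theta - xbar) / sigma) assigns each one-sided hypothesis a confidence level that equals the coverage rate of the matching confidence interval.

```python
import numpy as np
from scipy.stats import norm

def confidence_measure(data, sigma):
    """Confidence distribution for a normal mean with known sigma.

    Returns C(theta) = Phi(sqrt(n) * (theta - xbar) / sigma); its quantiles
    reproduce the usual one-sided confidence interval endpoints.
    """
    xbar, n = np.mean(data), len(data)
    return lambda theta: norm.cdf(np.sqrt(n) * (theta - xbar) / sigma)

rng = np.random.default_rng(3)
sample = rng.normal(loc=1.0, scale=2.0, size=25)
C = confidence_measure(sample, sigma=2.0)

# Confidence assigned to the hypothesis theta <= 0, and a central 95% interval
# read off as the 2.5% and 97.5% quantiles of the confidence measure.
xbar, se = sample.mean(), 2.0 / np.sqrt(25)
print("confidence that theta <= 0:", round(C(0.0), 3))
print("95% interval:", (round(xbar - 1.96 * se, 3), round(xbar + 1.96 * se, 3)))
```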

    Percentile reference values for anthropometric body composition indices in European children from the IDEFICS study

    INTRODUCTION: To characterise the nutritional status in children with obesity or wasting conditions, European anthropometric reference values for body composition measures beyond the body mass index (BMI) are needed. Differentiated assessment of body composition in children has long been hampered by the lack of appropriate references. OBJECTIVES: The aim of our study is to provide percentiles for body composition indices in normal weight European children, based on the IDEFICS cohort (Identification and prevention of Dietary- and lifestyle-induced health Effects in Children and infantS). METHODS: Overall, 18 745 children aged 2.0-10.9 years from eight countries participated in the study. Children classified as overweight/obese or underweight according to IOTF (N = 5915) were excluded from the analysis. Anthropometric measurements (BMI (N = 12 830); triceps and subscapular skinfolds, fat mass and fat mass index (N = 11 845-11 901); biceps and suprailiac skinfolds, sum of skinfolds calculated from skinfold thicknesses (N = 8129-8205); neck circumference (N = 12 241); waist circumference and waist-to-height ratio (N = 12 381)) were analysed stratified by sex, and smoothed 1st, 3rd, 10th, 25th, 50th, 75th, 90th, 97th and 99th percentile curves were calculated using GAMLSS. RESULTS: Percentile values of the most important anthropometric measures related to the degree of adiposity are depicted for European girls and boys. Age- and sex-specific differences were investigated for all measures. As an example, the 50th and 99th percentile values of waist circumference ranged from 50.7-59.2 cm in girls and 51.3-58.7 cm in boys aged 4.5 to <5.0 years, to 60.6-74.5 cm in girls and 59.9-76.7 cm in boys aged 10.5-10.9 years. CONCLUSION: The presented percentile curves may aid a differentiated assessment of total and abdominal adiposity in European children.
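    The reference curves above were fitted with GAMLSS (an R framework); as a hedged, simplified stand-in in Python, age- and sex-specific percentile curves can be sketched with quantile regression on a smooth function of age. The simulated data and the cubic smoother below are illustrative assumptions, not the IDEFICS protocol.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated girls' waist circumference (cm) increasing and spreading with age.
age = rng.uniform(2.0, 10.9, 3000)
waist = 46 + 1.4 * age + rng.normal(0, 1.5 + 0.25 * age)
df = pd.DataFrame({"age": age, "waist": waist})

# Quantile regression with a cubic polynomial in age as a simple smoother,
# fitted separately for each percentile of interest.
fits = {q: smf.quantreg("waist ~ age + I(age**2) + I(age**3)", df).fit(q=q)
        for q in (0.01, 0.50, 0.99)}

grid = pd.DataFrame({"age": [4.75, 10.7]})
for q, fit in fits.items():
    print(f"P{int(q * 100):02d} at ages 4.75 and 10.7:",
          np.round(fit.predict(grid).values, 1))
```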

    Activation of superior colliculi in humans during visual exploration

    Background: Visual, oculomotor, and – recently – cognitive functions of the superior colliculi (SC) have been documented in detail in non-human primates in the past. Evidence for corresponding functions of the SC in humans is still rare. We examined activity changes in the human tectum and the lateral geniculate nuclei (LGN) in a visual search task using functional magnetic resonance imaging (fMRI) and anatomically defined regions of interest (ROI). Healthy subjects conducted a free visual search task and two voluntary eye movement tasks with and without irrelevant visual distracters. Blood oxygen level dependent (BOLD) signals in the SC were compared to activity in the inferior colliculi (IC) and LGN. Results: Neural activity increased during free exploration only in the SC in comparison to both control tasks. Saccade frequency did not exert a significant effect on BOLD signal changes. No corresponding differences between experimental tasks were found in the IC or the LGN. However, while the IC revealed no signal increase from baseline, BOLD signal changes in the LGN were consistently positive in all experimental conditions. Conclusion: Our data demonstrate the involvement of the SC in a visual search task. In contrast to the results of previous studies, the signal changes cannot be attributed to visual stimulation or oculomotor control alone. Further, we can exclude the influence of nearby neural structures (e.g. pulvinar, tegmentum) or of typical brainstem artefacts on the observed signal changes in the SC. Corresponding to findings in non-human primates, our data support a dependency of SC activity on functions beyond oculomotor control and visual processing.
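    As a hedged sketch (not the authors' analysis pipeline), an ROI-based comparison like the one above reduces to averaging the BOLD response within an anatomically defined mask per condition and subject, then testing the condition difference across subjects; the values below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_subjects = 16
# Simulated mean ROI percent-signal-change per subject for two conditions:
# free visual search vs. a voluntary eye-movement control task.
search = rng.normal(0.35, 0.15, n_subjects)   # hypothetical SC response during search
control = rng.normal(0.20, 0.15, n_subjects)  # hypothetical SC response during control

# Paired t-test across subjects for the condition difference within the SC ROI.
t, p = stats.ttest_rel(search, control)
print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```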

    What counts as reliable evidence for public health policy: the case of circumcision for preventing HIV infection

    Background: There is an ongoing controversy over the relative merits of randomized controlled trials (RCTs) and non-randomized observational studies in assessing efficacy and guiding policy. In this paper we examine male circumcision to prevent HIV infection as a case study that can illuminate the appropriate role of different types of evidence for public health interventions. Discussion: Based on an analysis of two Cochrane reviews, one published in 2003 before the results of three RCTs, and one in 2009, we argue that if we rely solely on evidence from RCTs and exclude evidence from well-designed non-randomized studies, we limit our ability to provide sound public health recommendations. Furthermore, the bias in favor of RCT evidence has delayed research on policy-relevant issues. Summary: This case study of circumcision and HIV prevention demonstrates that if we rely solely on evidence from RCTs and exclude evidence from well-designed non-randomized studies, we limit our ability to provide sound public health recommendations.

    Accounting for Population Stratification in Practice: A Comparison of the Main Strategies Dedicated to Genome-Wide Association Studies

    Genome-Wide Association Studies are powerful tools to detect genetic variants associated with diseases. Their results have, however, been questioned, in part because of the bias induced by population stratification. This bias is a consequence of systematic differences in allele frequencies due to differences in sample ancestry, which can lead to both false positive and false negative findings. Many strategies are available to account for stratification, but their performances differ, for instance according to the type of population structure, the disease susceptibility locus minor allele frequency, the degree of sampling imbalance, or the sample size. We focus on the type of population structure and propose a comparison of the most commonly used methods to deal with stratification, namely Genomic Control, Principal Component based methods such as those implemented in Eigenstrat, adjusted Regressions, and Meta-Analysis strategies. Our assessment of the methods is based on a large simulation study involving several scenarios corresponding to many types of population structure. We focused on both the false positive rate and the power to determine which methods perform best. Our analysis showed that if there is no population structure, none of the tests led to bias or decreased power, except for the Meta-Analyses. When the population is stratified, adjusted Logistic Regressions and Eigenstrat are the best solutions to account for stratification, even though only the Logistic Regressions are able to consistently maintain correct false positive rates. This study provides more details about these methods. Their advantages and limitations in different stratification scenarios are highlighted in order to propose practical guidelines for accounting for population stratification in Genome-Wide Association Studies.
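    A hedged sketch (ours, not the paper's simulation code) of two of the strategies compared above: genomic control rescales association test statistics by the inflation factor lambda (the median chi-square statistic over its expected median), and principal-component adjustment includes the top genotype PCs as covariates in a logistic regression. The data here are simulated with a simple two-population structure.

```python
import numpy as np
from scipy.stats import chi2
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, n_snps = 2000, 500

# Two ancestries with different baseline disease risk and allele frequencies.
pop = rng.integers(0, 2, n)
freqs = np.where(pop[:, None] == 0, 0.3, 0.3 + rng.uniform(0.0, 0.2, n_snps))
genotypes = rng.binomial(2, freqs)                    # n x n_snps genotype matrix
case = rng.binomial(1, np.where(pop == 0, 0.3, 0.5))  # outcome depends on ancestry only

# Genomic control: inflation factor lambda from the null 1-df chi-square statistics.
stats_chi2 = []
for j in range(n_snps):
    X = sm.add_constant(genotypes[:, j].astype(float))
    res = sm.Logit(case, X).fit(disp=0)
    stats_chi2.append((res.params[1] / res.bse[1]) ** 2)
lam = np.median(stats_chi2) / chi2.ppf(0.5, df=1)
print(f"genomic inflation lambda ~ {lam:.2f}  (corrected stat = chi2 / lambda)")

# PCA adjustment: include the leading genotype principal component as a covariate.
G = (genotypes - genotypes.mean(0)) / (genotypes.std(0) + 1e-9)
pc1 = np.linalg.svd(G, full_matrices=False)[0][:, 0]
X = sm.add_constant(np.column_stack([genotypes[:, 0].astype(float), pc1]))
print(sm.Logit(case, X).fit(disp=0).summary2().tables[1].round(3))
```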