
    Extinction curve template for intrinsically reddened quasars

    We analyze the near-infrared to UV data of 16 quasars with redshifts in the range 0.71 < z < 2.13 to investigate dust extinction properties. The sample presented in this work is drawn from the High A_V Quasar (HAQ) survey. The quasar candidates were selected from the Sloan Digital Sky Survey (SDSS) and the UKIRT Infrared Deep Sky Survey (UKIDSS), and follow-up spectroscopy was carried out at the Nordic Optical Telescope (NOT) and the New Technology Telescope (NTT). To study dust extinction curves intrinsic to the quasars, we selected from the HAQ survey 16 cases where the Small Magellanic Cloud (SMC) law could not provide a good solution to the spectral energy distributions (SEDs). We derived the extinction curves using the Fitzpatrick & Massa 1986 (FM) law by comparing the observed SEDs to the combined quasar template of Vanden Berk et al. 2001 and Glikman et al. 2006. The derived extinction, A_V, ranges from 0.2 to 1.0 mag. All the individual extinction curves of our quasars are steeper (R_V = 2.2-2.7) than that of the SMC, with a weighted mean value of R_V = 2.4. We derive an "average quasar extinction curve" for our sample by fitting the SEDs simultaneously, using the weighted mean values of the FM law parameters and a varying R_V. The entire sample is well fit with a single best-fit value of R_V = 2.2 ± 0.2. The average quasar extinction curve deviates from the steepest Milky Way and SMC extinction curves at a confidence level ≳ 95%. Such steep extinction curves suggest a significant population of small silicate grains; alternatively, larger dust grains may have been destroyed by the activity of the nearby active galactic nucleus (AGN), resulting in steep extinction curves. Comment: 8 pages, 4 figures, 1 table
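
    The abstract describes fitting SEDs with the Fitzpatrick & Massa parametrization and a free R_V. The sketch below is a minimal Python illustration of an FM90-style UV extinction curve and of reddening a rest-frame template for a given A_V and R_V. The function names, the default bump parameters (x0, gamma), and the sample values are illustrative assumptions, not the paper's fitting code; the relation A_lambda = A_V (1 + k(x)/R_V) is the standard FM convention.

```python
import numpy as np

def fm_curve(x, c1, c2, c3, c4, x0=4.596, gamma=0.99):
    """FM90-style UV curve k(x) = E(lambda-V)/E(B-V) at x = 1/lambda [1/micron].
    Default bump parameters are illustrative; the paper fits them per quasar."""
    drude = x**2 / ((x**2 - x0**2)**2 + (x * gamma)**2)       # 2175 A bump term (weak/absent in SMC-like curves)
    fuv = np.where(x > 5.9, 0.5392*(x - 5.9)**2 + 0.05644*(x - 5.9)**3, 0.0)  # far-UV curvature
    return c1 + c2*x + c3*drude + c4*fuv

def redden_template(flux, wave_micron, A_V, R_V, fm_params):
    """Apply A_lambda = A_V * (1 + k(x)/R_V) to a rest-frame template spectrum."""
    x = 1.0 / np.asarray(wave_micron)
    k = fm_curve(x, *fm_params)
    A_lambda = A_V * (1.0 + k / R_V)
    return flux * 10**(-0.4 * A_lambda)

# Toy usage: redden a flat template with hypothetical FM parameters, A_V = 0.5, R_V = 2.2
wave = np.linspace(0.1, 1.0, 50)                # microns
reddened = redden_template(np.ones_like(wave), wave, 0.5, 2.2, (-4.0, 2.2, 0.0, 0.4))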

    A questionnaire to identify patellofemoral pain in the community: an exploration of measurement properties

    Background: Community-based studies of patellofemoral pain (PFP) need a questionnaire tool that discriminates between those with and those without the condition. To address this need, we have designed a self-report questionnaire which aims to identify people with PFP in the community. Methods: Study designs: a comparative study and a cross-sectional study. Study population: comparative study: PFP patients, soft-tissue injury patients and adults without knee problems; cross-sectional study: adults attending a science festival. Intervention: comparative study participants completed the questionnaire at baseline and two weeks later; cross-sectional study participants completed the questionnaire once. The optimal scoring system and threshold were explored using receiver operating characteristic (ROC) curves, test-retest reliability using Cohen's kappa, and measurement error using Bland-Altman plots and the standard error of measurement. Known-group validity was explored by comparing PFP prevalence between genders and age groups. Results: Eighty-four participants were recruited to the comparative study. The ROC curves suggested limiting the questionnaire to the clinical features and knee pain map sections (AUC 0.97, 95% CI 0.94 to 1.00). This combination had high sensitivity and specificity (both over 90%). Measurement error was less than the mean difference between the groups. Test-retest reliability estimates suggest good agreement (N = 51, k = 0.74, 95% CI 0.52 to 0.91). The cross-sectional study (N = 110) showed the expected differences between genders and age groups, but these were not statistically significant. Conclusion: A shortened version of the questionnaire, based on clinical features and a knee pain map, has good measurement properties. Further work is needed to validate the questionnaire in community samples.
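
    The measurement properties reported here (AUC with an optimal threshold, and Cohen's kappa for test-retest agreement) can be computed with standard tools. The sketch below is a minimal Python illustration with invented data; the scores, group labels, and the Youden's J threshold rule are assumptions for illustration only, not the study's analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, cohen_kappa_score

# Hypothetical data: 1 = clinician-confirmed PFP, 0 = no PFP; scores = questionnaire totals
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
scores = np.array([9, 7, 8, 2, 4, 1, 6, 3, 8, 2])

print("AUC:", roc_auc_score(y_true, scores))

# Pick the threshold maximising Youden's J = sensitivity + specificity - 1
fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)
print("threshold:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])

# Test-retest agreement of the dichotomised classification (baseline vs two weeks later)
baseline = np.array([1, 1, 0, 0, 1, 0, 1, 0])
retest   = np.array([1, 1, 0, 1, 1, 0, 1, 0])
print("Cohen's kappa:", cohen_kappa_score(baseline, retest))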

    Implications of Malthus-Boserup Ratcheting for Interpreting the Archaeological Record

    Prehistoric populations across North America appear to have grown exponentially, with some variation between regions. Archaeologists have explored these differences, but have not explained them, or the sustained growth itself, in terms of an underlying process relevant to all regions. I propose that environmental limits on population are shaped by what populations eat and how they acquire food, and that when populations become large enough to feel scarcity in their environment, they change their way of life in a way that raises those limits. The model I propose is well established and is called the Malthus-Boserup ratcheting model. I describe it mathematically using a fixed population growth rate and an array of changes in the environmental limit, varying both the amount of change and the rate of change (how quickly the modeled populations change their way of life). I then simplify my descriptions of these curves using exponential curves, much as archaeologists simplify real population growth curves. I compare the models with their exponential descriptions to form expectations for what the values in exponential curves might mean for the archaeological record. Despite the constant population growth rate, the exponential curves grow at different rates, depending primarily on the amount of change in the environmental population limits. I hypothesize that a list of regions ordered from largest to smallest change in population limits should match a list of the same regions ordered by exponential growth rate. I use tools in the R statistical software to describe the population growth curves in four regions using both exponential curves and summed logistic curves, and then arrange the regions according to both the growth rate of the exponential curves and the change in the limits found in the logistic curves. The lists do not match, which suggests that, for a variety of reasons, the exponential curves do not adequately describe the underlying Malthus-Boserup ratcheting process. I then compare the models using the Bayesian Information Criterion (BIC), an indicator that increases with both the unexplained variation in the dependent variable and the number of explanatory variables used. When comparing models of the same data, lower values therefore suggest either that the information retained outweighs a model's complexity, or that a model's simplicity outweighs the information lost. In all four cases, the information retained by the Malthus-Boserup ratcheting model outweighs the model's complexity. Furthermore, the summed logistic models have parameters that researchers can interpret beyond simple rates of change.
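
    The comparison described here (exponential vs. summed logistic growth, ranked by BIC) can be sketched as follows. The thesis uses R; this is a Python illustration with simulated data, and the simulated series, starting values, and the least-squares BIC formula (n ln(RSS/n) + k ln n, up to constants that cancel in the comparison) are assumptions for illustration, not the author's code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, n0, r):
    return n0 * np.exp(r * t)

def logistic(t, K, r, t_mid):
    return K / (1 + np.exp(-r * (t - t_mid)))

def two_logistic(t, K1, r1, m1, K2, r2, m2):
    # Summed logistic: a ratchet that raises the population limit partway through
    return logistic(t, K1, r1, m1) + logistic(t, K2, r2, m2)

def bic(y, y_hat, k_params):
    # Gaussian-error least-squares BIC: n*ln(RSS/n) + k*ln(n)
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k_params * np.log(n)

# Hypothetical population proxy (e.g. summed date density per time step)
np.random.seed(0)
t = np.arange(0, 50)
y = logistic(t, 100, 0.3, 20) + logistic(t, 250, 0.25, 38) + np.random.normal(0, 5, t.size)

p_exp, _ = curve_fit(exponential, t, y, p0=[1, 0.1], maxfev=10000)
p_log, _ = curve_fit(two_logistic, t, y, p0=[100, 0.3, 15, 200, 0.3, 35], maxfev=10000)

print("BIC exponential:    ", bic(y, exponential(t, *p_exp), 2))
print("BIC summed logistic:", bic(y, two_logistic(t, *p_log), 6))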

    Microlensing of Lensed Supernovae

    Given the number of recently discovered galaxy-galaxy lens systems, we anticipate that a gravitationally lensed supernova will be observed within the next few years. We explore the possibility that stars in the lens galaxy will produce observable microlensing fluctuations in lensed supernova light curves. For typical parameters, we predict that ~70% of lensed SNe will show microlensing fluctuations > 0.5 mag, while ~25% will have fluctuations > 1 mag. Thus microlensing of lensed supernovae will be both ubiquitous and observable. Additionally, we show that microlensing fluctuations will complicate measurements of time delays from multiply imaged supernovae: time delays accurate to better than a few days will be difficult to obtain. We also consider prospects for extracting the lens galaxy's stellar mass fraction and mass function from microlensing fluctuations via a new statistical measure, the time-weighted light curve derivative. Comment: 13 pages, emulateapj format; accepted in ApJ; expanded discussion of time delay uncertainties
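
    The paper's calculation involves stellar magnification maps and a finite, expanding source; the sketch below is only a toy: a single point-lens (Paczynski) magnification curve in Python, showing how a modest impact parameter already produces fluctuations of order a magnitude. The trajectory, impact parameter, and time units are illustrative assumptions, not the paper's method or its statistical measure.

```python
import numpy as np

def point_lens_magnification(u):
    """Paczynski magnification of a point source by a point lens; u in Einstein radii."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Toy source trajectory past a single star, impact parameter u0,
# time in units of the Einstein-radius crossing time
t = np.linspace(-2, 2, 401)
u0 = 0.3
u = np.sqrt(u0**2 + t**2)
delta_mag = -2.5 * np.log10(point_lens_magnification(u))   # magnitude change (negative = brighter)

print("peak extra magnification: %.2f mag" % -delta_mag.min())   # ~1.3 mag for u0 = 0.3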

    Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36

    We describe and compare four different methods for estimating sample size and power when the primary outcome of the study is a Health Related Quality of Life (HRQoL) measure. These methods are: 1. assuming a Normal distribution and comparing two means; 2. using a non-parametric method; 3. Whitehead's method based on the proportional odds model; 4. the bootstrap. We illustrate the various methods using data from the SF-36. For simplicity this paper deals with studies designed to compare the effectiveness (or superiority) of a new treatment with a standard treatment at a single point in time. The results show that if the HRQoL outcome has a limited number of discrete values (< 7) and/or the expected proportion of cases at the boundaries (scoring 0 or 100) is high, then we would recommend using Whitehead's method (Method 3). Alternatively, if the HRQoL outcome has a large number of distinct values and the proportion at the boundaries is low, then we would recommend using Method 1. If a pilot or historical dataset is readily available (to estimate the shape of the distribution), then bootstrap simulation (Method 4) based on these data will provide a more accurate and reliable sample size estimate than the conventional methods (Methods 1, 2, or 3). In the absence of a reliable pilot dataset, bootstrapping is not appropriate and conventional methods of sample size estimation or simulation will need to be used. Fortunately, with the increasing use of HRQoL outcomes in research, historical datasets are becoming more readily available. Strictly speaking, our results and conclusions only apply to the SF-36 outcome measure, and further empirical work is required to see whether they hold for other HRQoL outcomes. However, the SF-36 has many features in common with other HRQoL outcomes: multi-dimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions. We therefore believe these results and conclusions based on the SF-36 will be appropriate for other HRQoL measures.
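
    Methods 1 and 4 are straightforward to sketch: the standard two-means formula n = 2(z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2 per group, and a bootstrap power simulation that resamples a pilot dataset. The Python sketch below is illustrative only; the pilot scores, effect size, the t-test used as the analysis model, and the clipping of shifted scores to the 0-100 SF-36 range are assumptions, not the paper's implementation of Methods 1-4.

```python
import numpy as np
from scipy import stats

def n_per_group_normal(delta, sd, alpha=0.05, power=0.80):
    """Method 1: sample size per group for comparing two means under Normality."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * (z_a + z_b)**2 * sd**2 / delta**2))

print(n_per_group_normal(delta=5, sd=20))   # e.g. 5-point SF-36 difference, SD 20 -> ~252 per group

def bootstrap_power(pilot_scores, delta, n_per_group, n_sim=2000, alpha=0.05, seed=1):
    """Method 4 (sketch): resample a pilot dataset, shift one arm by `delta`,
    and count how often a two-sample t-test detects the difference."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.choice(pilot_scores, n_per_group, replace=True)
        b = rng.choice(pilot_scores, n_per_group, replace=True) + delta
        b = np.clip(b, 0, 100)                      # SF-36 scores are bounded at 0 and 100
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sim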

    Diversity of Decline-Rate-Corrected Type Ia Supernova Rise Times: One Mode or Two?

    B-band light-curve rise times for eight unusually well-observed nearby Type Ia supernovae (SNe) are fitted by a newly developed template-building algorithm, using light-curve functions that are smooth, flexible, and free of potential bias from externally derived templates and other prior assumptions. From the available literature, photometric BVRI data collected over many months, including the earliest points, are reconciled, combined, and fitted to a unique time of explosion for each SN. On average, after they are corrected for light-curve decline rate, three SNe rise in 18.81 ± 0.36 days, while five SNe rise in 16.64 ± 0.21 days. If all eight SNe are sampled from a single parent population (a hypothesis not favored by statistical tests), the rms intrinsic scatter of the decline-rate-corrected SN rise time is 0.96 +0.52 -0.25 days -- a first measurement of this dispersion. The corresponding global mean rise time is 17.44 ± 0.39 days, where the uncertainty is dominated by intrinsic variance. This value is ~2 days shorter than two published averages that nominally are twice as precise, though also based on small samples. When comparing high-z to low-z SN luminosities for determining cosmological parameters, bias can be introduced by use of a light-curve template with an unrealistic rise time. If the period over which light curves are sampled depends on z in a manner typical of current search and measurement strategies, a two-day discrepancy in template rise time can bias the luminosity comparison by ~0.03 magnitudes. Comment: As accepted by The Astrophysical Journal; 15 pages, 6 figures, 2 tables. Explanatory material rearranged and enhanced; Fig. 4 reformatted
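
    The paper's template-building algorithm is more elaborate, but rise-time estimates of this kind are commonly anchored by the early-time "expanding fireball" approximation, in which flux grows as (t - t_exp)^2 shortly after explosion. The Python sketch below fits that simple law to invented early-time photometry to recover an explosion epoch and hence a rise time; the data points, starting values, and function names are illustrative assumptions, not the paper's fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def early_rise(t, t_exp, a):
    """Expanding-fireball approximation: flux ~ a * (t - t_exp)^2 after explosion, 0 before."""
    dt = t - t_exp
    return a * np.where(dt > 0, dt**2, 0.0)

# Hypothetical early-time B-band fluxes (arbitrary units), days relative to B maximum
t_obs = np.array([-16.0, -14.5, -13.0, -11.5, -10.0, -8.5])
f_obs = np.array([0.02, 0.08, 0.17, 0.30, 0.46, 0.66])
f_err = np.full_like(f_obs, 0.02)

popt, _ = curve_fit(early_rise, t_obs, f_obs, p0=[-18.0, 0.01], sigma=f_err)
t_exp_fit = popt[0]
print("fitted explosion epoch: %.1f d before B maximum -> rise time ~ %.1f d" % (t_exp_fit, -t_exp_fit))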