    Measurement error in long-term retrospective recall surveys of earnings

    Several recent studies in labour and population economics use retrospective surveys as a substitute for costly and scarce longitudinal survey data. Although a single interview can obtain a lifetime history, inaccurate long-term recall could make such retrospective surveys a poor substitute for longitudinal surveys, especially if it induces non-classical error that makes conventional statistical corrections less effective. In this paper, we use the unique Panel Study of Income Dynamics Validation Study to assess the accuracy of long-term recall data. We find underreporting of transitory events; this recall error creates a non-classical measurement error problem. A limited cost-benefit analysis is also conducted, showing how the savings from using a cheaper retrospective recall survey might be weighed against the cost of applying the less accurate recall data to a specific policy objective, such as designing transfers to reduce chronic poverty.
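    To see why mean-reverting recall error is more troublesome than the classical kind, a minimal simulation helps. The sketch below is purely illustrative, with synthetic draws and invented parameters rather than the PSID Validation Study data:

        # Sketch: classical vs. mean-reverting (non-classical) measurement error.
        # Synthetic data for illustration only -- not the PSID Validation Study.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        true_earn = rng.normal(10.0, 1.0, n)            # latent log earnings
        y = true_earn + rng.normal(0, 0.3, n)           # outcome; true slope = 1

        classical = true_earn + rng.normal(0, 0.5, n)   # error independent of truth
        recall = 0.7 * true_earn + 3.0 + rng.normal(0, 0.5, n)  # shrinks toward the mean

        for name, x in [("classical", classical), ("recall", recall)]:
            slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
            print(f"{name}: OLS slope = {slope:.3f}")

        # Classical error attenuates the slope by exactly the reliability ratio,
        # so dividing by that ratio repairs it; mean-reverting recall error breaks
        # this relationship, and the standard correction misfires.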

    Standard survey methods for estimating colony losses and explanatory risk factors in Apis mellifera

    This chapter addresses survey methodology and questionnaire design for collecting data to estimate honey bee colony loss rates and to identify risk factors for colony loss. Sources of error in surveys are described. Advantages and disadvantages of different random and non-random sampling strategies and different modes of data collection are presented to enable the researcher to make an informed choice. We discuss survey and questionnaire methodology in some detail to raise awareness of the issues to consider at the survey design stage in order to minimise error and bias in the results. Aspects of survey design are illustrated using surveys in Scotland. As a further example, part of a standardized questionnaire developed by the COLOSS working group for Monitoring and Diagnosis is given. Approaches to data analysis are described, focussing on estimation of loss rates. Dutch monitoring data from 2012 are used as an example of a statistical analysis with the public-domain R software. We demonstrate the estimation of the overall proportion of losses and the corresponding confidence interval using a quasi-binomial model to account for extra-binomial variation, and illustrate generalized linear model fitting when incorporating a single risk factor, with derivation of the relevant confidence intervals.
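    As a concrete illustration of the quasi-binomial idea, here is a rough Python analogue (the chapter itself works in R); the beekeeper counts below are invented toy numbers, not the Dutch 2012 monitoring data:

        # Sketch: overall loss rate with an overdispersion-adjusted
        # (quasi-binomial style) confidence interval. Toy counts only.
        import numpy as np

        colonies = np.array([10, 25, 8, 40, 15, 30, 12, 20])  # wintered, per beekeeper
        losses   = np.array([ 2,  5, 4, 10,  1,  9,  3,  6])  # lost, per beekeeper

        p_hat = losses.sum() / colonies.sum()                 # overall loss proportion

        # Pearson dispersion: values > 1 signal extra-binomial variation
        # between operations, inflating the naive binomial standard error.
        expected = colonies * p_hat
        pearson_chi2 = (((losses - expected) ** 2) / (expected * (1 - p_hat))).sum()
        phi = pearson_chi2 / (len(colonies) - 1)

        se = np.sqrt(phi * p_hat * (1 - p_hat) / colonies.sum())
        print(f"loss rate {p_hat:.3f}, dispersion {phi:.2f}, "
              f"95% CI ({p_hat - 1.96 * se:.3f}, {p_hat + 1.96 * se:.3f})")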

    Error analysis in cross-correlation of sky maps: application to the ISW detection

    Constraining cosmological parameters from measurements of the Integrated Sachs-Wolfe effect requires robust and accurate methods for computing statistical errors in the cross-correlation between maps. This paper presents a detailed comparison of such error estimates applied to the cross-correlation of Cosmic Microwave Background (CMB) and large-scale structure data. We compare theoretical models for error estimation with Monte Carlo simulations in which both the galaxy and the CMB maps vary around a fiducial auto-correlation and cross-correlation model that agrees well with the current concordance LCDM cosmology. Our analysis compares estimators in both harmonic and configuration (real) space, quantifies the accuracy of the error analysis, and discusses the impact of partial sky coverage and the choice of input fiducial model on dark-energy constraints. We show that purely analytic approaches yield accurate errors even in surveys that cover only 10% of the sky, and that parameter constraints depend strongly on the fiducial model employed. Alternatively, we discuss the advantages and limitations of error estimators that can be applied directly to data. In particular, we show that errors and covariances from the jackknife method agree well with the theoretical approaches and simulations. We also introduce a novel method in real space that is computationally efficient and can be applied to real data and realistic survey geometries. Finally, we present a number of new findings and prescriptions that can be useful for the analysis of real data and forecasts, and a critical summary of the analyses done to date.
    Comment: submitted to MNRAS, 26 pages
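    The jackknife estimator the paper validates can be shown in a few lines. The sketch below uses synthetic pixel values and a simple modulo region assignment (real analyses delete contiguous sky patches), so every number is an assumption:

        # Sketch: delete-one-region jackknife error on a zero-lag
        # cross-correlation amplitude. Synthetic maps, not CMB/LSS data.
        import numpy as np

        rng = np.random.default_rng(1)
        npix, nregion = 12_000, 24
        common = rng.normal(size=npix)                  # shared ISW-like signal
        cmb = common + 2.0 * rng.normal(size=npix)      # CMB map with noise
        gal = common + 1.0 * rng.normal(size=npix)      # galaxy overdensity map
        region = np.arange(npix) % nregion              # toy region labels

        full = np.mean(cmb * gal)
        jack = np.array([np.mean((cmb * gal)[region != k]) for k in range(nregion)])

        # Jackknife variance: (N-1)/N times the summed squared deviations.
        var = (nregion - 1) / nregion * np.sum((jack - jack.mean()) ** 2)
        print(f"cross-correlation = {full:.3f} +/- {np.sqrt(var):.3f}")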

    The SDSS Coadd: A Galaxy Photometric Redshift Catalog

    We present and describe a catalog of galaxy photometric redshifts (photo-z's) for the Sloan Digital Sky Survey (SDSS) Coadd Data. We use the Artificial Neural Network (ANN) technique to calculate photo-z's and the Nearest Neighbor Error (NNE) method to estimate photo-z errors for ~13 million objects classified as galaxies in the coadd with r < 24.5. The photo-z and photo-z error estimators are trained and validated on a sample of ~83,000 galaxies that have SDSS photometry and spectroscopic redshifts measured by the SDSS Data Release 7 (DR7), the Canadian Network for Observational Cosmology Field Galaxy Survey (CNOC2), the Deep Extragalactic Evolutionary Probe 2 Data Release 3 (DEEP2 DR3), the VIsible imaging Multi-Object Spectrograph - Very Large Telescope Deep Survey (VVDS), and the WiggleZ Dark Energy Survey. For the best ANN methods we have tried, we find that 68% of the galaxies in the validation set have a photo-z error smaller than σ_68 = 0.031. After presenting our results and quality tests, we provide a short guide for users accessing the public data.
    Comment: 16 pages, 13 figures, submitted to ApJ. Analysis updated to remove proprietary BOSS data, comprising a small fraction (8%) of the original spectroscopic training set, that was erroneously included. Changes in results are small compared to the errors and the conclusions are unaffected. arXiv admin note: substantial text overlap with arXiv:0708.003
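    The σ_68 figure of merit is simple to compute: it is the 68th percentile of the absolute photo-z residuals, a scatter measure robust to catastrophic outliers. The sketch below uses synthetic redshifts; the abstract does not say whether residuals are normalised by 1+z, so the raw difference is used here:

        # Sketch: the sigma_68 photo-z quality metric on synthetic data.
        import numpy as np

        rng = np.random.default_rng(2)
        z_spec = rng.uniform(0.05, 1.0, 50_000)               # "spectroscopic" truth
        z_phot = z_spec + rng.normal(0, 0.03, z_spec.size)    # toy photo-z scatter

        # 68% of galaxies have |z_phot - z_spec| below this value.
        sigma_68 = np.percentile(np.abs(z_phot - z_spec), 68)
        print(f"sigma_68 = {sigma_68:.3f}")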

    Breaking the Degeneracy: Optimal Use of Three-point Weak Lensing Statistics

    We study the optimal use of third-order statistics in the analysis of weak lensing by large-scale structure. These higher-order statistics have long been advocated as a powerful tool for breaking degeneracies between measured cosmological parameters. Using ray-tracing simulations that incorporate important survey features such as a realistic depth-dependent redshift distribution, we find that a joint two- and three-point correlation function analysis is a much stronger probe of cosmology than the skewness statistic. We compare different observing strategies, showing that for a limited survey time there is an optimal depth for the measurement of third-order statistics, which balances statistical noise and cosmic variance against signal amplitude. We find that the chosen CFHTLS observing strategy was optimal and forecast that a joint two- and three-point analysis of the completed CFHTLS-Wide will constrain the amplitude of the matter power spectrum σ_8 to 10% and the matter density parameter Ω_m to 17%, a factor of ~2.5 improvement on the two-point analysis alone. Our error analysis includes all non-Gaussian terms, finding that the coupling between cosmic variance and shot noise is a non-negligible contribution which should be included in any future analytical error calculations.
    Comment: 27 pages, 13 figures, 3 tables
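    For reference, the skewness statistic that the joint analysis outperforms can be written as S_3 = <κ³>/<κ²>² on a smoothed convergence map. A minimal sketch, with a random non-Gaussian field standing in for a ray-traced simulation:

        # Sketch: convergence skewness S_3 = <k^3> / <k^2>^2, the simplest
        # third-order lensing statistic. Random field, not a ray-traced map.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)
        gauss = rng.normal(size=(512, 512))
        kappa = gauss + 0.2 * (gauss**2 - 1.0)        # mildly non-Gaussian field
        kappa_s = gaussian_filter(kappa, sigma=4.0)   # smoothing scale sets the probe

        s3 = np.mean(kappa_s**3) / np.mean(kappa_s**2) ** 2
        print(f"S_3 = {s3:.2f}")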

    Analysis of Compounded Pharmaceutical Products to Teach the Importance of Quality in an Applied Pharmaceutics Laboratory Course

    Objective. To assess the effectiveness of a product-analysis laboratory exercise in teaching students the importance of quality in pharmaceutical compounding. Design. Second-year pharmacy students (N=77) participated in a pharmaceutical compounding laboratory exercise and subsequently analyzed their final product using ultraviolet (UV) spectrometry. Assessment. Reflection, survey instruments, and quiz questions were used to measure how well students understood the importance of quality in their compounded products. Product analysis showed that preparations compounded by students had an error range of 0.6% to 140%, with an average error of 23.7%. Students' reflections cited common sources of error, including inaccurate weighing, contamination, and product loss during both the compounding procedure and the preparation of the sample for analysis. Ninety percent of students agreed that the exercise improved their understanding of the importance of quality in compounded pharmaceutical products. Most students (85.7%) reported that the exercise inspired them to be more diligent in preparing compounded products in their future careers. Conclusion. Integrating an analytical assessment into a pharmaceutical compounding laboratory can enhance students' understanding of the quality of compounded pharmaceutical products. It can also give students a chance to reflect on sources of error and improve their compounding technique.
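    The scoring behind the reported error range reduces to a percent-error calculation against the nominal strength; a minimal sketch with invented assay results, not the published class data:

        # Sketch: scoring compounding accuracy from assayed concentrations.
        # Hypothetical numbers, not the students' actual results.
        import numpy as np

        nominal = 5.0                                             # mg/mL target strength
        assayed = np.array([4.97, 5.40, 3.10, 5.02, 12.0, 4.55])  # UV assay results

        pct_error = np.abs(assayed - nominal) / nominal * 100
        print(f"error range: {pct_error.min():.1f}% to {pct_error.max():.1f}%")
        print(f"average error: {pct_error.mean():.1f}%")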

    Errors in survey reports of consumption expenditures

    This paper considers data quality issues in the analysis of consumption inequality, exploiting two complementary datasets from the Consumer Expenditure Survey for the United States. The Interview sample follows survey households over four calendar quarters and consists of retrospectively asked information about monthly expenditures on durable and non-durable goods. The Diary sample interviews households for two consecutive weeks and includes detailed information about frequently purchased items (food, personal care, and household supplies). Each survey has its own questionnaire and sample. Information from one sample is exploited as an instrument for the other sample to derive a correction for the measurement error affecting observed measures of consumption inequality. The implications of our findings are used as a test of the permanent income hypothesis.
    Keywords: Consumption Inequality; Measurement Error; Permanent Income Hypothesis
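    The instrumenting strategy rests on a simple identity: if two measures share the true consumption signal but have independent errors, their covariance recovers the variance of true consumption. A minimal sketch under those assumptions, with synthetic draws rather than the CEX samples:

        # Sketch: two error-ridden measures of log consumption; their
        # covariance identifies the true variance. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        c = rng.normal(0.0, 0.40, n)            # true log consumption
        x1 = c + rng.normal(0, 0.25, n)         # Interview-style measure
        x2 = c + rng.normal(0, 0.35, n)         # Diary-style measure

        print(f"naive var(x1) = {np.var(x1):.3f}")            # overstates inequality
        print(f"cov(x1, x2)   = {np.cov(x1, x2)[0, 1]:.3f}")  # ~ var(c)
        print(f"true var(c)   = {np.var(c):.3f}")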

    Characterization of Dwarf Novae Using SDSS Colors

    We have developed a method for estimating the orbital periods of dwarf novae from their Sloan Digital Sky Survey (SDSS) colors in quiescence using an artificial neural network. For typical objects below the period gap with sufficient photometric accuracy, we were able to estimate the orbital periods to a 1σ error of 22%. The estimation error is worse for systems with longer orbital periods. We have also developed a neural-network-based method for categorical classification. This method has proven efficient in classifying objects into three categories (WZ Sge type, SU UMa type, and SS Cyg/Z Cam type) and works for very faint objects down to a limit of g = 21. Using this method, we have investigated the distribution of the orbital periods of dwarf novae from a modern transient survey (the Catalina Real-Time Transient Survey). Using the Bayesian analysis developed by Uemura et al. (2010, arXiv:1003.0945), we find that the present sample gives a flatter distribution toward the shortest periods and a shorter estimate of the period minimum, which may result from uncertainties in the neural network analysis and photometric errors. We also provide estimated orbital periods, estimated classifications, and supplementary information on known dwarf novae with quiescent SDSS photometry.
    Comment: 70 pages, 7 figures, accepted for publication in PASJ, minor corrections
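    In the same spirit as the paper's ANN (though not its actual architecture or data), a small feed-forward regressor from quiescent colors to log orbital period might look like the following sketch, with a planted color-period trend replacing real SDSS photometry:

        # Sketch: neural-network regression from colors to log orbital period.
        # Synthetic colors with a planted trend -- not the paper's network.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        n = 2_000
        colors = rng.normal(size=(n, 4))        # u-g, g-r, r-i, i-z
        log_p = (-1.1 + 0.15 * colors[:, 1] + 0.05 * colors[:, 2]
                 + rng.normal(0, 0.05, n))      # planted relation + noise

        X_tr, X_te, y_tr, y_te = train_test_split(colors, log_p, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)

        resid = net.predict(X_te) - y_te        # residuals in log period
        print(f"1-sigma fractional period error ~ {np.std(10 ** resid - 1):.0%}")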