Response to Professor Olsen's Commentary on "Random Sampling – Is It Worth It?"
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/94518/1/ppe12019.pd
Implementing Provider-based Sampling for the National Children's Study: Opportunities and Challenges
Background: The National Children's Study (NCS) was established as a national probability sample of births to prospectively study children's health from in utero to age 21. The primary sampling unit was 105 study locations (typically a county). The secondary sampling unit was the geographic unit (segment), but this was subsequently perceived to be an inefficient strategy. Methods and Results: This paper proposes that second-stage sampling using prenatal care providers is an efficient and cost-effective method for deriving a national probability sample of births in the US. It offers a rationale for provider-based sampling and discusses a number of strategies for assembling a sampling frame of providers. Also presented are special challenges to provider-based sampling of pregnancies, including optimising key sample parameters, retaining geographic diversity, determining the types of providers to include in the sample frame, recruiting women who do not receive prenatal care, and using community engagement to enrol women. There will also be substantial operational challenges to sampling provider groups. Conclusion: We argue that probability sampling is mandatory to capture the full variation in exposure and outcomes expected in a national cohort study, to provide valid and generalisable risk estimates, and to accurately estimate policy (such as screening) benefits from associations reported in the NCS.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/94504/1/ppe12005.pd
How Well Does Microsolvation Represent Macrosolvation? A Test Case: Dynamics of Decarboxylation of 4-Pyridylacetic Acid Zwitterion
New Models for Large Prospective Studies: Is There a Risk of Throwing Out the Baby With the Bathwater?
Manolio et al. (Am J Epidemiol. 2012;175:859-866) proposed that large cohort studies adopt novel models using temporary assessment centers to enroll up to a million participants to answer research questions about rare diseases and to harmonize clinical endpoints collected from administrative records. Extreme selection bias, we are told, will not harm internal validity, and process expertise to maximize the efficiency of high-throughput operations is as important as scientific rigor (p. 861). In this article, we describe serious deficiencies in this model as applied to the United States. Key points include: 1) the need for more, not less, specification of disease endpoints; 2) the limited utility of data collected from existing administrative and clinical databases; and 3) the value of university-based centers in providing scientific expertise and achieving high recruitment and retention rates through community and healthcare provider engagement. Careful definition of sampling frames and high response rates are crucial to avoid bias and to ensure inclusion of important subpopulations, especially the medically underserved. Prospective hypotheses are essential to refine study design, determine sample size, develop pertinent data collection protocols, and achieve alliances with participants and communities. It is premature to reject the strengths of large national cohort studies in favor of a new model for which evidence of efficiency is insufficient.