Error-free milestones in error-prone measurements
A predictor variable or dose that is measured with substantial error may
possess an error-free milestone, such that it is known with negligible error
whether the value of the variable is to the left or right of the milestone.
Such a milestone provides a basis for estimating a linear relationship between
the true but unknown value of the error-free predictor and an outcome, because
the milestone creates a strong and valid instrumental variable. The inferences
are nonparametric and robust, and in the simplest cases, they are exact and
distribution free. We also consider multiple milestones for a single predictor
and milestones for several predictors whose partial slopes are estimated
simultaneously. Examples are drawn from the Wisconsin Longitudinal Study, in
which a BA degree acts as a milestone for sixteen years of education, and the
binary indicator of military service acts as a milestone for years of service.
Comment: Published at http://dx.doi.org/10.1214/08-AOAS233 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
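The milestone logic can be sketched with a toy simulation. This is a minimal sketch under invented assumptions: a hypothetical linear model in which the BA milestone (16 years of education) is the instrument, and the simple Wald (grouping) estimator recovers the slope that a naive regression on the error-prone predictor attenuates. All data and parameter values below are illustrative, not taken from the Wisconsin Longitudinal Study.

```python
import random

random.seed(0)

# Hypothetical model: true education x_true, observed with substantial
# error as x_obs, outcome y linear in x_true with slope 2.0 (assumed).
n = 20000
x_true = [random.gauss(12, 3) for _ in range(n)]
x_obs = [x + random.gauss(0, 2) for x in x_true]          # error-prone measurement
y = [1.0 + 2.0 * x + random.gauss(0, 1) for x in x_true]  # outcome

# Milestone: a BA degree tells us, with negligible error, whether true
# education is at least 16 years -- which side of the milestone we are on.
z = [1 if x >= 16 else 0 for x in x_true]

def mean(v):
    return sum(v) / len(v)

# Naive least-squares slope on the error-prone predictor is attenuated.
mx, my = mean(x_obs), mean(y)
naive = (sum((xi - mx) * (yi - my) for xi, yi in zip(x_obs, y)) /
         sum((xi - mx) ** 2 for xi in x_obs))

# Wald (grouping) estimator: the milestone indicator is the instrument.
y1 = mean([yi for yi, zi in zip(y, z) if zi == 1])
y0 = mean([yi for yi, zi in zip(y, z) if zi == 0])
x1 = mean([xi for xi, zi in zip(x_obs, z) if zi == 1])
x0 = mean([xi for xi, zi in zip(x_obs, z) if zi == 0])
wald = (y1 - y0) / (x1 - x0)

print(f"naive slope: {naive:.2f}")  # attenuated below the true slope 2.0
print(f"Wald  slope: {wald:.2f}")   # near the true slope 2.0
```

Because the measurement error is independent of which side of the milestone the true value falls on, it cancels from both the numerator and denominator of the Wald ratio, while the naive slope is shrunk toward zero.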
Isolation in the construction of natural experiments
A natural experiment is a type of observational study in which treatment
assignment, though not randomized by the investigator, is plausibly close to
random. A process that assigns treatments in a highly nonrandom, inequitable
manner may, in rare and brief moments, assign aspects of treatments at random
or nearly so. Isolation is a tool that focuses on those rare, brief moments
and aspects, extracting a small natural experiment from a setting in which
treatment assignment is otherwise quite biased, far from random.
We discuss the theory behind isolation and illustrate its use in a reanalysis
of a well-known study of the effects of fertility on workforce participation.
Whether a woman becomes pregnant at a certain moment in her life and whether
she brings that pregnancy to term may reflect her aspirations for family,
education and career, the degree of control she exerts over her fertility, and
the quality of her relationship with the father; moreover, these aspirations
and relationships are unlikely to be recorded with precision in surveys and
censuses, and they may confound studies of workforce participation. However,
given that a woman is pregnant and will bring the pregnancy to term, whether
she will have twins or a single child is, to a large extent, simply luck. Given
that a woman is pregnant at a certain moment, the differential comparison of
two types of pregnancies on workforce participation, twins or a single child,
may be close to randomized, not biased by unmeasured aspirations. In this
comparison, we find in our case study that mothers of twins had more children
but only slightly reduced workforce participation, approximately 5% less time
at work for an additional child.
Comment: Published at http://dx.doi.org/10.1214/14-AOAS770 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
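A toy simulation may help make the differential comparison concrete. The data-generating process, effect size, and confounder below are invented for illustration, not taken from the case study: an unmeasured "aspiration" variable confounds family size and work, but a twin birth is pure luck, so comparing twin mothers with singleton mothers isolates the effect of one extra child.

```python
import random

random.seed(1)

# Hypothetical population of women who are pregnant and will carry to term.
n = 50000
aspiration = [random.gauss(0, 1) for _ in range(n)]             # unmeasured confounder
twins = [1 if random.random() < 0.02 else 0 for _ in range(n)]  # simply luck

# Aspirations influence family size; a twin birth adds one child.
children = [2 + (a > 0) + t for a, t in zip(aspiration, twins)]
# Time-at-work index: each extra child reduces it by 0.05 (the assumed
# effect), and aspirations also affect it directly (the confounding).
work = [0.80 - 0.05 * (c - 2) - 0.10 * a + random.gauss(0, 0.05)
        for c, a in zip(children, aspiration)]

def mean(v):
    return sum(v) / len(v)

# Differential comparison: twin mothers versus singleton mothers.
work_twin = mean([w for w, t in zip(work, twins) if t == 1])
work_single = mean([w for w, t in zip(work, twins) if t == 0])
kids_twin = mean([c for c, t in zip(children, twins) if t == 1])
kids_single = mean([c for c, t in zip(children, twins) if t == 0])

effect = (work_twin - work_single) / (kids_twin - kids_single)
print(f"estimated effect per extra child: {effect:.3f}")  # near -0.05
```

A naive comparison of women with more versus fewer children would absorb the aspiration term; the twin comparison does not, because twinning is independent of aspiration.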
Cross-screening in observational studies that test many hypotheses
We discuss observational studies that test many causal hypotheses, either
hypotheses about many outcomes or many treatments. To be credible, an
observational study that tests many causal hypotheses must demonstrate that its
conclusions are artifacts neither of multiple testing nor of small biases from
nonrandom treatment assignment. In a sense that needs to be defined carefully,
hidden within a sensitivity analysis for nonrandom assignment is an enormous
correction for multiple testing: in the absence of bias, it is extremely
improbable that multiple testing alone would create an association insensitive
to moderate biases. We propose a new strategy called "cross-screening",
different from but motivated by recent work of Bogomolov and Heller on
replicability. Cross-screening splits the data in half at random, uses the
first half to plan a study carried out on the second half, then uses the second
half to plan a study carried out on the first half, and reports the more
favorable conclusions of the two studies, correcting via the Bonferroni
inequality for having done two studies. If the two studies happen to concur,
then they achieve Bogomolov-Heller replicability; however, importantly,
replicability is not required for strong control of the family-wise error rate,
and either study alone suffices for firm conclusions. In randomized studies
with a few hypotheses, cross-screening is not an attractive method when
compared with conventional methods of multiplicity control, but it can become
attractive when hundreds or thousands of hypotheses are subjected to
sensitivity analyses in an observational study. We illustrate the technique by
comparing 46 biomarkers in individuals who consume large quantities of fish
versus little or no fish.
Comment: 33 pages, 2 figures, 5 tables
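The split-plan-test logic can be sketched in miniature. This sketch makes several simplifying assumptions: a plain one-sided sign test stands in for the paper's sensitivity analyses, the biomarker data are simulated, and planning is reduced to "keep the five most promising hypotheses," an arbitrary illustrative choice.

```python
import random
import math

random.seed(2)

def sign_test_p(diffs):
    """One-sided sign test: P(at least this many positive signs by chance)."""
    pos = sum(d > 0 for d in diffs)
    n = sum(d != 0 for d in diffs)
    return sum(math.comb(n, k) for k in range(pos, n + 1)) / 2 ** n

# Simulated study: 40 biomarkers, 200 matched-pair differences
# (fish eater minus non-eater); only biomarkers 0-2 have a real shift.
n_pairs, n_bio = 200, 40
data = [[random.gauss(0.6 if b < 3 else 0.0, 1.0) for _ in range(n_pairs)]
        for b in range(n_bio)]

# Cross-screening: split the matched pairs in half at random.
idx = list(range(n_pairs))
random.shuffle(idx)
half_a, half_b = idx[:n_pairs // 2], idx[n_pairs // 2:]

def screen_then_test(plan_half, test_half, alpha=0.05):
    # Plan: keep the 5 most promising biomarkers on the planning half...
    planned = sorted(range(n_bio),
                     key=lambda b: sign_test_p([data[b][i] for i in plan_half]))[:5]
    # ...then test only those on the other half, Bonferroni-correcting
    # for the 5 planned tests and for having run two studies.
    return {b for b in planned
            if sign_test_p([data[b][i] for i in test_half]) < alpha / 2 / len(planned)}

# Each half plans for the other; report the union of the two studies.
rejected = screen_then_test(half_a, half_b) | screen_then_test(half_b, half_a)
print("biomarkers rejected:", sorted(rejected))
```

Splitting buys a large reduction in the multiplicity correction: each half tests only the few hypotheses its partner half flagged, at alpha divided by two studies times the planned count, rather than correcting for all 40 hypotheses at once.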
Standardized field testing of assistant robots in a Mars-like environment
Controlled testing on standard tasks and within standard environments can provide meaningful performance comparisons between robots of heterogeneous design. But because they must perform practical tasks in unstructured, and therefore non-standard, environments, the benefits of this approach have barely begun to accrue for field robots. This work describes a desert trial of six student prototypes of astronaut-support robots using a set of standardized engineering tests developed by the US National Institute of Standards and Technology (NIST), along with three operational tests in natural Mars-like terrain. The results suggest that standards developed for emergency-response robots are also applicable to the astronaut-support domain, yielding useful insights into the robots' differing capabilities and leading to real design improvements. The exercise shows the value of combining repeatable engineering tests with task-specific application testing in the field.
Stronger instruments via integer programming in an observational study of late preterm birth outcomes
In an optimal nonbipartite match, a single population is divided into matched
pairs to minimize a total distance within matched pairs. Nonbipartite matching
has been used to strengthen instrumental variables in observational studies of
treatment effects, essentially by forming pairs that are similar in terms of
covariates but very different in the strength of encouragement to accept the
treatment. Optimal nonbipartite matching is typically done using network
optimization techniques that can be quick, running in polynomial time, but
these techniques limit the tools available for matching. Instead, we use
integer programming techniques, thereby obtaining a wealth of new tools not
previously available for nonbipartite matching, including fine and near-fine
balance for several nominal variables, forced near balance on means and optimal
subsetting. We illustrate the methods in our on-going study of outcomes of
late-preterm births in California, that is, births of 34 to 36 weeks of
gestation. Would lengthening the time in the hospital for such births reduce
the frequency of rapid readmissions? A straightforward comparison of babies who
stay for a shorter or longer time would be severely biased, because the
principal reason for a long stay is some serious health problem. We need an
instrument, something inconsequential and haphazard that encourages a shorter
or a longer stay in the hospital. It turns out that babies born at certain
times of day tend to stay overnight once with a shorter length of stay, whereas
babies born at other times of day tend to stay overnight twice with a longer
length of stay, and there is nothing particularly special about a baby who is
born at 11:00 pm.
Comment: Published at http://dx.doi.org/10.1214/12-AOAS582 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
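The core idea of pairing units that are close on covariates but far apart on the instrument can be sketched in miniature. The brute-force pairing below is a stand-in for the network-optimization and integer-programming solvers the study actually uses (it cannot express fine balance or optimal subsetting), and the toy data and penalty weight are hypothetical.

```python
# Hypothetical toy data: each baby is (covariate, instrument), where the
# instrument is the encouragement toward a longer stay (e.g. hour of birth).
babies = [(1.0, 0.1), (1.1, 0.9), (3.0, 0.2), (3.2, 0.8),
          (5.0, 0.0), (5.1, 1.0), (7.0, 0.3), (7.3, 0.7)]

def distance(i, j, penalty=2.0):
    """Small when covariates are close; a bonus (subtracted penalty term)
    rewards pairs that differ sharply on the instrument."""
    (xi, zi), (xj, zj) = babies[i], babies[j]
    return abs(xi - xj) - penalty * abs(zi - zj)

def min_weight_pairing(items):
    """Brute-force optimal nonbipartite matching over one population.
    Fine for tiny n; real studies need network or integer-programming
    solvers for this NP-easy but combinatorially large problem."""
    if not items:
        return 0.0, []
    first, rest = items[0], items[1:]
    best = None
    for k, partner in enumerate(rest):
        cost, pairs = min_weight_pairing(rest[:k] + rest[k + 1:])
        cost += distance(first, partner)
        if best is None or cost < best[0]:
            best = (cost, [(first, partner)] + pairs)
    return best

total, pairs = min_weight_pairing(list(range(len(babies))))
print("matched pairs:", pairs)  # each covariate cluster pairs low-z with high-z
```

With the penalty term, the optimal pairing matches each pair of babies with similar covariates but opposite encouragement, which is exactly what strengthens the instrument.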
Elementary teachers’ perception of digital resources based on student achievement
Many factors impact teachers’ decisions about when and how to implement technology during instruction. However, a gap exists in understanding teachers’ motivations for technology integration versus face-to-face instruction. Therefore, this qualitative case study explored how teachers’ perceptions of student achievement, motivation, classroom behaviors, and digital challenges influenced their decisions about using technology or direct instruction in the classroom setting. A group of 20 teachers from two southern Florida public elementary schools completed anonymous Likert-scale surveys; six teachers participated in semi-structured interviews. The findings, derived via descriptive statistics and thematic analysis, revealed that teachers’ inclusion of technology and traditional resources is influenced by their perceptions of students’ achievement, motivation, behavior, and technology challenges during instruction. To increase technology inclusion, teachers stressed the importance of a balanced and ethical learning experience that promotes students’ achievement. Participants indicated that to increase teachers’ technology inclusion, greater focus must be placed on resources that enhance students’ learning and achievement rather than on student motivation, behavior, and technology challenges.
Greater Sage-Grouse and Community Responses to Strategies to Mitigate Environmental Resistance in an Anthropogenically Altered Sagebrush Landscape
Sagebrush (Artemisia spp.) ecosystems are diverse habitats found throughout western North America. Anthropogenic disturbance has resulted in the loss of over half of these ecosystems, impacting sagebrush-obligate species such as sage-grouse (Centrocercus spp.). Federal, state, and private land managers have implemented landscape-scale mechanical pinyon (Pinus spp.) and juniper (Juniperus spp.; conifer) removal projects in an effort to restore functioning sagebrush communities to benefit sage-grouse. However, few studies have investigated the potential for using large-scale conifer treatments to mitigate factors impeding sage-grouse seasonal movements and space use in anthropogenically altered landscapes.
To address this management need, I analyzed pre- and post-treatment vegetation composition data and annual changes in percent cover for known conifer treatments completed from 2008 to 2014 in Box Elder County, Utah, USA. I developed a multivariate generalized linear regression model that predicts future landscape conditions for sage-grouse and projects tree canopy cover, approximating observed cover values for known treated plots at the time of treatment and five years post-treatment.
Next, I analyzed five different management scenarios to predict resource selection by greater sage-grouse (Centrocercus urophasianus) in response to changes in habitat following conifer treatments. I used a Relative Selection Strength (RSS) framework to quantify the net habitat gain from 2017 to 2023. My top ranked treatment scenario showed net habitat gains across all categories.
Additionally, I investigated the efficacy of global positioning system (GPS) and very high frequency (VHF) transmitters used in range-wide studies. I compared mortality rates for two separate Utah populations. Across summer and winter for sex, and across spring, summer, and winter for age, I documented higher mortality for sage-grouse marked with GPS transmitters.
Lastly, to assess stakeholders’ perceptions of contemporary community-based conservation efforts, I conducted a case study in fall 2019 of the West Box Elder Coordinated Resource Management (CRM). Respondents reported that participation by federal and state agencies was paramount for funding and program structure, that trust has been enhanced, and that landowner involvement is necessary for long-term stability and persistence.