Crowdsourcing accurately and robustly predicts Supreme Court decisions
Scholars have increasingly investigated "crowdsourcing" as an alternative to
expert-based judgment or purely data-driven approaches to predicting the
future. Under certain conditions, scholars have found that crowdsourcing can
outperform these other approaches. However, despite interest in the topic and a
series of successful use cases, relatively few studies have applied empirical
model thinking to evaluate the accuracy and robustness of crowdsourcing in
real-world contexts. In this paper, we offer three novel contributions. First,
we explore a dataset of over 600,000 predictions from over 7,000 participants
in a multi-year tournament to predict the decisions of the Supreme Court of the
United States. Second, we develop a comprehensive crowd construction framework
that allows for the formal description and application of crowdsourcing to
real-world data. Third, we apply this framework to our data to construct more
than 275,000 crowd models. We find that in out-of-sample historical
simulations, crowdsourcing robustly outperforms the commonly-accepted null
model, yielding the highest-known performance for this context at 80.8% case-level
accuracy. To our knowledge, this dataset and analysis represent one of
the largest explorations of recurring human prediction to date, and our results
provide additional empirical support for the use of crowdsourcing as a
prediction method.
Comment: 11 pages, 5 figures, 4 tables; preprint for public feedback
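The abstract does not spell out the aggregation rule behind the crowd models; a minimal sketch of one obvious crowd construction, assuming simple majority voting over binary affirm/reverse forecasts scored against an "always reverse" null model, is shown below. All function names and toy data are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of crowd aggregation for binary case outcomes
# (affirm = 0, reverse = 1). The paper's crowd construction framework is
# richer; this only illustrates majority voting against an
# "always reverse" null model.
from collections import Counter

def majority_vote(predictions):
    """Aggregate individual forecasts (list of 0/1) into one crowd call."""
    counts = Counter(predictions)
    return 1 if counts[1] >= counts[0] else 0  # ties go to "reverse"

def accuracy(predicted, actual):
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Toy data: three cases, each with a handful of participant predictions.
case_predictions = {
    "case_a": [1, 1, 0, 1],
    "case_b": [0, 0, 1],
    "case_c": [1, 0, 0, 0, 1],
}
actual_outcomes = {"case_a": 1, "case_b": 0, "case_c": 1}

cases = sorted(case_predictions)
crowd_calls = [majority_vote(case_predictions[c]) for c in cases]
null_calls = [1 for _ in cases]  # null model: always predict "reverse"
truth = [actual_outcomes[c] for c in cases]

print("crowd accuracy:", accuracy(crowd_calls, truth))
print("null accuracy: ", accuracy(null_calls, truth))
```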
A General Approach for Predicting the Behavior of the Supreme Court of the United States
Building on developments in machine learning and prior work in the science of
judicial prediction, we construct a model designed to predict the behavior of
the Supreme Court of the United States in a generalized, out-of-sample context.
To do so, we develop a time evolving random forest classifier which leverages
some unique feature engineering to predict more than 240,000 justice votes and
28,000 case outcomes over nearly two centuries (1816-2015). Using only data
available prior to decision, our model outperforms null (baseline) models at
both the justice and case level under both parametric and non-parametric tests.
Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level
and 71.9% at the justice vote level. More recently, over the past century, we
outperform an in-sample optimized null model by nearly 5%. Our performance is
consistent with, and improves on the general level of prediction demonstrated
by prior work; however, our model is distinctive because it can be applied
out-of-sample to the entire past and future of the Court, not a single term.
Our results represent an important advance for the science of quantitative
legal prediction and portend a range of other potential applications.
Comment: version 2.02; 18 pages, 5 figures. This paper is related to but
distinct from arXiv:1407.6333, and the results herein supersede
arXiv:1407.6333. Source code available at
https://github.com/mjbommar/scotus-predict-v
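The authors' actual implementation lives in the linked repository; the sketch below only illustrates the general shape of a time-evolving classifier that is re-fit for each term on data available prior to decision, using scikit-learn's RandomForestClassifier. The column names, walk-forward scheme, and hyperparameters are assumptions, not the paper's feature engineering.

```python
# Illustrative walk-forward ("time evolving") random forest: for each term,
# fit only on earlier terms and predict the current term out-of-sample.
# df is expected to be a pandas DataFrame; column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def walk_forward_accuracy(df: pd.DataFrame, feature_cols,
                          label_col="outcome", term_col="term"):
    scores = []
    for term in sorted(df[term_col].unique())[1:]:  # need at least one prior term
        train = df[df[term_col] < term]             # only data available before decision
        test = df[df[term_col] == term]
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        clf.fit(train[feature_cols], train[label_col])
        scores.append(accuracy_score(test[label_col],
                                     clf.predict(test[feature_cols])))
    return sum(scores) / len(scores)
```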
The Brightening of Re50N: Accretion Event or Dust Clearing?
The luminous Class I protostar HBC 494, embedded in the Orion A cloud, is
associated with a pair of reflection nebulae, Re50 and Re50N, which appeared
sometime between 1955 and 1979. We have found that a dramatic brightening of
Re50N has taken place sometime between 2006 and 2014. This could result if the
embedded source is undergoing a FUor eruption. However, the near-infrared
spectrum shows a featureless very red continuum, in contrast to the strong CO
bandhead absorption displayed by FUors. Such heavy veiling, together with the
high luminosity of the protostar, is indicative of strong accretion but seemingly
not in the manner of typical FUors. We favor the alternative explanation that
the major brightening of Re50N and the simultaneous fading of Re50 are caused by
curtains of obscuring material that cast patterns of illumination and shadows
across the surface of the molecular cloud. This is likely occurring as an
outflow cavity surrounding the embedded protostar breaks through to the surface
of the molecular cloud. Several Herbig-Haro objects are found in the region.
Comment: 8 pages, accepted by Ap
A test of the risk allocation hypothesis: tadpole responses to temporal change in predation risk
The risk allocation hypothesis predicts that temporal variation in predation risk can influence how animals allocate feeding behavior among situations that differ in danger. We tested the risk allocation model with tadpoles of the frog Rana lessonae, which satisfy the main assumptions of this model because they must feed to reach metamorphosis within a single season, their behavioral defense against predators is costly, and they can respond to changes in risk integrated over time. Our experiment switched tadpoles between artificial ponds with different numbers of caged dragonfly larvae and held them at high and low risk for different portions of their lives. Tadpoles responded strongly to predators, but they did not obey the risk allocation hypothesis: as the high-risk environment became more dangerous, there was no tendency for tadpoles to allocate more feeding to the low-risk environment, and as tadpoles spent more time at risk, they did not increase feeding in both environments. Our results suggest that the model might be more applicable when the time spent under high predation risk is large relative to the time required to collect resources.
Real-Time Adherence Monitoring for HIV Antiretroviral Therapy
Current adherence assessments typically detect missed doses long after they occur. Real-time, wireless monitoring strategies for antiretroviral therapy may provide novel opportunities to proactively prevent virologic rebound and treatment failure. Wisepill, a wireless pill container that transmits a cellular signal when opened, was pilot tested in ten Ugandan individuals for 6 months. Adherence levels measured by Wisepill, unannounced pill counts, and self-report were compared with each other, prior standard electronic monitoring, and HIV RNA. Wisepill data were initially limited by battery life and signal transmission interruptions. Following device improvements, continuous data were achieved with median (interquartile range) adherence levels of 93% (87–97%) by Wisepill, 100% (99–100%) by unannounced pill count, 100% (100–100%) by self-report, and 92% (79–98%) by prior standard electronic monitoring. Four individuals developed transient, low-level viremia. After overcoming technical challenges, real-time adherence monitoring is feasible in resource-limited settings and may detect suboptimal adherence prior to viral rebound.
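The abstract reports adherence as a percentage over the monitoring period. As a rough illustration, assuming a once-daily regimen and adherence defined as the fraction of days with at least one container opening (the exact Wisepill definition is not given here), the calculation could look like the following; the dates are invented.

```python
# Hypothetical adherence calculation from real-time device-opening events:
# fraction of once-daily doses with at least one container opening that day.
from datetime import date, timedelta

def adherence(openings, start, end):
    """openings: iterable of datetime.date on which the container was opened."""
    opened_days = set(openings)
    n_days = (end - start).days + 1
    taken = sum((start + timedelta(days=i)) in opened_days for i in range(n_days))
    return taken / n_days

events = [date(2011, 1, 1), date(2011, 1, 2), date(2011, 1, 4)]
print(f"{adherence(events, date(2011, 1, 1), date(2011, 1, 5)):.0%}")  # 60%
```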
Nanoscale structuring of tungsten tip yields most coherent electron point-source
This report demonstrates the most spatially-coherent electron source ever
reported. A coherence angle of 14.3 +/- 0.5 degrees was measured, indicating a
virtual source size of 1.7 +/- 0.6 Angstrom using an extraction voltage of 89.5
V. The nanotips under study were crafted using a spatially-confined,
field-assisted nitrogen etch which removes material from the periphery of the
tip apex resulting in a sharp, tungsten-nitride stabilized, high-aspect ratio
source. The coherence properties are deduced from holographic measurements in a
low-energy electron point source microscope with a carbon nanotube bundle as
sample. Using the virtual source size and emission current, the brightness
normalized to 100 kV is found to be 7.9x10^8 A/(sr cm^2).
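The quoted virtual source size follows from the coherence angle and the electron wavelength at the extraction voltage. As a back-of-the-envelope check, assuming the common coherence-based estimate d ≈ λ/(πθ) and the non-relativistic wavelength λ ≈ 12.26 Å/√V (the paper's exact expressions may differ), the stated ~1.7 Angstrom is reproduced:

```python
# Rough check of the quoted virtual source size, assuming d = lambda / (pi * theta);
# this is a common coherence-based estimate, not necessarily the paper's formula.
import math

V = 89.5                                    # extraction voltage in volts
theta = math.radians(14.3)                  # measured coherence angle
wavelength = 12.26 / math.sqrt(V)           # non-relativistic electron wavelength, Angstrom
d_virtual = wavelength / (math.pi * theta)  # estimated virtual source diameter, Angstrom
print(f"lambda ~ {wavelength:.2f} A, d_virtual ~ {d_virtual:.1f} A")  # ~1.7 A
```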
Just how difficult can it be counting up R&D funding for emerging technologies (and is tech mining with proxy measures going to be any better?)
Decision makers considering policy or strategy related to the development of emerging technologies expect high quality data on the support for different technological options. A natural starting point would be R&D funding statistics. This paper explores the limitations of such aggregated data in relation to the substance and quantification of funding for emerging technologies.
Using biotechnology as an illustrative case, we test the utility of a novel taxonomy to demonstrate the endemic weaknesses in the availability and quality of data from public and private sources. Using the same taxonomy, we consider the extent to which tech-mining presents an alternative, or potentially complementary, way to determine support for emerging technologies using proxy measures such as patents and scientific publications.
