A Deep Reinforcement Learning Approach to Rare Event Estimation
An important step in the design of autonomous systems is to evaluate the
probability that a failure will occur. In safety-critical domains, the failure
probability is extremely small, so evaluating a policy through Monte Carlo
sampling is inefficient. Adaptive importance sampling approaches have
been developed for rare event estimation but do not scale well to sequential
systems with long horizons. In this work, we develop two adaptive importance
sampling algorithms that can efficiently estimate the probability of rare
events for sequential decision making systems. The basis for these algorithms
is the minimization of the Kullback-Leibler divergence between a
state-dependent proposal distribution and a target distribution over
trajectories, but the resulting algorithms resemble policy gradient and
value-based reinforcement learning. We apply multiple importance sampling to
reduce the variance of our estimate and to address the issue of multi-modality
in the optimal proposal distribution. We demonstrate our approach on a control
task with both continuous and discrete action spaces and show accuracy
improvements over several baselines.
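The core rare-event trick behind the abstract above, reweighting samples drawn from a proposal distribution by the target/proposal density ratio, can be sketched in a few lines. This toy example uses a shifted Gaussian proposal to estimate a static tail probability; it is illustrative only and is not the paper's sequential, learned, state-dependent proposal:

```python
import math
import random

def is_estimate(n, threshold, proposal_shift):
    """Importance-sampling estimate of p = P(X > threshold) for X ~ N(0, 1),
    drawing from a shifted proposal N(proposal_shift, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(proposal_shift, 1.0)
        if x > threshold:
            # log importance weight: log target density - log proposal density
            log_w = 0.5 * (x - proposal_shift) ** 2 - 0.5 * x * x
            total += math.exp(log_w)
    return total / n

random.seed(0)
# Rare event: true value P(X > 4) ~= 3.17e-5; naive Monte Carlo with the
# same sample count would see almost no hits.
p_hat = is_estimate(50_000, 4.0, 4.0)
```

Placing the proposal mean at the threshold is the simplest stand-in for the KL-minimizing proposals the paper learns.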
A scoping review protocol for the use and analysis of patient-reported outcome measures in randomised controlled trials of the prostate: Version 1.0 (12.07.21)
Credibility perceptions of content contributors and consumers in social media
This panel addresses information credibility issues in the context of social media. During this panel, participants will discuss people's credibility perceptions of online content in social media from the perspectives of both content contributors and consumers. Each panelist will bring her own perspective on credibility issues in various social media, including Twitter (Morris), Wikipedia (Metzger; Francke), blogs (Rieh), and social Q&A (Jeon). This panel aims to flesh out multi‐disciplinary approaches to the investigation of credibility and discuss integrated conceptual frameworks and future research directions focusing on assessing and establishing credibility in social media.
Statistical and Health Economic analysis plan for the CHolinesterase Inhibitor to prEvent Falls in Parkinson’s Disease (CHIEF-PD) study: Version 1.0 (23.03.23)
Prostate Surgery for Men with Lower Urinary Tract Symptoms: Do We Need Urodynamics to Find the Right Candidates? Exploratory Findings from the UPSTREAM Trial
Background: Identifying men whose lower urinary tract symptoms (LUTS) may benefit from surgery is challenging.
Objective: To identify routine diagnostic and urodynamic measures associated with treatment decision-making, and outcome, in exploratory analyses of the UPSTREAM trial.
Design, setting, and participants: A randomised controlled trial was conducted including 820 men, considering surgery for LUTS, across 26 hospitals in England (ISRCTN56164274).
Intervention: Men were randomised to a routine care (RC) diagnostic pathway (n = 393) or a pathway that included urodynamics (UDS) in addition to RC (n = 427).
Outcome measurements and statistical analysis: Men underwent uroflowmetry and completed symptom questionnaires at baseline and 18 mo after randomisation. Regression models identified baseline clinical and symptom measures that predicted recommendation for surgery and/or surgical outcome (measured by the International Prostate Symptom Score [IPSS]). We explored the association between UDS and surgical outcome in subgroups defined by routine measures.
Results and limitations: The recommendation for surgery could be predicted successfully in the RC and UDS groups (area under the receiver operating characteristic curve 0.78), with maximum flow rate (Qmax) and age as predictors in both groups. Surgery was more beneficial in those with higher symptom scores (eg, IPSS >16), age 47.6, and bladder contractility index >123.0. In the UDS group, urodynamic measures were more strongly predictive of surgical outcome for those with Qmax >15, although patient-reported outcomes were also more predictive in this subgroup.
Conclusions: Treatment decisions were informed with UDS, when available, but without evidence of change in the decisions reached.
Despite the small group sizes, exploratory analyses suggest that selective use of UDS could detect obstructive pathology, missed by routine measures, in certain subgroups.
Patient summary: Baseline clinical and symptom measurements were able to predict treatment decisions. The addition of urodynamic test results, while useful, did not generally lead to better surgical decisions and outcomes over routine tests alone.
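The discrimination metric quoted in the abstract above (area under the receiver operating characteristic curve, 0.78) has a simple rank-based definition: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch, with entirely hypothetical model scores and surgery-recommendation labels (not trial data):

```python
def roc_auc(scores, labels):
    """Rank-based AUC: fraction of positive/negative pairs in which the
    positive case receives the higher score (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities and outcomes, for illustration only.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
auc = roc_auc(scores, labels)  # 12 of 16 pairs ranked correctly -> 0.75
```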
Monitoring Observations of the Jupiter-Family Comet 17P/Holmes during 2014 Perihelion Passage
We performed a monitoring observation of a Jupiter-Family comet, 17P/Holmes,
during its 2014 perihelion passage to investigate its secular change in
activity. The comet has drawn the attention of astronomers since its historic
outburst in 2007, and this occasion was its first perihelion passage since
then. We analyzed the obtained data using an aperture photometry package and
derived the Afrho parameter, a proxy for the dust production rate. We found
that Afrho showed asymmetric properties with respect to the perihelion passage:
it increased moderately from 100 cm at the heliocentric distance r_h=2.6-3.1 AU
to a maximal value of 185 cm at r_h = 2.2 AU (near the perihelion) during the
inbound orbit, while dropping rapidly to 35 cm at r_h = 3.2 AU during the
outbound orbit. We applied a model for characterizing dust production rates as
a function of r_h and found that the fractional active area of the cometary
nucleus had dropped from 20%-40% in 2008-2011 (around the aphelion) to
0.1%-0.3% in 2014-2015 (around the perihelion). This result suggests that a
dust mantle would have developed rapidly in only one orbital revolution around
the sun. Although a minor eruption was observed on UT 2015 January 26 at r_h =
3.0 AU, the areas excavated by the 2007 outburst would be covered with a layer
of dust (<~ 10 cm depth) which would be enough to insulate the subsurface ice
and to keep the nucleus in a state of low activity.
Comment: 25 pages, 6 figures, 2 tables, ApJ accepted on December 29, 201
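The Afrho parameter used above (A'Hearn et al.'s aperture-independent proxy for dust production, for a steady-state coma) is computed from the comet-to-Sun flux ratio in an aperture as Afrho = (2 r_h Delta)^2 F_comet / (rho F_sun), with r_h in au and Delta, rho in cm. A minimal sketch; every numerical input below is illustrative and not taken from the paper:

```python
AU_CM = 1.495978707e13  # 1 au in cm

def afrho_cm(rh_au, delta_cm, rho_cm, flux_ratio):
    """Afrho (cm) from aperture photometry.

    rh_au      : heliocentric distance in au (dimensionless in the formula)
    delta_cm   : geocentric distance in cm
    rho_cm     : aperture radius at the comet in cm
    flux_ratio : F_comet / F_sun measured in the same bandpass
    """
    return (2.0 * rh_au * delta_cm) ** 2 / rho_cm * flux_ratio

# Illustrative values only: r_h = 2.2 au, Delta = 1.5 au,
# rho = 10,000 km aperture, and an assumed flux ratio.
delta = 1.5 * AU_CM
rho = 1.0e9
af = afrho_cm(2.2, delta, rho, 2.0e-17)  # on the order of 1e2 cm
```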
Two-Qubit Gate Set Tomography with Fewer Circuits
Gate set tomography (GST) is a self-consistent and highly accurate method for
the tomographic reconstruction of a quantum information processor's quantum
logic operations, including gates, state preparations, and measurements.
However, GST's experimental cost grows exponentially with qubit number. For
characterizing even just two qubits, a standard GST experiment may contain tens
of thousands of circuits, making it prohibitively expensive for many platforms.
that, because GST experiments are massively overcomplete, many circuits can be
discarded. This dramatically reduces GST's experimental cost while still
maintaining GST's Heisenberg-like scaling in accuracy. We show how to exploit
the structure of GST circuits to determine which ones are superfluous. We
confirm the efficacy of the resulting experiment designs both through numerical
simulations and via the Fisher information of those designs. We also explore
the impact of these techniques on the prospects of three-qubit GST.
Comment: 46 pages, 13 figures. V2: Minor edits to acknowledgment
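The circuit-pruning idea above, keeping a circuit only while it adds information an overcomplete set does not already supply, can be caricatured with a greedy information-gain selection. This is a toy stand-in (a diagonal, per-parameter "Fisher information" with a diminishing-returns score), not GST's actual circuit structure or selection criterion:

```python
def greedy_select(candidates, n_params, budget):
    """Greedily pick designs that add the most Fisher information.

    candidates: list of dicts mapping parameter index -> per-shot Fisher
    information contributed by that design (a crude diagonal stand-in for
    a full Fisher information matrix).
    """
    info = [0.0] * n_params  # accumulated information per parameter
    chosen = []
    for _ in range(budget):
        # Diminishing-returns score: information aimed at already
        # well-constrained parameters counts for little, so redundant
        # (overcomplete) candidates score low.
        def gain(c):
            return sum(v / (1.0 + info[k]) for k, v in c.items())
        best = max(range(len(candidates)), key=lambda i: gain(candidates[i]))
        if gain(candidates[best]) <= 0:
            break
        chosen.append(best)
        for k, v in candidates[best].items():
            info[k] += v
    return chosen, info

# Three hypothetical circuits: the first two redundantly probe parameter 0,
# the third is the only one probing parameter 1.
cands = [{0: 4.0}, {0: 4.0}, {1: 1.0}]
chosen, info = greedy_select(cands, n_params=2, budget=2)
# The duplicate of circuit 0 is skipped in favour of the weaker but
# non-redundant circuit 2.
```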
Spurious Shear in Weak Lensing with LSST
The complete 10-year survey from the Large Synoptic Survey Telescope (LSST)
will image 20,000 square degrees of sky in six filter bands every few
nights, bringing the final survey depth to , with over 4 billion
well measured galaxies. To take full advantage of this unprecedented
statistical power, the systematic errors associated with weak lensing
measurements need to be controlled to a level similar to the statistical
errors.
This work is the first attempt to quantitatively estimate the absolute level
and statistical properties of the systematic errors on weak lensing shear
measurements due to the most important physical effects in the LSST system via
high fidelity ray-tracing simulations. We identify and isolate the different
sources of algorithm-independent, additive systematic errors on shear
measurements for LSST and predict their impact on the final cosmic shear
measurements using conventional weak lensing analysis techniques. We find that
the main source of the errors comes from an inability to adequately
characterise the atmospheric point spread function (PSF) due to its high
frequency spatial variation on angular scales smaller than in the
single short exposures, which propagates into a spurious shear correlation
function at the -- level on these scales. With the large
multi-epoch dataset that will be acquired by LSST, the stochastic errors
average out, bringing the final spurious shear correlation function to a level
very close to the statistical errors. Our results imply that the cosmological
constraints from LSST will not be severely limited by these
algorithm-independent, additive systematic effects.
Comment: 22 pages, 12 figures, accepted by MNRAS
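The averaging-down of stochastic PSF-induced shear errors claimed above follows the familiar 1/N scaling for independent exposures. A hedged toy simulation (a single Gaussian error per exposure; the per-exposure amplitude is illustrative and unrelated to the paper's measured values):

```python
import random

def spurious_shear_variance(n_epochs, sigma_per_exposure, n_trials=20_000):
    """Variance of the epoch-averaged spurious shear when each exposure
    contributes an independent Gaussian error of the given sigma."""
    total = 0.0
    for _ in range(n_trials):
        mean_shear = sum(random.gauss(0.0, sigma_per_exposure)
                         for _ in range(n_epochs)) / n_epochs
        total += mean_shear ** 2
    return total / n_trials

random.seed(1)
v1 = spurious_shear_variance(1, 0.01)      # single short exposure
v100 = spurious_shear_variance(100, 0.01)  # 100-epoch average
# Independent errors: the variance falls roughly as 1 / n_epochs.
```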