Using the EQ-5D as a performance measurement tool in the NHS
In a landmark move, the UK Department of Health (DH) is introducing the routine use of Patient Reported Outcome Measures (PROMs) as a means of measuring the performance of health care providers in improving patient health. From April 2009 all patients will be asked to complete both generic (EQ-5D) and condition-specific PROMs before and after surgery for four elective procedures; the intention is to extend this to a wide range of other NHS services. The aim of this paper is to report analysis of the EQ-5D data generated from a pilot study commissioned by the DH, and to consider the implications of the results for their use as performance indicators and measures of patient benefit. In the context of PROMs, the EQ-5D has the potential advantage of enabling comparisons of performance across services as well as between providers, and of facilitating assessments of the cost-effectiveness of NHS services. We present two new methods we have developed for analysing and displaying EQ-5D profile data: a Paretian Classification of Health Change, and a Health Profile Grid. Using these methods, we show that EQ-5D data can readily be used to generate useful insights into differences between providers in the overall health changes they achieve; the results also suggest striking differences in health changes between surgical procedures. We conclude by noting a number of issues that remain to be addressed in the use of PROMs data as a basis for performance indicators.
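The gist of a Paretian Classification of Health Change can be sketched as follows. This is a minimal illustration, not the authors' implementation, assuming the three-level EQ-5D coding in which each of the five dimensions is scored 1 (no problems) to 3 (severe problems), so a lower score is better health:

```python
def pchc(before, after):
    """Paretian classification of the change between two EQ-5D profiles.

    A patient has "improved" if at least one dimension got better and none
    got worse, "worsened" in the opposite case, "mixed" if some dimensions
    improved while others deteriorated, and "no change" otherwise.
    """
    better = any(a < b for b, a in zip(before, after))
    worse = any(a > b for b, a in zip(before, after))
    if better and worse:
        return "mixed"
    if better:
        return "improved"
    if worse:
        return "worsened"
    return "no change"

# Example: pain/discomfort improves (3 -> 1), other dimensions unchanged.
print(pchc((1, 1, 2, 3, 1), (1, 1, 2, 1, 1)))  # improved
```

Counting patients in each of these categories, per provider or per procedure, is what allows the before/after profile data to be compared without collapsing the five dimensions into a single index.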
Improving efficiency in radio surveys for gravitational lenses
Many lens surveys have hitherto used observations of large samples of
background sources to select the small minority which are multiply imaged by
lensing galaxies along the line of sight. Recently, surveys such as SLACS and
OLS have improved efficiency by pre-selecting double-redshift
systems from SDSS. We explore other ways to improve survey efficiency by
optimum use of astrometric and morphological information in existing
large-scale optical and radio surveys. The method exploits the small position
differences between FIRST radio positions of lensed images and the SDSS lens
galaxy positions, together with the marginal resolution of some larger
gravitational lens systems by the FIRST beam. We present results of a small
pilot study with the VLA and MERLIN, and discuss the desirable criteria for
future surveys.
Comment: Accepted by MNRAS. 9 pages, 5 figures.
Finding Optimal Flows Efficiently
Among the models of quantum computation, the One-way Quantum Computer is one
of the most promising proposals of physical realization, and opens new
perspectives for parallelization by taking advantage of quantum entanglement.
Since a one-way quantum computation is based on quantum measurement, which is a
fundamentally nondeterministic evolution, a sufficient condition of global
determinism has been introduced as the existence of a causal flow in a graph
that underlies the computation. An O(n^3) algorithm has been introduced for
finding such a causal flow when the numbers of output and input vertices in the
graph are equal; otherwise, no polynomial-time algorithm was known for deciding
whether a graph has a causal flow or not. Our main contribution is to introduce
an O(n^2) algorithm for finding a causal flow, if any, whatever the numbers of
input and output vertices are. This answers the open question stated by Danos
and Kashefi and by de Beaudrap. Moreover, we prove that our algorithm produces
an optimal flow (a flow of minimal depth).
Whereas the existence of a causal flow is a sufficient condition for
determinism, it is not a necessary condition. A weaker version of the causal
flow, called gflow (generalized flow) has been introduced and has been proved
to be a necessary and sufficient condition for a family of deterministic
computations. Moreover, the depth of the quantum computation is upper bounded by
the depth of the gflow. However, the existence of a polynomial-time algorithm
that finds a gflow has been stated as an open question. In this paper we answer
this positively with a polynomial time algorithm that outputs an optimal gflow
of a given graph and thus finds an optimal correction strategy to the
nondeterministic evolution due to measurements.
Comment: 10 pages, 3 figures.
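The causal-flow conditions of Danos and Kashefi can be checked directly. The sketch below only verifies a candidate flow rather than finding one (the O(n^2) construction above is more involved); `depth` stands in for the partial order, and all names are illustrative assumptions:

```python
def is_causal_flow(adj, f, depth):
    """Check the three causal-flow conditions for every measured
    (non-output) vertex v with correcting vertex f(v):
      (1) f(v) is a neighbour of v,
      (2) v strictly precedes f(v),
      (3) v precedes every neighbour of f(v) other than v itself.
    adj maps each vertex to its neighbour set; depth encodes the order."""
    for v, fv in f.items():
        if fv not in adj[v]:                 # condition (1)
            return False
        if not depth[v] < depth[fv]:         # condition (2)
            return False
        if any(depth[w] < depth[v]           # condition (3)
               for w in adj[fv] if w != v):
            return False
    return True

# A path graph 1-2-3 with input 1 and output 3: measure 1 then 2,
# correcting each measurement on the next vertex along the path.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(is_causal_flow(adj, {1: 2, 2: 3}, {1: 0, 2: 1, 3: 2}))  # True
```

The depth of the flow is then the number of distinct layers in `depth`, which is what the optimality result above minimises.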
NICMOS and VLBA observations of the gravitational lens system B1933+503
NICMOS observations of the complex gravitational lens system B1933+503 reveal
infrared counterparts to two of the inverted spectrum radio images. The
infrared images have arc-like structures. The corresponding radio images are
also detected in a VLBA map made at 1.7 GHz with a resolution of 6 mas. We fail
to detect two of the four inverted radio spectrum components with the VLBA even
though they are clearly visible in a MERLIN map at the same frequency at a
different epoch. The absence of these two components could be due to rapid
variability on a time-scale less than the time delay, or to broadening of the
images during propagation of the radio waves through the ISM of the lensing
galaxy to an extent that they fall below the surface brightness detectability
threshold of the VLBA observations. The failure to detect the same two images
with NICMOS is probably due to extinction in the ISM of the lensing galaxy.
Comment: 5 pages, 4 figures, submitted to MNRAS.
Biasing MCTS with Features for General Games
This paper proposes using a linear function approximator, rather than a deep
neural network (DNN), to bias a Monte Carlo tree search (MCTS) player for
general games. This is unlikely to match the potential raw playing strength of
DNNs, but has advantages in terms of generality, interpretability and resources
(time and hardware) required for training. Features describing local patterns
are used as inputs. The features are formulated in such a way that they are
easily interpretable and applicable to a wide range of general games, and might
encode simple local strategies. We gradually create new features during the
same self-play training process used to learn feature weights. We evaluate the
playing strength of an MCTS player biased by learnt features against a standard
upper confidence bounds for trees (UCT) player in multiple different board
games, and demonstrate significantly improved playing strength in the majority
of them after a small number of self-play training games.
Comment: Accepted at IEEE CEC 2019, Special Session on Games. Copyright of
final version held by IEEE.
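As a rough sketch of the idea of biasing MCTS selection with a linear function over local-pattern features (the exact selection formula, feature encoding, and names here are assumptions, not the paper's):

```python
import math

def biased_uct(q_sum, visits, parent_visits, features, weights,
               c=1.41, bias=1.0):
    """UCT selection value with an additive bias from a linear function
    over binary local-pattern features. The bias term decays as the child
    is visited, so search statistics dominate asymptotically. The decay
    schedule and constants are illustrative assumptions."""
    q = q_sum / visits if visits else 0.0
    explore = c * math.sqrt(math.log(parent_visits + 1) / (visits + 1))
    prior = sum(w * x for w, x in zip(weights, features))
    return q + explore + bias * prior / (visits + 1)

# An unvisited move whose local pattern fires a positively weighted
# feature gets a head start over an otherwise identical move.
print(biased_uct(0.0, 0, 0, [1, 0], [0.5, -0.2]))  # 0.5
```

Because the bias is a plain dot product, each weight can be read off directly as the learnt value of its pattern, which is the interpretability advantage over a DNN noted above.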
The Serendiptichord: Reflections on the collaborative design process between artist and researcher
The Serendiptichord is a wearable instrument, resulting from a collaboration crossing fashion, technology, music and dance. This paper reflects on the collaborative process and how defining both creative and research roles for each party led to a successful creative partnership built on mutual respect and open communication. After a brief snapshot of the instrument in performance, the instrument is considered within the context of dance-driven interactive music systems, followed by a discussion of the nature of the collaboration and its impact upon the design process and the final piece.
Learning Policies from Self-Play with Policy Gradients and MCTS Value Estimates
In recent years, state-of-the-art game-playing agents often involve policies
that are trained in self-playing processes where Monte Carlo tree search (MCTS)
algorithms and trained policies iteratively improve each other. The strongest
results have been obtained when policies are trained to mimic the search
behaviour of MCTS by minimising a cross-entropy loss. Because MCTS, by design,
includes an element of exploration, policies trained in this manner are also
likely to exhibit a similar extent of exploration. In this paper, we are
interested in learning policies for a project with future goals including the
extraction of interpretable strategies, rather than state-of-the-art
game-playing performance. For these goals, we argue that such an extent of
exploration is undesirable, and we propose a novel objective function for
training policies that are not exploratory. We derive a policy gradient
expression for maximising this objective function, which can be estimated using
MCTS value estimates, rather than MCTS visit counts. We empirically evaluate
various properties of the resulting policies in a variety of board games.
Comment: Accepted at the IEEE Conference on Games (CoG) 2019.
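The gist of training a softmax policy to maximise expected MCTS value estimates, rather than to mimic visit counts, can be sketched as follows; this is a simplified tabular illustration under assumed names, not the paper's exact objective or update:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def value_gradient_step(logits, q_values, lr=0.1):
    """One gradient-ascent step on J = sum_a pi(a) * Q(a), where Q(a) is
    an MCTS value estimate for action a. For a softmax policy,
    dJ/dlogit_a = pi(a) * (Q(a) - J), so probability mass shifts towards
    actions whose value estimate beats the policy's current expectation."""
    pi = softmax(logits)
    j = sum(p * q for p, q in zip(pi, q_values))
    return [l + lr * p * (q - j)
            for l, p, q in zip(logits, pi, q_values)]

# With Q = [1.0, 0.0] the first action's logit rises, the second's falls.
print(value_gradient_step([0.0, 0.0], [1.0, 0.0]))  # [0.025, -0.025]
```

Unlike a cross-entropy loss on visit counts, this objective has no incentive to keep probability on exploratory actions: in the limit it concentrates on the value-maximising move, which matches the non-exploratory policies the abstract argues for.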
You Can't Just Pick One Race and You Shouldn't Have To: Analyzing Biracial Protagonists in Young Adult Fiction
In recent years, Children’s Literature has seen a notable increase of texts involving racial diversity. One area that does not garner as much attention is Young Adult fiction with biracial protagonists. With the growing number of families in the United States identifying as biracial or multi-racial, it is important to examine the representations of biracial characters offered to readers. Poston has noted that biracial adolescents often experience crisis and alienation, as they are forced to choose an identity that does not fully encompass their complex racial background (Nuttgens 2010). Sarah Jamila Stevenson’s The Latte Rebellion (2011), Joan Steinau Lester’s Black, White, Other (2011), and Sandra Forrester’s Dust from Old Bones (1999) work against the idea that people should have to choose a racial identity that alienates them from a part of themselves, arguing instead that the world is not divided into stark identity categories, though recognizing racial differences remains of the utmost importance. The characters in these novels are able to navigate the channels of being biracial, while developing a sense of what Lourdes India Ivory calls “biracial competency” and “biracial efficacy,” allowing them to function successfully within both racial groups (Ivory 2010). This research analyzes authorial depictions of biracial characters, and the effect these depictions have on character identity development throughout the novels.
Can guidelines improve referral to elective surgical specialties for adults? A systematic review
Aim To assess the effectiveness of guidelines for referral
for elective surgical assessment.
Method Systematic review with descriptive synthesis.
Data sources Medline, EMBASE, CINAHL and Cochrane
database up to 2008. Hand searches of journals and
websites.
Selection of studies Studies evaluated guidelines for
referral from primary to secondary care, for elective
surgical assessment for adults.
Outcome measures Appropriateness of referral (usually
measured as guideline compliance) including clinical
appropriateness, appropriateness of destination and of
pre-referral management (eg, diagnostic investigations),
general practitioner knowledge of referral
appropriateness, referral rates, health outcomes and
costs.
Results 24 eligible studies (5 randomised controlled trials,
6 cohort, 13 case series) included guidelines from UK,
Europe, Canada and the USA for referral for
musculoskeletal, urological, ENT, gynaecology, general
surgical and ophthalmological conditions. Interventions
varied from complex (“one-stop shops”) to simple
guidelines. Four randomised controlled trials reported
increases in appropriateness of pre-referral care
(diagnostic investigations and treatment). No evidence
was found for effects on practitioner knowledge. Mixed
evidence was reported on rates of referral and costs
(rates and costs increased, decreased or stayed the
same). Two studies reported on health outcomes, finding
no change.
Conclusions Guidelines for elective surgical referral can
improve appropriateness of care by improving pre-referral
investigation and treatment, but there is no
strong evidence in favour of other beneficial effects.