Mean Field Voter Model of Election to the House of Representatives in Japan
In this study, we propose a mechanical model of a plurality election based on a mean field voter model. We assume that there are three candidates in each electoral district, i.e., one from the ruling party, one from the main opposition party, and one from the other political parties. The voters are classified as fixed supporters and herding (floating) voters in given proportions. Fixed supporters make decisions based on their own information, whereas herding voters make the same choice as another randomly selected voter. The equilibrium vote-share probability density of herding voters follows a Dirichlet distribution. We estimate the composition of fixed supporters in each electoral district using data from elections to the House of Representatives in Japan (43rd to 47th). The spatial inhomogeneity of fixed supporters explains the long-range spatial and temporal correlations. The estimated values are close to the estimates obtained from a survey.
Comment: 11 pages, 7 figures
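The herding dynamic described in this abstract is easy to simulate. The sketch below is a toy illustration, not the authors' code: it assumes fixed supporters split evenly across the three candidates, and at each step one randomly chosen herding voter copies the choice of a uniformly random voter (the mean-field limit of the voter model):

```python
import random

def herding_election(n_fixed, n_herding, n_candidates=3, steps=5000, seed=1):
    """Toy mean-field voter dynamics: fixed supporters never change
    their choice; each step, one herding voter copies the choice of a
    uniformly random voter drawn from the whole electorate."""
    rng = random.Random(seed)
    # assumption: fixed supporters split evenly across candidates
    fixed = [i % n_candidates for i in range(n_fixed)]
    herding = [rng.randrange(n_candidates) for _ in range(n_herding)]
    for _ in range(steps):
        i = rng.randrange(n_herding)
        herding[i] = rng.choice(fixed + herding)
    total = n_fixed + n_herding
    return [(fixed + herding).count(c) / total for c in range(n_candidates)]
```

Raising the share of herding voters makes the final vote shares fluctuate more between runs, which is the qualitative behaviour that the Dirichlet equilibrium distribution in the abstract captures.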
Modeling Human Ad Hoc Coordination
Whether in groups of humans or groups of computer agents, collaboration is
most effective between individuals who have the ability to coordinate on a
joint strategy for collective action. However, in general a rational actor will
only intend to coordinate if that actor believes the other group members have
the same intention. This circular dependence makes rational coordination
difficult in uncertain environments if communication between actors is
unreliable and no prior agreements have been made. An important normative
question with regard to coordination in these ad hoc settings is therefore how
one can come to believe that other actors will coordinate, and with regard to
systems involving humans, an important empirical question is how humans arrive
at these expectations. We introduce an exact algorithm for computing the
infinitely recursive hierarchy of graded beliefs required for rational
coordination in uncertain environments, and we introduce a novel mechanism for
multiagent coordination that uses it. Our algorithm is valid in any environment
with a finite state space, and extensions to certain countably infinite state
spaces are likely possible. We test our mechanism for multiagent coordination
as a model for human decisions in a simple coordination game using existing
experimental data. We then explore via simulations whether modeling humans in
this way may improve human-agent collaboration.
Comment: AAAI 201
Stigmergy-based modeling to discover urban activity patterns from positioning data
Positioning data offer a remarkable source of information for analyzing the
urban dynamics of crowds. However, discovering urban activity patterns from the emergent
behavior of crowds involves complex system modeling. An alternative approach is
to adopt computational techniques belonging to the emergent paradigm, which
enables self-organization of data and allows adaptive analysis. Specifically,
our approach is based on stigmergy. By using stigmergy each sample position is
associated with a digital pheromone deposit, which progressively evaporates and
aggregates with other deposits according to their spatiotemporal proximity.
Based on this principle, we exploit positioning data to identify high density
areas (hotspots) and characterize their activity over time. This
characterization allows the comparison of dynamics occurring in different days,
providing a similarity measure exploitable by clustering techniques. Thus, we
cluster days according to their activity behavior, discovering unexpected urban
activity patterns. As a case study, we analyze taxi traces in New York City
during 2015.
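The deposit-evaporate-aggregate cycle at the core of the stigmergy approach can be sketched in a few lines. This is a minimal illustration under assumptions of my own (a discrete grid of cells and a constant per-step evaporation factor); it is not the authors' implementation:

```python
from collections import defaultdict

def stigmergy_density(samples, evaporation=0.9):
    """Aggregate positioning samples into a pheromone field.
    samples: iterable of (t, cell) observations with integer time steps.
    Each sample deposits one unit of pheromone in its cell; all existing
    deposits evaporate by a constant factor per elapsed time step, so
    deposits that are close in space and time aggregate into hotspots."""
    field = defaultdict(float)
    last_t = None
    for t, cell in sorted(samples):
        if last_t is not None:
            decay = evaporation ** (t - last_t)
            for c in field:
                field[c] *= decay
        field[cell] += 1.0
        last_t = t
    return dict(field)
```

A hotspot is then simply a cell whose accumulated intensity exceeds a threshold, and comparing intensity profiles across days gives the similarity measure used for clustering.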
Growth regulation of primary human keratinocytes by prostaglandin E receptor EP2 and EP3 subtypes
We examined the contribution of specific EP receptors in regulating cell growth. By RT–PCR and Northern hybridization, adult human keratinocytes express mRNA for three PGE2 receptor subtypes associated with cAMP signaling (EP2, EP3, and small amounts of EP4). In actively growing, non-confluent primary keratinocyte cultures, the EP2- and EP4-selective agonists, 11-deoxy PGE1 and 1-OH PGE1, caused complete reversal of indomethacin-induced growth inhibition. The EP3/EP2 agonist (misoprostol) and the EP1/EP2 agonist (17-phenyl trinor PGE2) showed less activity. Similar results were obtained with agonist-induced cAMP formation. The ability of exogenous dibutyryl cAMP to completely reverse indomethacin-induced growth inhibition supports the conclusion that growth stimulation occurs via an EP2 and/or EP4 receptor-adenylyl cyclase coupled response. In contrast, activation of EP3 receptors by sulprostone, which is virtually devoid of agonist activity at EP2 or EP4 receptors, inhibited bromodeoxyuridine uptake in indomethacin-treated cells by up to 30%. Although human EP3 receptor variants have been shown in other cell types to markedly inhibit cAMP formation via a pertussis toxin-sensitive mechanism, EP3 receptor activation, and presumably growth inhibition, was independent of adenylyl cyclase, suggesting activation of other signaling pathways.
VIENA2: A Driving Anticipation Dataset
Action anticipation is critical in scenarios where one needs to react before
the action is finalized. This is, for instance, the case in automated driving,
where a car needs to, e.g., avoid hitting pedestrians and respect traffic
lights. While solutions have been proposed to tackle subsets of the driving
anticipation tasks, by making use of diverse, task-specific sensors, there is
no single dataset or framework that addresses them all in a consistent manner.
In this paper, we therefore introduce a new, large-scale dataset, called
VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct
action classes. It contains more than 15K full HD, 5s long videos acquired in
various driving conditions, weather, times of day, and environments, complemented
with a common and realistic set of sensor measurements. This amounts to more
than 2.25M frames, each annotated with an action label, corresponding to 600
samples per action class. We discuss our data acquisition strategy and the
statistics of our dataset, and benchmark state-of-the-art action anticipation
techniques, including a new multi-modal LSTM architecture with an effective
loss function for action anticipation in driving scenarios.
Comment: Accepted in ACCV 201
Stochastic parareal: an application of probabilistic methods to time-parallelisation
Parareal is a well-studied algorithm for numerically integrating systems of
time-dependent differential equations by parallelising the temporal domain.
Given approximate initial values at each temporal sub-interval, the algorithm
locates a solution in a fixed number of iterations using a predictor-corrector,
stopping once a tolerance is met. This iterative process combines solutions
located by inexpensive (coarse resolution) and expensive (fine resolution)
numerical integrators. In this paper, we introduce a stochastic parareal
algorithm with the aim of accelerating the convergence of the deterministic
parareal algorithm. Instead of providing the predictor-corrector with a
deterministically located set of initial values, the stochastic algorithm
samples initial values from dynamically varying probability distributions in
each temporal sub-interval. All samples are then propagated by the numerical
method in parallel. The initial values yielding the most continuous (smoothest)
trajectory across consecutive sub-intervals are chosen as the new, more
accurate, set of initial values. These values are fed into the
predictor-corrector, converging in fewer iterations than the deterministic
algorithm with a given probability. The performance of the stochastic
algorithm, implemented using various probability distributions, is illustrated
on systems of ordinary differential equations. When the number of sampled
initial values is large enough, we show that stochastic parareal converges
almost certainly in fewer iterations than the deterministic algorithm while
maintaining solution accuracy. Additionally, it is shown that the expected
value of the convergence rate decreases with increasing numbers of samples.
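For context, the deterministic predictor-corrector that the stochastic variant builds on can be sketched as follows. This toy version (explicit Euler for both coarse and fine solvers, applied to dy/dt = -y) is an illustration of the classic parareal update, not the authors' implementation; in the stochastic version, the initial values fed into the update would instead be the smoothest of several sampled candidates:

```python
import math

def coarse(y, t0, t1):
    """Cheap solver: a single explicit Euler step for dy/dt = -y."""
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, substeps=100):
    """Expensive solver: many Euler substeps over the same interval."""
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + h * (-y)
    return y

def parareal(y0, T, N, iterations):
    """Classic parareal on [0, T] split into N temporal sub-intervals."""
    ts = [T * i / N for i in range(N + 1)]
    U = [y0]                                   # initial coarse prediction
    for n in range(N):
        U.append(coarse(U[n], ts[n], ts[n + 1]))
    for _ in range(iterations):
        # fine runs are independent, hence parallelisable
        F = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]
        G_old = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
        newU = [y0]
        for n in range(N):
            # predictor-corrector update:
            # U_{n+1} <- G(newU_n) + F(U_n) - G(U_n)
            newU.append(coarse(newU[n], ts[n], ts[n + 1]) + F[n] - G_old[n])
        U = newU
    return U
```

A known property of this update is finite termination: after k iterations the first k sub-interval values match the serial fine solution, so the iteration count (here, versus the deterministic baseline) is the quantity the stochastic sampling aims to reduce.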
Prevalence of presenting bilateral visual impairment associated with refractive error – findings from the See4School, pre-school vision screening programme in NHS Scotland
Background/objectives
The See4School programme in Scotland is a pre-school vision screening initiative delivered by orthoptists on a national scale. The primary objective of any vision screening programme is to identify amblyopia, given the common understanding that this condition is unlikely to be detected either at home or through conventional healthcare channels. The target condition is not bilateral visual impairment, as it is believed that most children will be identified within the first year of life either through observations at home or as part of the diagnosis of another related disorder. This belief persists even though bilateral visual impairment is likely to have a more detrimental impact on a child’s day-to-day life, including their education. If this hypothesis were accurate, the occurrence of bilateral visual impairment detected through the Scottish vision screening programme would be minimal as children already under the hospital eye service are not invited for testing. The overarching aim of this study was therefore to determine the prevalence of presenting bilateral visual impairment associated with refractive error detected via the Scottish preschool screening programme.
Subjects/methods
Retrospective anonymised data from vision screening referrals in Scotland from 2013–2016 were collected. Children underwent an assessment using a crowded logMAR vision test and a small number of orthoptic tests.
Results
During the 3-year period, out of 165,489 eligible children, 141,237 (85.35%) received the vision screening assessment. Among them, 27,010 (19.12%) failed at least one part of the screening and were subsequently referred into the diagnostic pathway, where they received a full sight test. The prevalence of bilateral visual impairment associated with refractive error and detected via the vision screening programme (≥ 0.3LogMAR) was reported to range between 1.47% (1.37–1.59) and 2.42% (2.29–2.57).
Conclusions
It is estimated that up to 2.42% (2.29–2.57) of children living in Scotland have poorer than driving-standard vision (6/12) in their pre-school year, primarily due to undetected refractive error. Reduced vision has the potential to impact a child’s day-to-day life, including their future educational, health and social outcomes.
Prevalence of Presenting Bilateral Visual Impairment (PBVI) associated with refractive error – Findings from the See4School, Pre-school Vision Screening Program in NHS Scotland
Introduction: The See4School programme in Scotland is a pre-school vision screening initiative delivered by orthoptists on a national scale. The primary objective of this programme is to identify common visual conditions such as refractive error, amblyopia, strabismus and binocular vision defects.
Methods: Retrospective anonymised data from vision screening referrals in Scotland from 2013–2016 were collected. Children underwent an assessment using a crowded logMAR vision test and a small number of orthoptic tests.
Results: During the 3-year period, out of 165,489 eligible children, 141,237 (85.34%) received the vision screening assessment. Among them, 27,010 (19.12%) failed at least one part of the screening and were subsequently referred into the diagnostic pathway, where they received a full sight test. The prevalence of bilateral visual impairment (≥0.3LogMAR), ranged between 1.47% (1.37-1.59) and 2.42% (2.29-2.57).
Discussion: It is estimated that up to 2.42% (2.29–2.57) of children living in Scotland have poorer than driving-standard vision (6/12) in their pre-school year. Reduced vision has the potential to impact a child’s day-to-day life, including their future educational, health and social outcomes.
Unique in the shopping mall: On the reidentifiability of credit card metadata
Large-scale data sets of human behavior have the potential to fundamentally transform the way we fight diseases, design cities, or perform research. Metadata, however, contain sensitive information. Understanding the privacy of these data sets is key to their broad use and, ultimately, their impact. We study 3 months of credit card records for 1.1 million people and show that four spatiotemporal points are enough to uniquely reidentify 90% of individuals. We show that knowing the price of a transaction increases the risk of reidentification by 22%, on average. Finally, we show that even data sets that provide coarse information at any or all of the dimensions provide little anonymity and that women are more reidentifiable than men in credit card metadata.
European Commission, Framework Programme 7 (Marie Curie Action, Grant 264994); U.S. Army Research Laboratory (Cooperative Agreement W911NF-09-2-0053); Belgian American Educational Foundation, Inc.; Wallonie-Bruxelles International
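The unicity measure behind the "four points reidentify 90%" result reduces to a subset test: a user is reidentifiable from p known points if no other trace contains all p of them. The sketch below is a toy estimator on hypothetical data; the function name and the traces are illustrative, not the study's pipeline:

```python
import random

def unicity(traces, p, trials=200, seed=0):
    """Estimate the fraction of users uniquely identified by p
    randomly drawn (location, time) points from their own trace.
    traces: dict mapping user id -> set of (location, time) tuples."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(trials):
        u = rng.choice(users)
        pts = set(rng.sample(sorted(traces[u]), p))
        # u is re-identified if no other trace contains all p points
        if not any(pts <= traces[v] for v in users if v != u):
            unique += 1
    return unique / trials

# hypothetical traces: three users with partially overlapping visits
traces = {
    "a": {(1, 9), (2, 10), (3, 11), (4, 12)},
    "b": {(1, 9), (2, 10), (5, 13), (6, 14)},
    "c": {(7, 9), (8, 10), (9, 11), (10, 12)},
}
```

On this toy data, a single point often matches several users, while four points pin each user down, mirroring (at tiny scale) how unicity grows with the number of known points.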
GParareal: A time-parallel ODE solver using Gaussian process emulation
Sequential numerical methods for integrating initial value problems (IVPs)
can be prohibitively expensive when high numerical accuracy is required over
the entire interval of integration. One remedy is to integrate in a parallel
fashion, "predicting" the solution serially using a cheap (coarse) solver and
"correcting" these values using an expensive (fine) solver that runs in
parallel on a number of temporal subintervals. In this work, we propose a
time-parallel algorithm (GParareal) that solves IVPs by modelling the
correction term, i.e. the difference between fine and coarse solutions, using a
Gaussian process emulator. This approach compares favourably with the classic
parareal algorithm and we demonstrate, on a number of IVPs, that GParareal can
converge in fewer iterations than parareal, leading to an increase in parallel
speed-up. GParareal also manages to locate solutions to certain IVPs where
parareal fails and has the additional advantage of being able to use archives
of legacy solutions, e.g. solutions from prior runs of the IVP for different
initial conditions, to further accelerate convergence of the method --
something that existing time-parallel methods do not do.
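The core idea, regressing the fine-minus-coarse correction term with a Gaussian process, can be illustrated with a minimal hand-rolled GP (RBF kernel, posterior mean only). The helper names, the kernel hyperparameters, and the dense solve are assumptions made for illustration; this is not the GParareal implementation:

```python
import math

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential kernel between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / lengthscale) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(x_train, y_train, x_new, noise=1e-6):
    """GP posterior mean at x_new: k_*^T (K + noise*I)^{-1} y."""
    n = len(x_train)
    K = [[rbf(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y_train)
    return sum(rbf(x_new, x_train[i]) * alpha[i] for i in range(n))
```

In the GParareal setting, x_train would hold previously seen initial values, y_train the observed corrections F(u) - G(u) at those values (including any archived legacy solves), and the posterior mean supplies the correction at new initial values without extra fine-solver runs.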