Wikipedia Usage Estimates Prevalence of Influenza-Like Illness in the United States in Near Real-Time
Circulating levels of both seasonal and pandemic influenza require constant surveillance to ensure the health and safety of the population. While up-to-date information is critical, traditional surveillance systems can have data availability lags of up to two weeks. We introduce a novel method of estimating, in near-real time, the level of influenza-like illness (ILI) in the United States (US) by monitoring the rate of particular Wikipedia article views on a daily basis. We calculated the number of times certain influenza- or health-related Wikipedia articles were accessed each day between December 2007 and August 2013 and compared these data to official ILI activity levels provided by the Centers for Disease Control and Prevention (CDC). We developed a Poisson model that accurately estimates the level of ILI activity in the American population, up to two weeks ahead of the CDC, with an average absolute difference between the two estimates of just 0.27% over 294 weeks of data. Wikipedia-derived ILI models performed well both during abnormally high media coverage events (such as the 2009 H1N1 pandemic) and during unusually severe influenza seasons (such as the 2012–2013 season). Wikipedia usage accurately estimated the week of peak ILI activity 17% more often than Google Flu Trends data and was often more accurate in its measure of ILI intensity. With further study, this method could be implemented for continuous monitoring of ILI activity in the US and could provide support for traditional influenza surveillance tools.
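A Poisson regression of this general shape can be sketched in a few lines. The fitting procedure and the synthetic features below are illustrative assumptions, not the authors' actual model specification:

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Fit a Poisson GLM (log link) by iteratively reweighted least squares.

    X: (n, d) design matrix, e.g. an intercept column plus daily view
    counts of selected Wikipedia articles; y: (n,) non-negative counts
    standing in for ILI activity.  Returns the coefficient vector.
    """
    # Warm start from a least-squares fit on log(y + 1).
    beta, *_ = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)
    for _ in range(iters):
        mu = np.exp(X @ beta)              # current fitted means
        z = X @ beta + (y - mu) / mu       # working response
        XtW = X.T * mu                     # Poisson weights: Var = mean
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic check: counts drawn from Poisson(exp(0.5 + 1.2 * x)).
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=2000)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))
beta = fit_poisson(X, y)                   # recovers roughly (0.5, 1.2)
```

In practice one would fit on historical CDC ILI data and then predict from the most recent article-view counts, which are available daily rather than with a two-week lag.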
Seeding systems and cropping trends in Saskatchewan results of a PFRA survey, 1997-2002
Non-Peer Reviewed. From 1997 to 2002, Agriculture & Agri-Food Canada’s (AAFC) PFRA Branch conducted a
survey of over 4000 annually cropped fields in Saskatchewan. Each year the same fields were
visited shortly after crop emergence to collect information on crop type, row spacing, opener
type, packing system, amount of previous crop residue, orientation of previous crop stubble, and
adoption of low soil disturbance seeding. Key results are the increasing trend toward lower soil
disturbance seeding, and the high incidence of pulse crops associated with low disturbance
seeding. In-depth analysis of trends on individual fields suggests that very few producers are able
to maintain low-disturbance seeding every year on the same field; some flexibility is therefore
required to allow for periodic soil disturbance to address issues such as perennial
weeds.
Strong, Weak and Branching Bisimulation for Transition Systems and Markov Reward Chains: A Unifying Matrix Approach
We first study labeled transition systems with explicit successful
termination. We establish the notions of strong, weak, and branching
bisimulation in terms of boolean matrix theory, thus introducing a novel and
powerful algebraic apparatus. Next we consider Markov reward chains which are
standardly presented in real matrix theory. By interpreting the obtained matrix
conditions for bisimulations in this setting, we automatically obtain the
definitions of strong, weak, and branching bisimulation for Markov reward
chains. The obtained strong and weak bisimulations are shown to coincide with
some existing notions, while the obtained branching bisimulation is new, but
its usefulness is questionable.
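For readers unfamiliar with the underlying notion, the strong bisimulation that such matrix conditions characterize can be computed on a small labelled transition system by ordinary partition refinement. The system below is a made-up example, and the code uses the conventional set-based formulation rather than the paper's boolean-matrix one:

```python
def strong_bisimulation(states, labels, trans):
    """Coarsest strong bisimulation of a labelled transition system.

    trans: set of (source, label, target) triples.  Returns a dict mapping
    each state to a block id; strongly bisimilar states share an id.
    """
    succ = {(s, a): frozenset(t for (s2, a2, t) in trans if s2 == s and a2 == a)
            for s in states for a in labels}
    block = {s: 0 for s in states}          # start with one block
    while True:
        # Signature of s: for each label, the set of blocks it can reach.
        sig = {s: tuple(frozenset(block[t] for t in succ[(s, a)]) for a in labels)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new = {s: ids[sig[s]] for s in states}
        if len(set(new.values())) == len(set(block.values())):
            return new                      # partition is stable
        block = new

# Example: s and t are strongly bisimilar; u is not (no 'b' transition).
states = ['s', 't', 'u', 'x']
labels = ['a', 'b']
trans = {('s', 'a', 'x'), ('s', 'b', 'x'),
         ('t', 'a', 'x'), ('t', 'b', 'x'),
         ('u', 'a', 'x')}
eq = strong_bisimulation(states, labels, trans)
```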
Nonlinear optical probe of tunable surface electrons on a topological insulator
We use ultrafast laser pulses to experimentally demonstrate that the
second-order optical response of bulk single crystals of the topological
insulator Bi2Se3 is sensitive to its surface electrons. By performing
surface doping dependence measurements as a function of photon polarization and
sample orientation we show that second harmonic generation can simultaneously
probe both the surface crystalline structure and the surface charge of
Bi2Se3. Furthermore, we find that second harmonic generation using
circularly polarized photons reveals the time-reversal symmetry properties of
the system and is surprisingly robust against surface charging, which makes it
a promising tool for spectroscopic studies of topological surfaces and buried
interfaces.
Value Iteration for Long-run Average Reward in Markov Decision Processes
Markov decision processes (MDPs) are standard models for probabilistic
systems with non-deterministic behaviours. Long-run average rewards provide a
mathematically elegant formalism for expressing long term performance. Value
iteration (VI) is one of the simplest and most efficient algorithmic approaches
to MDPs with other properties, such as reachability objectives. Unfortunately,
a naive extension of VI does not work for MDPs with long-run average rewards,
as there is no known stopping criterion. In this work our contributions are
threefold. (1) We refute a conjecture related to stopping criteria for MDPs
with long-run average rewards. (2) We present two practical algorithms for MDPs
with long-run average rewards based on VI. First, we show that a combination of
applying VI locally for each maximal end-component (MEC) and VI for
reachability objectives can provide approximation guarantees. Second, extending
the above approach with a simulation-guided on-demand variant of VI, we present
an anytime algorithm that is able to deal with very large models. (3) Finally,
we present experimental results showing that our methods significantly
outperform the standard approaches on several benchmarks.
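As background, the classic relative value iteration on which such extensions build can be sketched as follows. The toy MDP and the span-based stopping rule are textbook material under a unichain-style assumption, not the paper's MEC-decomposition or simulation-guided algorithms:

```python
import numpy as np

def relative_value_iteration(P, r, eps=1e-9, max_iter=10_000):
    """Relative value iteration for the long-run average reward (gain).

    P[a] is the |S|x|S| transition matrix of action a; r[a] its reward
    vector.  Iterates Bellman backups and stops when the span of the
    value differences is below eps; the gain then lies within that span.
    """
    nS = P[0].shape[0]
    v = np.zeros(nS)
    for _ in range(max_iter):
        # Bellman backup: best immediate reward plus expected value.
        w = np.max([r[a] + P[a] @ v for a in range(len(P))], axis=0)
        d = w - v
        if d.max() - d.min() < eps:          # span stopping criterion
            return (d.max() + d.min()) / 2
        v = w - w[0]                          # renormalise to avoid blow-up
    raise RuntimeError("did not converge")

# Toy MDP: in state 0 we may 'stay' (reward 1) or move to state 1
# (reward 0); state 1 self-loops with reward 2.  Optimal gain is 2.
P = [np.array([[1.0, 0.0], [0.0, 1.0]]),    # action 0: stay put
     np.array([[0.0, 1.0], [0.0, 1.0]])]    # action 1: go to state 1
r = [np.array([1.0, 2.0]),
     np.array([0.0, 2.0])]
gain = relative_value_iteration(P, r)        # converges to 2.0
```

The paper's point is precisely that this span criterion has no guaranteed bound in general MDPs, which motivates the MEC-local treatment.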
A Quantitative Information Flow Analysis of the Topics API
Third-party cookies have been a privacy concern since cookies were first
developed in the mid 1990s, but more strict cookie policies were only
introduced by Internet browser vendors in the early 2010s. More recently, due
to regulatory changes, browser vendors have started to completely block
third-party cookies, with both Firefox and Safari already compliant.
The Topics API is being proposed by Google as an additional and less
intrusive source of information for interest-based advertising (IBA), following
the upcoming deprecation of third-party cookies. Initial results published by
Google estimate that the probability of a correct re-identification of a random
individual would be below 3% while still supporting IBA.
In this paper, we analyze the re-identification risk for individual Internet
users introduced by the Topics API from the perspective of Quantitative
Information Flow (QIF), an information- and decision-theoretic framework. Our
model allows a theoretical analysis of both privacy and utility aspects of the
API and their trade-off, and we show that the Topics API does have better
privacy than third-party cookies. We leave the utility analyses for future
work.
Comment: WPES '23 (to appear).
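The basic QIF quantities involved in such an analysis, prior and posterior Bayes vulnerability and their ratio (the multiplicative Bayes leakage), can be computed directly from a channel matrix. The channel below is an arbitrary illustration, not a model of the Topics API:

```python
import numpy as np

def bayes_vulnerability(prior):
    """Probability that an adversary guesses the secret in one try."""
    return prior.max()

def posterior_vulnerability(prior, C):
    """Expected one-guess success probability after seeing the output.

    C[x, y] = P(output y | secret x); each row of C must sum to 1.
    """
    joint = prior[:, None] * C           # joint[x, y] = P(x, y)
    return joint.max(axis=0).sum()       # sum over y of max_x P(x, y)

# Hypothetical 3-secret, 2-output channel.
prior = np.array([0.5, 0.25, 0.25])
C = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
Vprior = bayes_vulnerability(prior)               # 0.5
Vpost = posterior_vulnerability(prior, C)         # 0.75
leakage = Vpost / Vprior                          # 1.5
```

A leakage of 1 means the mechanism reveals nothing to a one-try guesser; comparing such leakages is one way to substantiate a claim like "the Topics API has better privacy than third-party cookies".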
Bloch Equations and Completely Positive Maps
The phenomenological dissipation of the Bloch equations is reexamined in the
context of completely positive maps. Such maps occur if the dissipation arises
from a reduction of a unitary evolution of a system coupled to a reservoir. In
such a case the reduced dynamics for the system alone will always yield
completely positive maps of the density operator. We show that, for Markovian
Bloch maps, the requirement of complete positivity imposes some Bloch
inequalities on the phenomenological damping constants. For non-Markovian Bloch
maps some kind of Bloch inequalities involving eigenvalues of the damping basis
can be established as well. As an illustration of these general properties we
use the depolarizing channel with white and colored stochastic noise.
Comment: Talk given at the Conference "Quantum Challenges", Falenty, Poland,
September 4-7, 2003. 21 pages, 3 figures.
A novel analysis of utility in privacy pipelines, using Kronecker products and quantitative information flow
We combine Kronecker products, and quantitative information flow, to give a
novel formal analysis for the fine-grained verification of utility in complex
privacy pipelines. The combination explains a surprising anomaly in the
behaviour of utility in privacy-preserving pipelines -- that sometimes a
reduction in privacy also results in a decrease in utility. We use the standard
measure of utility for Bayesian analysis, introduced by Ghosh et al., to
produce tractable and rigorous proofs of the fine-grained statistical behaviour
leading to the anomaly. More generally, we offer the prospect of
formal-analysis tools for utility that complement extant formal analyses of
privacy. We demonstrate our results on a number of common privacy-preserving
designs.
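The role of the Kronecker product can be illustrated on two mechanisms run independently on the two components of a secret pair: their parallel composition is the Kronecker product of the channel matrices, and under independent uniform priors Bayes vulnerability factorizes across the components. The channels below are arbitrary, and posterior Bayes vulnerability stands in for Ghosh et al.'s consumer-loss measure for brevity:

```python
import numpy as np

def posterior_vulnerability(prior, C):
    """Expected one-guess success probability over a channel (QIF)."""
    joint = prior[:, None] * C
    return joint.max(axis=0).sum()

# Two hypothetical two-secret mechanisms.
C1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
C2 = np.array([[0.7, 0.3],
               [0.3, 0.7]])

# Parallel composition on the secret pair: Kronecker product.
C = np.kron(C1, C2)                       # 4 secrets x 4 observations
prior = np.full(4, 0.25)                  # uniform prior on the pair
V = posterior_vulnerability(prior, C)

# The composite vulnerability factorizes over the components.
V1 = posterior_vulnerability(np.full(2, 0.5), C1)
V2 = posterior_vulnerability(np.full(2, 0.5), C2)
# V == V1 * V2
```

This factorization is what makes the Kronecker structure useful: fine-grained properties of a long pipeline can be derived component by component instead of by analysing one enormous channel matrix.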
Flexible and scalable privacy assessment for very large datasets, with an application to official governmental microdata
We present a systematic refactoring of the conventional treatment of privacy
analyses, basing it on mathematical concepts from the framework of Quantitative
Information Flow (QIF). The approach we suggest brings three principal
advantages: it is flexible, allowing for precise quantification and comparison
of privacy risks for attacks both known and novel; it can be computationally
tractable for very large, longitudinal datasets; and its results are
explainable both to politicians and to the general public. We apply our
approach to a very large case study: the Educational Censuses of Brazil,
curated by the governmental agency INEP, which comprise over 90 attributes of
approximately 50 million individuals released longitudinally every year since
2007. These datasets have only very recently (2018-2021) attracted legislation
to regulate their privacy -- while at the same time continuing to maintain the
openness that had been sought in Brazilian society. INEP's reaction to that
legislation was the genesis of our project with them. In our conclusions here
we share the scientific, technical, and communication lessons we learned in the
process.
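One common ingredient of such an assessment is the fraction of records that are unique on a set of quasi-identifiers, since a unique record is re-identifiable by an adversary who knows those attributes. A sketch on synthetic records (the schema is invented, not INEP's):

```python
from collections import Counter

def uniqueness_risk(records, quasi_ids):
    """Fraction of records whose quasi-identifier combination is unique."""
    keys = [tuple(r[q] for q in quasi_ids) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

# Synthetic toy microdata: two records share a quasi-identifier
# combination, two are unique, so half the records are at risk.
records = [
    {"birth_year": 1990, "municipality": "A", "sex": "F"},
    {"birth_year": 1990, "municipality": "A", "sex": "F"},
    {"birth_year": 1985, "municipality": "B", "sex": "M"},
    {"birth_year": 1972, "municipality": "C", "sex": "F"},
]
risk = uniqueness_risk(records, ["birth_year", "municipality", "sex"])
```

A single pass with a counter like this scales linearly in the number of records, which matters for longitudinal datasets of tens of millions of individuals.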