Polynomial-time Computation of Exact Correlated Equilibrium in Compact Games
In a landmark paper, Papadimitriou and Roughgarden described a
polynomial-time algorithm ("Ellipsoid Against Hope") for computing sample
correlated equilibria of concisely-represented games. Recently, Stein, Parrilo
and Ozdaglar showed that this algorithm can fail to find an exact correlated
equilibrium, but can be easily modified to efficiently compute approximate
correlated equilibria. Currently, it remains unresolved whether the algorithm
can be modified to compute an exact correlated equilibrium. We show that it
can, presenting a variant of the Ellipsoid Against Hope algorithm that
guarantees the polynomial-time identification of an exact correlated equilibrium.
Our new algorithm differs from the original primarily in its use of a
separation oracle that produces cuts corresponding to pure-strategy profiles.
As a result, we no longer face the numerical precision issues encountered by
the original approach, and both the resulting algorithm and its analysis are
considerably simplified. Our new separation oracle can be understood as a
derandomization of Papadimitriou and Roughgarden's original separation oracle
via the method of conditional probabilities. Also, the equilibria returned by
our algorithm are distributions with polynomial-sized supports, which are
simpler (in the sense of being representable in fewer bits) than the mixtures
of product distributions produced previously; no tractable algorithm has
previously been proposed for identifying such equilibria.
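As an illustration of the derandomization via the method of conditional probabilities mentioned above, here is a minimal sketch. The function names are mine, and the conditional expectation is evaluated by brute-force enumeration, which is only feasible for tiny games; the actual algorithm relies on the game's compact structure to evaluate these expectations in polynomial time.

```python
import itertools
from typing import Callable, List, Sequence

def derandomize_profile(f: Callable[[tuple], float],
                        mixed: List[Sequence[float]]) -> tuple:
    """Method of conditional probabilities, illustrative only.

    Given f with E_{s ~ product(mixed)}[f(s)] <= 0, fix one player's
    action at a time so that the conditional expectation stays <= 0,
    ending at a pure-strategy profile s* with f(s*) <= 0.
    """
    def cond_expectation(prefix: List[int]) -> float:
        # Expectation of f with the first len(prefix) players fixed
        # (brute-force enumeration over the remaining players).
        rest = mixed[len(prefix):]
        total = 0.0
        for tail in itertools.product(*[range(len(p)) for p in rest]):
            prob = 1.0
            for strategy, action in zip(rest, tail):
                prob *= strategy[action]
            total += prob * f(tuple(prefix) + tail)
        return total

    fixed: List[int] = []
    for i in range(len(mixed)):
        # Some action must keep the conditional expectation <= its current
        # value, since that value is a convex combination over the actions.
        fixed.append(min(range(len(mixed[i])),
                         key=lambda a: cond_expectation(fixed + [a])))
    return tuple(fixed)

# Toy check: two players, two actions each; E[f] = 0.125 - 0.875 = -0.75 <= 0.
mixed = [[0.5, 0.5], [0.25, 0.75]]
f = lambda s: 1.0 if s == (0, 0) else -1.0
print(derandomize_profile(f, mixed))  # a pure profile with f(s*) <= 0
```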
Machine learning for discovering laws of nature
A microscopic particle obeys the principles of quantum mechanics -- so where
is the sharp boundary between the macroscopic and microscopic worlds? It was
this "interpretation problem" that prompted Schr\"odinger to propose his famous
thought experiment (a cat that is simultaneously both dead and alive) and
sparked a great debate about the quantum measurement problem, and there is
still no satisfactory answer yet. This is precisely the inadequacy of rigorous
mathematical models in describing the laws of nature. We propose a
computational model to describe and understand the laws of nature based on
Darwin's natural selection. In fact, whether it is a macroscopic particle, a
microscopic electron, or a security, each can be considered an entity whose
change over time can be described by a data series composed of states and
values. An observer can learn from this data series to construct theories
(usually consisting of functions and differential equations). We do not model
with the usual functions or differential equations, but with a state Decision
Tree (which determines the state of an entity) and a value Function Tree (which
determines the distance between two points of an entity). A state Decision Tree and a
value Function Tree together can reconstruct an entity's trajectory and make
predictions about its future trajectory. Our proposed algorithmic model
discovers laws of nature by learning only from observed historical data
(sequential measurements of observables), based on maximizing the observer's
expected value. There is no differential equation in our model; the emphasis is
on machine learning, in which the observer builds up experience by being
rewarded or punished for each decision and eventually rediscovers Newton's
law, the Born rule (quantum mechanics), and the efficient market hypothesis
(financial markets).
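For concreteness, here is a minimal sketch of how an entity's (state, value) series might be learned with two trees and rolled forward to predict a trajectory. The feature construction, the scikit-learn tree models, and the toy data below are assumptions for illustration, not the paper's actual state Decision Tree and value Function Tree.

```python
# Illustrative sketch only: learn an entity's (state, value) series with two
# trees and roll them forward.  All modeling choices here are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Observed history: a value that oscillates between 0 and 2, with a discrete
# state recording whether the value is rising (1) or falling (0).
states = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
values = np.array([0., 1., 2., 1., 0., 1., 2., 1., 0., 1.])

# Features: the current (state, value) pair; targets: next state and value change.
X = np.column_stack([states[:-1], values[:-1]])
next_state = states[1:]
value_step = values[1:] - values[:-1]

state_tree = DecisionTreeClassifier(max_depth=3).fit(X, next_state)  # plays the role of a "state Decision Tree"
value_tree = DecisionTreeRegressor(max_depth=3).fit(X, value_step)   # plays the role of a "value Function Tree"

# Roll the learned trees forward to predict a future trajectory.
s, v = int(states[-1]), float(values[-1])
trajectory = []
for _ in range(5):
    x = np.array([[s, v]])
    s = int(state_tree.predict(x)[0])
    v = v + float(value_tree.predict(x)[0])
    trajectory.append((s, v))
print(trajectory)  # continues the learned oscillation
```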
Research Brief: The Role of Tasks and Skills in Explaining the Disability Pay Gap
A disparity in pay exists between workers with and without disabilities. This gap persists even in analyses that control for a variety of factors and incorporate compensation benefits other than wages and salaries. To better understand the underlying sources of these differences, occupation-level data on employee skill and task requirements are considered. Evaluating the earnings gap with this additional information provides insights regarding the economic returns to certain workplace tasks and skills that may contribute to the earnings gap that we observe for people with disabilities.
From Data Fusion to Knowledge Fusion
The task of data fusion is to identify the true values of data items
(e.g., the true date of birth for Tom Cruise) among multiple observed
values drawn from different sources (e.g., Web sites) of varying (and unknown)
reliability. A recent survey [LDL+12] has provided a detailed comparison of
various fusion methods on Deep Web data. In this paper, we study the
applicability and limitations of different fusion techniques on a more
challenging problem: knowledge fusion. Knowledge fusion identifies true
subject-predicate-object triples extracted by multiple information extractors
from multiple information sources. These extractors perform the tasks of entity
linkage and schema alignment, thus introducing an additional source of noise
that is quite different from that traditionally considered in the data fusion
literature, which only focuses on factual errors in the original sources. We
adapt state-of-the-art data fusion techniques and apply them to a knowledge
base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B
Web pages, which is three orders of magnitude larger than the data sets used in
previous data fusion papers. We show the great promise of data fusion
approaches in solving the knowledge fusion problem, and suggest interesting
research directions through a detailed error analysis of the methods.
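As a flavor of the kind of data-fusion baseline being adapted, here is a minimal sketch of accuracy-weighted voting over claimed values. The variable names, toy observations, and the simple fixed-point iteration are assumptions for illustration, not the paper's actual methods.

```python
# Illustrative sketch of a classic data-fusion baseline: iterate between
# voting on true values (weighted by source accuracy) and re-estimating
# each source's accuracy from its agreement with the current truths.
from collections import defaultdict

# Observations: (source, data_item, claimed_value) -- toy data, not real.
observations = [
    ("site_a", "tom_cruise:birth_date", "1962-07-03"),
    ("site_b", "tom_cruise:birth_date", "1962-07-03"),
    ("site_c", "tom_cruise:birth_date", "1961-07-03"),
]

def fuse(observations, iterations=10):
    sources = {s for s, _, _ in observations}
    accuracy = {s: 0.8 for s in sources}          # initial trust in each source
    truths = {}
    for _ in range(iterations):
        # Vote: score each claimed value by the accuracy of its supporters.
        scores = defaultdict(lambda: defaultdict(float))
        for s, item, value in observations:
            scores[item][value] += accuracy[s]
        truths = {item: max(vals, key=vals.get) for item, vals in scores.items()}
        # Update: a source's accuracy is its agreement with the current truths.
        hits = defaultdict(int)
        counts = defaultdict(int)
        for s, item, value in observations:
            counts[s] += 1
            hits[s] += (value == truths[item])
        accuracy = {s: hits[s] / counts[s] for s in sources}
    return truths, accuracy

print(fuse(observations))
```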