The Inverse Shapley Value Problem
For a weighted voting scheme used by voters to choose between two
candidates, the \emph{Shapley-Shubik indices} (or \emph{Shapley values}) of
the voters provide a measure of how much control each voter can exert over the overall
outcome of the vote. Shapley-Shubik indices were introduced by Lloyd Shapley
and Martin Shubik in 1954 \cite{SS54} and are widely studied in social choice
theory as a measure of the "influence" of voters. The \emph{Inverse Shapley
Value Problem} is the problem of designing a weighted voting scheme which
(approximately) achieves a desired input vector of values for the
Shapley-Shubik indices. Despite much interest in this problem no provably
correct and efficient algorithm was known prior to our work.
We give the first efficient algorithm with provable performance guarantees
for the Inverse Shapley Value Problem. For any constant \eps > 0, our
algorithm runs in time \poly(n) (where the degree of the polynomial is
independent of \eps) and has the following performance guarantee: given as
input a vector of desired Shapley values, if any "reasonable" weighted voting
scheme (roughly, one in which the threshold is not too skewed) approximately
matches the desired vector of values to within some small error, then our
algorithm explicitly outputs a weighted voting scheme that achieves this vector
of Shapley values to within error \eps. If there is a "reasonable" voting
scheme in which all voting weights are integers at most \poly(n) that
approximately achieves the desired Shapley values, then our algorithm runs in
time \poly(n) and outputs a weighted voting scheme that achieves the target
vector of Shapley values to within error $\eps = n^{-1/8}$.
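As a concrete illustration of the quantities involved (this is not the paper's algorithm, only a brute-force definition-chaser for small games), the Shapley-Shubik indices of a weighted voting game can be computed by enumerating voter orderings:

```python
import math
from itertools import permutations

def shapley_shubik(weights, quota):
    """Shapley-Shubik index of each voter: the fraction of voter orderings
    in which that voter is pivotal, i.e. its weight is the one that first
    pushes the running total up to the quota."""
    n = len(weights)
    counts = [0] * n
    for order in permutations(range(n)):
        total = 0
        for voter in order:
            total += weights[voter]
            if total >= quota:
                counts[voter] += 1
                break
    return [c / math.factorial(n) for c in counts]

# Weights (4, 2, 1) with quota 5: the weight-4 voter is pivotal in 4 of the
# 6 orderings, so the indices are (2/3, 1/6, 1/6).
print(shapley_shubik([4, 2, 1], 5))
```

Note that power is not proportional to weight here: the weight-2 and weight-1 voters have identical influence, which is part of what makes the inverse problem of hitting a prescribed index vector nontrivial.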
Imaging Granulomatous Lesions with Optical Coherence Tomography
Continuous extremal optimization for Lennard-Jones Clusters
In this paper, we explore a general-purpose heuristic algorithm for finding
high-quality solutions to continuous optimization problems. The method, called
continuous extremal optimization (CEO), can be considered an extension of
extremal optimization (EO) and consists of two components: one responsible
for global search and the other for local search. With only one adjustable
parameter, the CEO's performance
proves competitive with more elaborate stochastic optimization procedures. We
demonstrate it on a well-known continuous optimization problem: the
Lennard-Jones cluster optimization problem.
Comment: 5 pages and 3 figures
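For reference, the objective being minimized in this benchmark is the total Lennard-Jones potential energy of an N-atom cluster. A minimal sketch of that energy function in reduced units (the EO search machinery itself is omitted):

```python
import math

def lj_energy(coords):
    """Total Lennard-Jones energy of a cluster in reduced units
    (epsilon = sigma = 1): sum over atom pairs of 4 * (r**-12 - r**-6)."""
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            sr6 = r ** -6
            energy += 4.0 * (sr6 * sr6 - sr6)
    return energy

# Two atoms at the pair-equilibrium separation 2**(1/6) have energy -1
# (the pair well depth), which is the global minimum for N = 2.
print(lj_energy([(0.0, 0.0, 0.0), (2.0 ** (1.0 / 6.0), 0.0, 0.0)]))
```

For larger N the energy landscape has a number of local minima that grows rapidly with cluster size, which is why global heuristics such as CEO are needed.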
False-Name Manipulation in Weighted Voting Games is Hard for Probabilistic Polynomial Time
False-name manipulation refers to the question of whether a player in a
weighted voting game can increase her power by splitting into several players
and distributing her weight among these false identities. Analogously to this
splitting problem, the beneficial merging problem asks whether a coalition of
players can increase their power in a weighted voting game by merging their
weights. Aziz et al. [ABEP11] analyze the problem of whether merging or
splitting players in weighted voting games is beneficial in terms of the
Shapley-Shubik and the normalized Banzhaf index, and so do Rey and Rothe [RR10]
for the probabilistic Banzhaf index. All these results provide merely
NP-hardness lower bounds for these problems, leaving the question about their
exact complexity open. For the Shapley-Shubik and the probabilistic Banzhaf
index, we raise these lower bounds to hardness for PP, "probabilistic
polynomial time", and provide matching upper bounds for beneficial merging and,
whenever the number of false identities is fixed, also for beneficial
splitting, thus resolving previous conjectures in the affirmative. It follows
from our results that beneficial merging and splitting for these two power
indices cannot be solved in NP, unless the polynomial hierarchy collapses,
which is considered highly unlikely.
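A minimal example of the splitting question (deciding it in general is what the paper shows to be PP-hard): in the unanimity game [4; 2, 1, 1], the weight-2 player strictly gains Shapley-Shubik power by splitting into two weight-1 false identities.

```python
import math
from itertools import permutations

def shapley(weights, quota):
    """Shapley-Shubik indices by enumerating all voter orderings and
    crediting the pivotal voter in each."""
    n = len(weights)
    counts = [0] * n
    for order in permutations(range(n)):
        total = 0
        for p in order:
            total += weights[p]
            if total >= quota:
                counts[p] += 1
                break
    return [c / math.factorial(n) for c in counts]

# Unanimity game [4; 2, 1, 1]: the last voter in every ordering is pivotal,
# so all three players have index 1/3 regardless of weight.
before = shapley([2, 1, 1], 4)[0]
# After the weight-2 player splits into two weight-1 false identities the
# game becomes [4; 1, 1, 1, 1], and the pair's combined index is 1/2.
after = sum(shapley([1, 1, 1, 1], 4)[:2])
print(before, after)  # 1/3 versus 1/2: splitting is beneficial here
```

The hardness results concern recognizing such beneficial splits in general, not computing them for toy games like this one.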
Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces
The \emph{Chow parameters} of a Boolean function $f$
are its degree-0 and degree-1 Fourier coefficients. It has been known
since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of
any linear threshold function $f$ uniquely specify $f$ within the space of all
Boolean functions, but until recently (O'Donnell and Servedio) nothing was
known about efficient algorithms for \emph{reconstructing} $f$ (exactly or
approximately) from exact or approximate values of its Chow parameters. We
refer to this reconstruction problem as the \emph{Chow Parameters Problem}.
Our main result is a new algorithm for the Chow Parameters Problem which,
given (sufficiently accurate approximations to) the Chow parameters of any
linear threshold function $f$, runs in time $\tilde{O}(n^2) \cdot
(1/\eps)^{O(\log^2(1/\eps))}$ and with high probability outputs a
representation of an LTF that is \eps-close to $f$. The only previous
algorithm (O'Donnell and Servedio) had running time $\poly(n) \cdot
2^{2^{\tilde{O}(1/\eps^2)}}$.
As a byproduct of our approach, we show that for any linear threshold
function $f$ over $\{-1,1\}^n$, there is a linear threshold function that
is \eps-close to $f$ and has all weights integers at most $\sqrt{n}
\cdot (1/\eps)^{O(\log^2(1/\eps))}$. This significantly improves the best
previous result of Diakonikolas and Servedio, which gave a $\poly(n) \cdot
2^{\tilde{O}(1/\eps^{2/3})}$ weight bound, and is close to the known lower
bound of $(1/\eps)^{\Omega(\log \log (1/\eps))}$ (Goldberg,
Servedio). Our techniques also yield improved algorithms for related problems
in learning theory.
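To make the objects concrete: the Chow parameters are just the n+1 averages E[f(x)] and E[f(x)·x_i] over the uniform distribution on the hypercube, computable by brute force for small n (the paper's contribution is the efficient inverse direction, recovering an LTF from these averages):

```python
from itertools import product

def chow_parameters(f, n):
    """Chow parameters of f: {-1,1}^n -> {-1,1}: the degree-0 Fourier
    coefficient E[f(x)] followed by the degree-1 coefficients E[f(x) * x_i],
    computed exactly by summing over all 2^n inputs."""
    sums = [0.0] * (n + 1)
    for x in product((-1, 1), repeat=n):
        v = f(x)
        sums[0] += v
        for i in range(n):
            sums[i + 1] += v * x[i]
    return [s / 2 ** n for s in sums]

# Majority on 3 bits, i.e. the LTF sign(x1 + x2 + x3):
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(chow_parameters(maj3, 3))  # [0.0, 0.5, 0.5, 0.5]
```

The balanced degree-0 coefficient and three equal degree-1 coefficients reflect majority's symmetry; by the Chow/Tannenbaum result, no other Boolean function shares this parameter vector with an LTF.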
Memory with memory in genetic programming
We introduce Memory with Memory Genetic Programming (MwM-GP), where we use soft assignments and soft return operations. Instead of having the new value completely overwrite the old value of registers or memory, soft assignments combine such values. Similarly, in soft return operations the value of a function node is a blend between the result of a calculation and previously returned results. In extensive empirical tests, MwM-GP almost always does as well as traditional GP, while significantly outperforming it in several cases. MwM-GP also tends to be far more consistent than traditional GP. The data suggest that MwM-GP works by successively refining an approximate solution to the target problem and that it is much less likely to have truly ineffective code. MwM-GP can continue to improve over time, but it is less likely to get the sort of exact solution that one might find with traditional GP.
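The central operation can be sketched as follows; the convex-combination form and the parameter name gamma are illustrative assumptions, since the abstract does not pin down the exact blending rule:

```python
def soft_assign(old, new, gamma=0.5):
    """Soft assignment (illustrative form): instead of overwriting a
    register, blend the incoming value with its current contents.
    gamma = 1 recovers ordinary destructive assignment; smaller gamma
    gives the register more 'memory' of its past values."""
    return (1.0 - gamma) * old + gamma * new

# A register repeatedly soft-assigned the same target drifts toward it
# rather than jumping, which matches the described behavior of
# successively refining an approximate solution.
r = 0.0
for _ in range(4):
    r = soft_assign(r, 8.0, gamma=0.5)
print(r)  # 7.5 after four steps: 4.0, 6.0, 7.0, 7.5
```

A soft return operation would apply the same blend between a function node's freshly computed result and its previously returned value.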
Theoretical analysis of the role of chromatin interactions in long-range action of enhancers and insulators
Long-distance regulatory interactions between enhancers and their target
genes are commonplace in higher eukaryotes. Interposed boundaries or insulators
are able to block these long distance regulatory interactions. The mechanistic
basis for insulator activity and how it relates to enhancer
action-at-a-distance remains unclear. Here we explore the idea that topological
loops could simultaneously account for regulatory interactions of distal
enhancers and the insulating activity of boundary elements. We show that while
loop formation is not in itself sufficient to explain action at a distance,
incorporating transient non-specific and moderate attractive interactions
between the chromatin fibers strongly enhances long-distance regulatory
interactions and is sufficient to generate a euchromatin-like state. Under
these same conditions, the subdivision of the loop into two topologically
independent loops by insulators inhibits inter-domain interactions. The
underlying cause of this effect is a suppression of crossings in the contact
map at intermediate distances. Thus our model simultaneously accounts for
regulatory interactions at a distance and the insulator activity of boundary
elements. This unified model of the regulatory roles of chromatin loops makes
several testable predictions that could be confronted with \emph{in vitro}
experiments, as well as genomic chromatin conformation capture and fluorescence
microscopy approaches.
Comment: 10 pages, originally submitted to an (undisclosed) journal in May 201
Discovering Adaptable Symbolic Algorithms from Scratch
Autonomous robots deployed in the real world will need control policies that
rapidly adapt to environmental changes. To this end, we propose
AutoRobotics-Zero (ARZ), a method based on AutoML-Zero that discovers zero-shot
adaptable policies from scratch. In contrast to neural network adaptation
policies, where only model parameters are optimized, ARZ can build control
algorithms with the full expressive power of a linear register machine. We
evolve modular policies that tune their model parameters and alter their
inference algorithm on-the-fly to adapt to sudden environmental changes. We
demonstrate our method on a realistic simulated quadruped robot, for which we
evolve safe control policies that avoid falling when individual limbs suddenly
break. This is a challenging task in which two popular neural network baselines
fail. Finally, we conduct a detailed analysis of our method on a novel and
challenging non-stationary control task dubbed Cataclysmic Cartpole. Results
confirm our findings that ARZ is significantly more robust to sudden
environmental changes and can build simple, interpretable control policies.
Comment: Published as a conference paper at the International Conference on
Intelligent Robots and Systems (IROS) 2023. See https://youtu.be/sEFP1Hay4nE
for the associated video file.
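For intuition about the policy representation: a linear register machine policy is a straight-line sequence of instructions over a small register bank, executed once per control step. The instruction set and program below are hypothetical; ARZ's actual operations and evolved programs differ.

```python
def run_policy(program, registers):
    """Execute a straight-line register-machine program: each instruction
    (op, dst, src1, src2) reads two registers and writes one, in order."""
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
    }
    for op, dst, s1, s2 in program:
        registers[dst] = ops[op](registers[s1], registers[s2])
    return registers

# Hypothetical 3-instruction policy over 4 registers, where r0 and r1 would
# hold sensor inputs and the final r0 would be read out as the action:
program = [("add", 2, 0, 1),   # r2 = r0 + r1
           ("mul", 3, 2, 2),   # r3 = r2 * r2
           ("sub", 0, 3, 1)]   # r0 = r3 - r1
print(run_policy(program, [1.0, 2.0, 0.0, 0.0]))  # [7.0, 2.0, 3.0, 9.0]
```

Because the evolved object is a program rather than a fixed-architecture network, it can also mutate which instructions run, not just numeric parameters, which is the "full expressive power" contrast drawn in the abstract.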
Public Benefits of Undeveloped Lands on Urban Outskirts: Non-Market Valuation Studies and their Role in Land Use Plans
Over the past three decades, the economics profession has developed methods for estimating the public benefits of green spaces, providing an opportunity to incorporate such information into land-use planning. While such estimates are routinely required for major federal regulations, the extent to which they are used in local land use plans is not clear. This paper reviews the literature on public values for lands on urban outskirts, not just to survey their methods or empirical findings, but to evaluate the role they have played, or have the potential to play, in actual land use plans. Based on interviews with authors and representatives of funding agencies and local land trusts, it appears that academic work has had a mixed reception in the policy world. Reasons for this include a lack of interest in making academic work accessible to policy makers, an emphasis on revealed preference methods that are inconsistent with policy priorities related to nonuse values, and an emphasis on benefit-cost analyses. Nevertheless, there are examples of success stories that illustrate how such information can play a vital role in the design of conservation policies. Working Paper 07-2