Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case or average-case analysis gives us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success for the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
Comment: to be presented at MFCS 201
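Concretely, smoothed analysis measures the expected cost of an algorithm on a small random perturbation of an adversarial input. The following is a minimal illustrative sketch, not anything from the paper: the pivot rule, Gaussian perturbation model, and all parameter values are our own assumptions.

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by a simple first-element-pivot quicksort."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(left) + quicksort_comparisons(right)

def smoothed_cost(adversarial_input, sigma, trials=30):
    """Average cost over Gaussian perturbations of one fixed adversarial input."""
    total = 0
    for _ in range(trials):
        perturbed = [x + random.gauss(0.0, sigma) for x in adversarial_input]
        total += quicksort_comparisons(perturbed)
    return total / trials

n = 200
sorted_input = list(range(n))            # worst case for first-element pivot
worst = quicksort_comparisons(sorted_input)   # Theta(n^2) comparisons
smooth = smoothed_cost(sorted_input, sigma=5.0)
```

Worst-case analysis reports the Theta(n^2) figure; average-case analysis would use a fully random input; the smoothed cost sits in between and depends on the perturbation magnitude sigma.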
An empirical methodology for developing stockmarket trading systems using artificial neural networks
Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons
Striatal projection neurons form a sparsely-connected inhibitory network, and
this arrangement may be essential for the appropriate temporal organization of
behavior. Here we show that a simplified, sparse inhibitory network of
Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal
population activity, as observed in brain slices [Carrillo-Reid et al., J.
Neurophysiology 99 (2008) 1435-1450]. In particular we develop a new metric to
determine the conditions under which sparse inhibitory networks form
anti-correlated cell assemblies with time-varying activity of individual cells.
We found that under these conditions the network displays an input-specific
sequence of cell assembly switching that effectively discriminates similar
inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9
(2013) e1002954] that GABAergic connections between striatal projection neurons
allow stimulus-selective, temporally-extended sequential activation of cell
assemblies. Furthermore, we help to show how altered intrastriatal GABAergic
signaling may produce aberrant network-level information processing in
disorders such as Parkinson's and Huntington's diseases.
Comment: 22 pages, 9 figures
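As a toy illustration of the model class involved, the sketch below simulates a sparse, purely inhibitory network of leaky integrate-and-fire neurons. All parameter values, the connection probability, and the constant excitatory drive are our own illustrative assumptions, not those of the paper.

```python
import random

def simulate_inhibitory_lif(n=50, p_conn=0.1, steps=1000, dt=1.0,
                            tau=20.0, v_thresh=1.0, v_reset=0.0,
                            drive=0.06, inh_weight=0.3, seed=0):
    """Sparse random inhibitory LIF network; returns spike counts per neuron."""
    rng = random.Random(seed)
    # targets[i] = postsynaptic neurons inhibited when neuron i fires
    targets = [[j for j in range(n) if j != i and rng.random() < p_conn]
               for i in range(n)]
    v = [rng.random() * v_thresh for _ in range(n)]   # random initial potentials
    spikes = [0] * n
    for _ in range(steps):
        fired = [i for i in range(n) if v[i] >= v_thresh]
        inh = [0.0] * n
        for i in fired:
            spikes[i] += 1
            v[i] = v_reset
            for j in targets[i]:
                inh[j] += inh_weight
        for i in range(n):
            # leak toward drive*tau (above threshold), pushed down by inhibition
            v[i] += dt * (-v[i] / tau + drive) - inh[i]
            v[i] = max(v[i], v_reset)   # clamp below at the reset potential
    return spikes

spike_counts = simulate_inhibitory_lif()
```

Because the drive alone pushes every neuron above threshold, activity persists, and the inhibitory connections create the competition between cells that can give rise to assembly-like structure.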
Towards explaining the speed of k-means
The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization to Bregman divergences.
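For reference, the k-means method is Lloyd's algorithm: alternate between assigning each point to its nearest center and recomputing centers as cluster means. A minimal sketch follows; the random-point initialization and the toy data are our own illustrative choices.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = rng.sample(points, k)           # illustrative init: k random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to its nearest center (squared Euclidean distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(p[d] for p in cl) / len(cl) for d in range(dim))
            if cl else centers[i]             # keep old center if cluster empties
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:            # fixed point: assignments are stable
            break
        centers = new_centers
    return centers, clusters

points = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3),
          (5.0, 5.0), (5.2, 5.1), (5.1, 5.3)]
centers, clusters = kmeans(points, k=2)
```

Each iteration costs O(nk) distance evaluations; the worst-case bound concerns the number of iterations, which is exactly what the smoothed analysis addresses.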
Order and disorder in the Local Evolutionary Minority Game
We study a modification of the Evolutionary Minority Game (EMG) in which
agents are placed in the nodes of a regular or a random graph. A neighborhood
for each agent can thus be defined and a modification of the usual relaxation
dynamics can be made in which each agent updates her decision scheme depending
upon the choices made in her immediate neighborhood. We name this model the
Local Evolutionary Minority Game (LEMG). We report numerical results for the
topologies of a ring, a torus and a random graph changing the size of the
neighborhood. We focus our discussion on a one-dimensional system and perform a
detailed comparison of the results obtained from the random relaxation dynamics
of the LEMG and from a linear chain of interacting spin-like variables at a
finite temperature. We provide a physical interpretation of the surprising
result that in the LEMG a better coordination (a lower frustration) is achieved
if agents base their actions on local information. We show how the LEMG can be
regarded as a model that gradually interpolates between a fully ordered,
antiferromagnetic system and a fully disordered system that can be assimilated
to a spin glass.
Comment: 12 pages, 8 figures, RevTex; omission of a relevant reference corrected
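For readers unfamiliar with the baseline model, a minimal (non-local) Evolutionary Minority Game can be sketched as follows. The probabilistic follow-the-winner strategy and all parameter values are illustrative assumptions; the paper's LEMG additionally restricts each agent's evolutionary update to her graph neighborhood.

```python
import random

def play_minority_round(choices):
    """Agents on the minority side win (+1); the majority side loses (-1)."""
    total = sum(choices)                      # choices are +1/-1, odd # of agents
    minority = -1 if total > 0 else +1
    return [1 if c == minority else -1 for c in choices]

def run_emg(n_agents=101, rounds=500, seed=0):
    """Each agent follows the last winning side with her own probability p_i."""
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]   # heterogeneous strategies
    scores = [0] * n_agents
    last_winner = 1
    attendance = []                               # net choice per round
    for _ in range(rounds):
        choices = [last_winner if rng.random() < p[i] else -last_winner
                   for i in range(n_agents)]
        payoffs = play_minority_round(choices)
        scores = [s + g for s, g in zip(scores, payoffs)]
        total = sum(choices)
        attendance.append(total)
        last_winner = -1 if total > 0 else 1
    return scores, attendance
```

The variance of the attendance series is the standard measure of frustration: lower variance means better coordination, which is the quantity the paper compares against the spin-chain picture.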
How to allocate Research (and other) Subsidies
A budget-constrained buyer wants to purchase items from a short-listed set. Items are differentiated by observable quality, and sellers have private reserve prices for their items. The buyer's problem is to select a subset of maximal quality. Money does not enter the buyer's objective function, but only his constraints. Sellers quote prices strategically, inducing a knapsack game. We derive the Bayesian optimal mechanism for the buyer's problem. We find that simultaneous take-it-or-leave-it offers are optimal. Hence, somewhat surprisingly, ex-post competition is not required to implement optimality. Finally, we discuss the problem in a detail-free setting.
Lower Bounds for the Average and Smoothed Number of Pareto Optima
Smoothed analysis of multiobjective 0-1 linear optimization has drawn
considerable attention recently. The number of Pareto-optimal solutions (i.e.,
solutions with the property that no other solution is at least as good in all
the coordinates and better in at least one) for multiobjective optimization
problems is the central object of study. In this paper, we prove several lower
bounds for the expected number of Pareto optima. Our basic result is a lower
bound of \Omega_d(n^{d-1}) for optimization problems with d objectives and n
variables under fairly general conditions on the distributions of the linear
objectives. Our proof relates the problem of lower bounding the number of
Pareto optima to results in geometry connected to arrangements of hyperplanes.
We use our basic result to derive (1) to our knowledge, the first lower bound
for natural multiobjective optimization problems. We illustrate this for the
maximum spanning tree problem with randomly chosen edge weights. Our technique
is sufficiently flexible to yield such lower bounds for other standard
objective functions studied in this setting (such as multiobjective shortest
path, TSP tour, and matching); and (2) a smoothed lower bound of
min{\Omega_d(n^{d-1.5} \phi^{(d - \log d)(1 - \Theta(1/\phi))}), 2^{\Theta(n)}}
for a version of the 0-1 knapsack problem with d profits under \phi-semirandom
distributions. This improves the recent lower bound of Brunsch and Roeglin.
WARNING: Physics Envy May Be Hazardous To Your Wealth!
The quantitative aspirations of economists and financial analysts have for
many years been based on the belief that it should be possible to build models
of economic systems - and financial markets in particular - that are as
predictive as those in physics. While this perspective has led to a number of
important breakthroughs in economics, "physics envy" has also created a false
sense of mathematical precision in some cases. We speculate on the origins of
physics envy, and then describe an alternate perspective of economic behavior
based on a new taxonomy of uncertainty. We illustrate the relevance of this
taxonomy with two concrete examples: the classical harmonic oscillator with
some new twists that make physics look more like economics, and a quantitative
equity market-neutral strategy. We conclude by offering a new interpretation of
tail events, proposing an "uncertainty checklist" with which our taxonomy can
be implemented, and considering the role that quants played in the current
financial crisis.
Comment: v3 adds 2 references