A cost function for similarity-based hierarchical clustering
The development of algorithms for hierarchical clustering has been hampered
by a shortage of precise objective functions. To help address this situation,
we introduce a simple cost function on hierarchies over a set of points, given
pairwise similarities between those points. We show that this criterion behaves
sensibly in canonical instances and that it admits a top-down construction
procedure with a provably good approximation ratio.
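As a concrete illustration of a cost function of this kind, the sketch below charges each pair of points its similarity times the number of leaves under the pair's lowest common ancestor, so high-similarity pairs are cheapest when merged low in the tree. The nested-tuple tree encoding and helper names are our own, not the paper's code.

```python
def cost(tree, w):
    """tree: a leaf (int) or a pair (left, right); w: dict {(i, j): similarity}.
    Returns sum over pairs {i, j} of w(i, j) * |leaves under lca(i, j)|."""
    def walk(t):
        # Returns (leaf list, accumulated cost) for subtree t.
        if isinstance(t, int):
            return [t], 0.0
        left_leaves, cl = walk(t[0])
        right_leaves, cr = walk(t[1])
        leaves = left_leaves + right_leaves
        # Pairs split at this node have their lowest common ancestor here.
        split = sum(w.get((min(i, j), max(i, j)), 0.0)
                    for i in left_leaves for j in right_leaves)
        return leaves, cl + cr + split * len(leaves)
    return walk(tree)[1]

# Example: 4 points with two tight pairs {0,1} and {2,3}.
w = {(0, 1): 1.0, (2, 3): 1.0, (0, 2): 0.1, (0, 3): 0.1,
     (1, 2): 0.1, (1, 3): 0.1}
good = ((0, 1), (2, 3))   # merges similar points low in the tree
bad = ((0, 2), (1, 3))    # separates the tight pairs at the bottom
print(cost(good, w), cost(bad, w))
```

The sensible hierarchy is strictly cheaper, matching the intent that the criterion "behaves sensibly in canonical instances".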
The first order convergence law fails for random perfect graphs
We consider first order expressible properties of random perfect graphs. That
is, we pick a graph uniformly at random from all (labelled) perfect
graphs on n vertices and consider the probability that it satisfies some
graph property that can be expressed in the first order language of graphs. We
show that there exists such a first order expressible property for which the
probability that the random graph satisfies it does not converge as n tends to infinity.

Comment: 11 pages. Minor corrections since last version.
A Remark on Unified Error Exponents: Hypothesis Testing, Data Compression and Measure Concentration
Let A be a finite set equipped with a probability distribution P, and let M be a "mass" function on A. A characterization is given for the most efficient way in which A^n can be covered using spheres of a fixed radius. A covering is a subset C_n of A^n with the property that most of the elements of A^n are within some fixed distance from at least one element of C_n, where "most of the elements" means a set whose probability is exponentially close to one (with respect to the product distribution P^n). An efficient covering is one with small mass M^n(C_n). With different choices for the geometry on A, this characterization gives various corollaries as special cases, including Marton's error-exponents theorem in lossy data compression, Hoeffding's optimal hypothesis testing exponents, and a new sharp converse to some measure concentration inequalities on discrete spaces.
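The covering notion can be made concrete in the simplest setting: A = {0, 1} with Hamming distance and counting mass, so an efficient covering is just a small set of centers. The greedy brute-force routine below is our own illustration of the definition, not the characterization proved in the paper.

```python
import itertools

def greedy_hamming_covering(n, radius, mass_frac=0.9):
    """Greedily pick centers until a mass_frac fraction of {0,1}^n
    (under the uniform distribution) lies within Hamming distance
    `radius` of some center.  Brute force; illustration only."""
    points = list(itertools.product([0, 1], repeat=n))
    uncovered = set(points)
    centers = []
    allowed_misses = (1 - mass_frac) * len(points)
    while len(uncovered) > allowed_misses:
        # Pick the center whose Hamming ball covers the most
        # still-uncovered points.
        best = max(points, key=lambda c: sum(
            1 for p in uncovered
            if sum(a != b for a, b in zip(c, p)) <= radius))
        centers.append(best)
        uncovered = {p for p in uncovered
                     if sum(a != b for a, b in zip(best, p)) > radius}
    return centers

# Cover 90% of {0,1}^6 with Hamming balls of radius 2 (each ball
# holds 22 of the 64 points).
centers = greedy_hamming_covering(6, 2)
print(len(centers))
```

The theorem in the abstract characterizes the exponential rate of the smallest achievable mass as n grows; this toy only exhibits what a covering is at a fixed small n.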
Sexual coloration and sperm performance in the Australian painted dragon lizard, Ctenophorus pictus
Theory predicts trade-offs between pre- and post-copulatory sexually selected traits. This relationship may be mediated by the degree to which males are able to monopolize access to females, as this will place an upper limit on the strength of post-copulatory selection. Furthermore, traits that aid in mate monopolization may be costly to maintain and may limit investment in post-copulatory traits, such as sperm performance. Australian painted dragons are polymorphic for the presence or absence of a yellow gular patch ('bibs'), which may aid them in monopolizing access to females. Previous work has shown that there are physiological costs of carrying this bib (greater loss of body condition in the wild). Here, we show that male painted dragons use this bright yellow bib as both an inter- and intrasexual signal, and we assess whether this signal is traded off against sperm performance within the same individuals. We found no relationship between aspects of bib colour and sperm swimming velocity or percentage of motile sperm, and suggest that the bib polymorphism may be maintained by complex interactions between physiological or life-history traits, including other sperm or ejaculate traits, and environmental influences.
Counting Hamilton cycles in sparse random directed graphs
Let D(n,p) be the random directed graph on n vertices where each of the
n(n-1) possible arcs is present independently with probability p. A celebrated
result of Frieze shows that if p ≥ (log n + ω(1))/n then D(n,p) typically
has a directed Hamilton cycle, and this is best possible. In this paper, we
obtain a strengthening of this result, showing that under the same condition,
the number of directed Hamilton cycles in D(n,p) is typically
n!(p(1 + o(1)))^n. We also prove a hitting-time version of this statement,
showing that in the random directed graph process, as soon as every vertex has
in-/out-degrees at least 1, there are typically
n!((1 + o(1)) log n / n)^n directed Hamilton cycles.
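For intuition, the count can be checked by brute force at small n. The sketch below is our own illustration; in this model the expected number of directed Hamilton cycles, counting each cyclic order once, is (n-1)! p^n.

```python
import itertools
import random

def count_directed_hamilton_cycles(n, arcs):
    """Count directed Hamilton cycles on vertices 0..n-1, counting each
    cyclic order once by fixing vertex 0 as the starting point."""
    count = 0
    for perm in itertools.permutations(range(1, n)):
        cycle = (0,) + perm
        if all((cycle[i], cycle[(i + 1) % n]) in arcs for i in range(n)):
            count += 1
    return count

random.seed(0)
n, p = 7, 0.7
arcs = {(u, v) for u in range(n) for v in range(n)
        if u != v and random.random() < p}
count = count_directed_hamilton_cycles(n, arcs)
# Heuristic benchmark: E[count] = (n-1)! * p^n in this model.
print(count)
```

At p near the Hamiltonicity threshold this brute-force count becomes infeasible, which is exactly why the asymptotic statement in the abstract is nontrivial.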
Pattern Reduction in Paper Cutting
A large part of the paper industry involves supplying customers with reels of specified width in specified quantities. These 'customer reels' must be cut from a set of wider 'jumbo reels', in as economical a way as possible. The first priority is to minimize the waste, i.e. to satisfy the customer demands using as few jumbo reels as possible. This is an example of the one-dimensional cutting stock problem, which has an extensive literature. Greycon have developed cutting stock algorithms which they include in their software packages.
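As a baseline illustration of the waste-minimization step, here is a first-fit-decreasing sketch (our own toy heuristic with invented widths; production cutting stock solvers such as Greycon's use far stronger methods, e.g. column generation):

```python
def first_fit_decreasing(widths, jumbo_width):
    """Pack customer reel widths onto jumbo reels, first-fit decreasing.
    Returns the cutting patterns (one list of widths per jumbo used).
    A heuristic baseline only, not an optimal cutting stock solver."""
    patterns, remaining = [], []
    for w in sorted(widths, reverse=True):
        for i, r in enumerate(remaining):
            if w <= r:                    # fits on an existing jumbo
                patterns[i].append(w)
                remaining[i] -= w
                break
        else:                             # open a new jumbo reel
            patterns.append([w])
            remaining.append(jumbo_width - w)
    return patterns

demand = [50, 50, 40, 30, 30, 20, 20, 10]   # hypothetical customer widths
print(first_fit_decreasing(demand, 100))
# -> [[50, 50], [40, 30, 30], [20, 20, 10]]
```

Note that each inner list is a cutting pattern in the report's sense; the pattern-reduction questions below ask how few *distinct* patterns can realize a minimum-waste solution.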
Greycon's initial presentation to the Study Group posed several questions, which are listed below, along with (partial) answers arising from the work described in this report.
(1) Given a minimum-waste solution, what is the minimum number of patterns required?
It is shown in Section 2 that even when all the patterns appearing in minimum-waste solutions are known, determining the minimum number of patterns may be hard. It seems unlikely that one can guarantee to find the minimum number of patterns for large classes of realistic problems with only a few seconds of computation time on a PC.
(2) Given an n → n-1 algorithm, will it find an optimal solution to the minimum-pattern problem?
There are problems for which n → n-1 reductions are not possible although a more dramatic reduction is.
(3) Is there an efficient n → n-1 algorithm?
In light of Question 2, Question 3 should perhaps be rephrased as 'Is there an efficient algorithm to reduce n patterns?' However, if an algorithm were guaranteed to find some reduction whenever one existed, then it could be applied iteratively to minimize the number of patterns, and we have seen that this cannot be done easily.
(4) Are there efficient 5 → 4 and 4 → 3 algorithms?
(5) Is it worthwhile seeking alternatives to greedy heuristics?
In response to Questions 4 and 5, we point to the algorithm described in the report, or variants of it. Such approaches seem capable of catching many higher reductions.
(6) Is there a way to find solutions with the smallest possible number of single patterns?
The Study Group did not investigate methods tailored specifically to this task, but the algorithm proposed here seems to do reasonably well. It will not increase the number of singleton patterns under any circumstances, and when the number of singletons is high there will be many possible moves that tend to eliminate them.
(7) Can a solution be found which reduces the number of knife changes?
The algorithm will help to reduce the number of necessary knife changes because it works by bringing patterns closer together, even if this does not proceed fully to a pattern reduction. If two patterns are equal across some of the customer widths, the knives for these reels need not be changed when moving from one to the other.
Six Peaks Visible in the Redshift Distribution of 46,400 SDSS Quasars Agree with the Preferred Redshifts Predicted by the Decreasing Intrinsic Redshift Model
The redshift distribution of all 46,400 quasars in the Sloan Digital Sky
Survey (SDSS) Quasar Catalog III, Third Data Release, is examined. Six peaks
that fall within the redshift window below z = 4 are visible. Their positions
agree with the preferred redshift values predicted by the decreasing intrinsic
redshift (DIR) model, even though this model was derived using completely
independent evidence. A power spectrum analysis of the full dataset confirms
the presence of a single, significant power peak at the expected redshift
period. Power peaks with the predicted period are also obtained when the upper
and lower halves of the redshift distribution are examined separately. The
periodicity detected is in linear z, as opposed to log(1+z). Because the peaks
in the SDSS quasar redshift distribution agree well with the preferred
redshifts predicted by the intrinsic redshift relation, we conclude that this
relation, and the peaks in the redshift distribution, likely both have the same
origin, and this may be intrinsic redshifts, or a common selection effect.
However, because of the way the intrinsic redshift relation was determined it
seems unlikely that one selection effect could have been responsible for both.Comment: 12 pages, 12 figure, accepted for publication in the Astrophysical
Journa
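The power-spectrum step can be illustrated on synthetic data: histogram the redshifts in linear z, remove the mean, and take the squared Fourier magnitudes. The period, peak positions, and counts below are invented for the demonstration and are not the SDSS values.

```python
import numpy as np

rng = np.random.default_rng(1)
period = 0.5                        # assumed redshift period (illustrative)
peaks = period * np.arange(1, 7)    # six evenly spaced peaks below z = 4
z = np.concatenate([rng.normal(p, 0.05, 500) for p in peaks])

# Histogram in linear z, remove the mean, and form the power spectrum.
counts, edges = np.histogram(z, bins=400, range=(0.0, 4.0))
power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(len(counts), d=edges[1] - edges[0])

best = freqs[1:][np.argmax(power[1:])]   # skip the zero-frequency bin
print(1.0 / best)                        # recovered period in z
```

Because the spectrum is computed on the histogram of z itself (not of log(1+z)), a single dominant power peak appears at the injected period, mirroring the analysis described above.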
Approximating Nash Equilibria in Tree Polymatrix Games
We develop a quasi-polynomial time Las Vegas algorithm for approximating Nash equilibria in polymatrix games over trees, under a mild renormalizing assumption. Our result, in particular, leads to an expected polynomial-time algorithm for computing approximate Nash equilibria of tree polymatrix games in which the number of actions per player is a fixed constant. Further, for trees with constant degree, the running time of the algorithm matches the best known upper bound for approximating Nash equilibria in bimatrix games (Lipton, Markakis, and Mehta 2003).
Notably, this work closely complements the hardness result of Rubinstein (2015), which establishes the inapproximability of Nash equilibria in polymatrix games over constant-degree bipartite graphs with two actions per player.
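For reference, the approximation notion used in such results is the epsilon-Nash condition: no player can gain more than epsilon by a unilateral deviation. A minimal check for the two-player (bimatrix) case, as our own sketch:

```python
import numpy as np

def max_regret(A, B, x, y):
    """Largest gain either player gets by a unilateral pure deviation;
    (x, y) is an epsilon-Nash equilibrium iff this is at most epsilon."""
    r1 = (A @ y).max() - x @ A @ y   # row player's incentive to deviate
    r2 = (x @ B).max() - x @ B @ y   # column player's incentive to deviate
    return max(r1, r2)

# Matching pennies: the uniform mixed profile is an exact equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(max_regret(A, B, x, y))   # 0.0
```

A polymatrix game over a tree assigns one such bimatrix game to each edge, and a player's payoff is the sum over incident edges; the regret check extends edge by edge.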
Algorithmic Analysis of Qualitative and Quantitative Termination Problems for Affine Probabilistic Programs
In this paper, we consider termination of probabilistic programs with
real-valued variables. The questions concerned are:
1. qualitative ones that ask (i) whether the program terminates with
probability 1 (almost-sure termination) and (ii) whether the expected
termination time is finite (finite termination);
2. quantitative ones that ask (i) to approximate the expected termination
time (expectation problem) and (ii) to compute a bound B such that the
probability to terminate after B steps decreases exponentially
(concentration problem).
To solve these questions, we utilize the notion of ranking supermartingales
which is a powerful approach for proving termination of probabilistic programs.
In detail, we focus on algorithmic synthesis of linear ranking-supermartingales
over affine probabilistic programs (APP's) with both angelic and demonic
non-determinism. An important subclass of APP's is LRAPP which is defined as
the class of all APP's over which a linear ranking-supermartingale exists.
Our main contributions are as follows. Firstly, we show that the membership
problem of LRAPP (i) can be decided in polynomial time for APP's with at most
demonic non-determinism, and (ii) is NP-hard and in PSPACE for APP's with
angelic non-determinism; moreover, the NP-hardness result holds already for
APP's without probability and demonic non-determinism. Secondly, we show that
the concentration problem over LRAPP can be solved in the same complexity as
for the membership problem of LRAPP. Finally, we show that the expectation
problem over LRAPP can be solved in 2EXPTIME and is PSPACE-hard even for APP's
without probability and non-determinism (i.e., deterministic programs). Our
experimental results demonstrate the effectiveness of our approach to answer
the qualitative and quantitative questions over APP's with at most demonic
non-determinism.

Comment: 24 pages; full version of the conference paper at POPL 2016.
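The ranking-supermartingale condition itself is easy to state and check pointwise: eta is nonnegative on states satisfying the loop guard and decreases by at least some eps > 0 in expectation at each step. A toy check on a hypothetical loop (our example, not one from the paper):

```python
# Hypothetical loop:
#   while x > 0: x := x - 1 (prob 3/4)  or  x := x + 1 (prob 1/4)
# Candidate linear ranking supermartingale eta(x) = x with eps = 1/2.

def expected_next_eta(x):
    # E[eta(next state)] = 0.75 * (x - 1) + 0.25 * (x + 1) = x - 0.5
    return 0.75 * (x - 1) + 0.25 * (x + 1)

def is_ranking_supermartingale(eta, exp_next, guard_states, eps):
    """Pointwise check: eta nonnegative and expected decrease >= eps
    on every state satisfying the loop guard."""
    return all(eta(x) >= 0 and exp_next(x) <= eta(x) - eps
               for x in guard_states)

eta = lambda x: x
print(is_ranking_supermartingale(eta, expected_next_eta, range(1, 1000), 0.5))
# True: such a certificate implies almost-sure termination with
# finite expected termination time.
```

The synthesis problem studied in the paper is the converse direction: algorithmically finding the coefficients of such a linear eta (here guessed by hand) for affine probabilistic programs, possibly under angelic or demonic non-determinism.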
- …