The Maximum Likelihood Threshold of a Graph
The maximum likelihood threshold of a graph is the smallest number of data
points that guarantees that maximum likelihood estimates exist almost surely in
the Gaussian graphical model associated to the graph. We show that this graph
parameter is connected to the theory of combinatorial rigidity. In particular,
if the edge set of a graph G is an independent set in the (n-1)-dimensional
generic rigidity matroid, then the maximum likelihood threshold of G is less
than or equal to n. This connection allows us to prove many results about the
maximum likelihood threshold.
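To make the rigidity connection concrete, here is a minimal numerical sketch (our illustration, not code from the paper): a graph's edge set is independent in the d-dimensional generic rigidity matroid exactly when the rows of its rigidity matrix are linearly independent for a generic placement of the vertices, and a random placement is generic with probability 1. The function names and the 4-cycle example are our own.

import numpy as np

def rigidity_matrix(edges, placement):
    # One row per edge (i, j): p_i - p_j in the d columns of vertex i,
    # and p_j - p_i in the d columns of vertex j.
    n, d = placement.shape
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        diff = placement[i] - placement[j]
        R[row, i * d:(i + 1) * d] = diff
        R[row, j * d:(j + 1) * d] = -diff
    return R

def edges_independent(edges, n_vertices, d, trials=5):
    # Independent in the d-dimensional generic rigidity matroid iff the
    # rigidity matrix has full row rank at a generic placement; we retry
    # a few random placements to guard against unlucky degeneracies.
    return any(
        np.linalg.matrix_rank(rigidity_matrix(edges, np.random.rand(n_vertices, d)))
        == len(edges)
        for _ in range(trials)
    )

# The 4-cycle is independent in the 2-dimensional generic rigidity
# matroid, so the bound quoted above (with n - 1 = 2) gives a maximum
# likelihood threshold of at most 3.
print(edges_independent([(0, 1), (1, 2), (2, 3), (3, 0)], n_vertices=4, d=2))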
Matchings, coverings, and Castelnuovo-Mumford regularity
We show that the co-chordal cover number of a graph G gives an upper bound
for the Castelnuovo-Mumford regularity of the associated edge ideal. Several
known combinatorial upper bounds on the regularity of edge ideals then follow
easily from covering results in graph theory, and we derive new upper bounds
from additional covering results.
Comment: 12 pages; v4 has minor changes for publication
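For reference, the bound from this abstract can be written out with the standard definition of the edge ideal; the shorthand cochord(G) for the co-chordal cover number is our own notation.

% S = k[x_1, ..., x_n]; the edge ideal of a graph G on n vertices is
%   I(G) = ( x_i x_j : {i, j} an edge of G ).
% cochord(G) is the least k such that E(G) = E(G_1) u ... u E(G_k)
% with each G_i co-chordal (the complement of G_i is chordal). Then
\[
  \operatorname{reg}\, S/I(G) \;\le\; \operatorname{cochord}(G).
\]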
Positive independence densities of finite rank countable hypergraphs are achieved by finite hypergraphs
The independence density of a finite hypergraph is the probability that a
subset of vertices, chosen uniformly at random, contains no hyperedge.
Independence densities can be generalized to countable hypergraphs using
limits. We show that, in fact, every positive independence density of a
countably infinite hypergraph with hyperedges of bounded size is equal to the
independence density of some finite hypergraph whose hyperedges are no larger
than those in the infinite hypergraph. This answers a question of Bonato,
Brown, Kemkes, and Prałat about independence densities of graphs.
Furthermore, we show that for any k, the set of independence densities of
hypergraphs with hyperedges of size at most k is closed and contains no
infinite increasing sequences.
Comment: To appear in the European Journal of Combinatorics, 12 pages
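As a concrete illustration of the definition (our own sketch, not from the paper), the independence density of a small finite hypergraph can be computed by brute force over all vertex subsets:

def independence_density(n_vertices, hyperedges):
    # Fraction of the 2**n vertex subsets containing no hyperedge.
    hyperedges = [frozenset(e) for e in hyperedges]
    independent = 0
    for mask in range(2 ** n_vertices):
        subset = {v for v in range(n_vertices) if mask >> v & 1}
        if not any(e <= subset for e in hyperedges):
            independent += 1
    return independent / 2 ** n_vertices

# One hyperedge {0, 1, 2} on three vertices: every proper subset is
# independent, so the density is 7/8.
print(independence_density(3, [{0, 1, 2}]))  # 0.875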
Learning from networked examples
Many machine learning algorithms are based on the assumption that training
examples are drawn independently. This assumption no longer holds when
learning from a networked sample, because two or more training examples may
share common objects and hence share the features of those objects. We show
that the classic approach of ignoring this problem can harm the accuracy of
the resulting statistics, and we then consider alternatives. One alternative
is to use only independent examples, discarding all other information;
however, this is clearly suboptimal. We analyze sample error bounds in this
networked setting, providing significantly improved results. An important
component of our approach is a set of efficient sample weighting schemes,
which lead to novel concentration inequalities.
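One way to picture a weighting scheme in this spirit (a sketch under our own assumptions; the paper's actual schemes, and the encoding of examples as sets of object indices, are not taken from the abstract): treat each example as the set of objects it touches and compute a maximum fractional matching, so that no single object contributes total weight more than 1. The total weight then plays the role of an effective sample size in concentration bounds.

import numpy as np
from scipy.optimize import linprog

def fractional_matching_weights(examples, n_objects):
    # Weights w_i in [0, 1], one per example, maximizing sum(w) subject
    # to: for each object, the total weight of examples using it is <= 1.
    # linprog minimizes, so the objective is negated.
    A = np.zeros((n_objects, len(examples)))
    for i, objs in enumerate(examples):
        for o in objs:
            A[o, i] = 1.0
    res = linprog(
        c=-np.ones(len(examples)),
        A_ub=A,
        b_ub=np.ones(n_objects),
        bounds=[(0.0, 1.0)] * len(examples),
        method="highs",
    )
    return res.x

# Examples 0 and 1 share object 0; examples 1 and 2 share object 1.
weights = fractional_matching_weights([{0}, {0, 1}, {1}], n_objects=2)
print(weights, weights.sum())  # optimal total weight is 2.0 here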