1,071 research outputs found
CS 3100/5100: Data Structures and Algorithms
This course will cover the fundamentals of algorithm design and analysis, the implementation of classical data structures and control structures, and basic problem-solving techniques.
The magic of algorithm design and analysis: teaching algorithmic skills using magic card tricks
We describe our experience using magic card tricks to teach algorithmic skills to first-year Computer Science undergraduates. We illustrate our approach with a detailed discussion of a card trick that is typically presented as a test of the psychic abilities of an audience. We use the trick to discuss concepts like problem decomposition, pre- and post-conditions, and invariants. We discuss pedagogical issues and analyse feedback collected from students. The feedback has been very positive and encouraging.
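The abstract does not say which trick the authors use, but the concepts it names (pre- and post-conditions, invariants, problem decomposition) can be illustrated with the well-known 21-card trick, in which the "psychic" performer divines a spectator's card after three deal-and-restack rounds. The sketch below is illustrative, not the authors' example; the assertions play the role of pre-/post-conditions, and the key invariant is that each round moves the secret card closer to the middle of the packet.

```python
def deal_round(cards, secret):
    """One round of the 21-card trick: deal into 3 columns round-robin,
    then restack with the secret card's column in the middle.

    Pre-condition: 21 distinct cards, secret among them.
    Invariant: the output is a permutation of the input."""
    assert len(cards) == 21 and secret in cards
    cols = [cards[i::3] for i in range(3)]          # round-robin deal
    chosen = next(c for c in range(3) if secret in cols[c])
    others = [c for c in range(3) if c != chosen]
    out = cols[others[0]] + cols[chosen] + cols[others[1]]
    assert sorted(out) == sorted(cards)             # invariant holds
    return out

def reveal(cards, secret):
    """Post-condition: after three rounds the secret card sits at
    index 10, the exact middle of the 21-card packet."""
    for _ in range(3):
        cards = deal_round(cards, secret)
    return cards[10]

deck = list(range(21))
print(reveal(deck, 13))   # -> 13, whichever card was chosen
```

Decomposing the trick this way (one verifiable round, repeated three times) is exactly the kind of reasoning-by-invariant the abstract describes: each round contracts the secret card's possible positions until only the middle index remains.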
CS 400/600: Data Structures and Software Design
This course will cover the implementation of classical data structures and control structures, an introduction to the fundamentals of algorithm design and analysis, and basic problem-solving techniques.
Online and Stochastic Gradient Methods for Non-decomposable Loss Functions
Modern applications in sensitive domains such as biometrics and medicine
frequently require the use of non-decomposable loss functions such as
precision@k, F-measure, etc. Compared to point loss functions such as
hinge loss, these offer much finer-grained control over prediction, but at
the same time present novel challenges in terms of algorithm design and
analysis. In this work we initiate a study of online learning techniques for
such non-decomposable loss functions with an aim to enable incremental learning
as well as design scalable solvers for batch problems. To this end, we propose
an online learning framework for such loss functions. Our model enjoys several
nice properties, chief amongst them being the existence of efficient online
learning algorithms with sublinear regret and online to batch conversion
bounds. Our model is a provable extension of existing online learning models
for point loss functions. We instantiate two popular losses, prec@k and pAUC,
in our model and prove sublinear regret bounds for both of them. Our proofs
require a novel structural lemma over ranked lists which may be of independent
interest. We then develop scalable stochastic gradient descent solvers for
non-decomposable loss functions. We show that for a large family of loss
functions satisfying a certain uniform convergence property (that includes
prec@k, pAUC, and F-measure), our methods provably converge to the empirical
risk minimizer. Such uniform convergence results were not known for these
losses and we establish these using novel proof techniques. We then use
extensive experimentation on real life and benchmark datasets to establish that
our method can be orders of magnitude faster than a recently proposed cutting
plane method.
Comment: 25 pages, 3 figures. To appear in the proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS 2014).
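The distinction the abstract draws can be made concrete: a point loss such as hinge loss decomposes as a sum of per-example terms (so plain SGD applies directly), while precision@k depends on the entire ranking at once. The sketch below is a minimal illustration of that contrast, not the paper's online framework or its solvers; the synthetic data and the simple hinge-SGD trainer are assumptions for demonstration only.

```python
import numpy as np

def prec_at_k(scores, labels, k):
    """precision@k: fraction of positives among the top-k scored examples.
    Non-decomposable: perturbing one score can reshuffle the whole top-k,
    so this cannot be written as a sum of independent per-example losses."""
    top = np.argsort(scores)[::-1][:k]
    return labels[top].mean()

def sgd_hinge(X, y, lr=0.1, epochs=20, seed=0):
    """Contrast: hinge loss is a point loss, so a plain stochastic
    (sub)gradient loop over individual examples works."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w) < 1:        # margin violated
                w += lr * y[i] * X[i]        # subgradient step
    return w

# Hypothetical two-class Gaussian data, for illustration only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w = sgd_hinge(X, y)
scores = X @ w
print(prec_at_k(scores, (y == 1).astype(float), k=10))
```

Optimizing prec@k itself requires machinery beyond this per-example loop, which is precisely the gap the paper's online framework and stochastic solvers address.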
Large Cuts with Local Algorithms on Triangle-Free Graphs
We study the problem of finding large cuts in $d$-regular triangle-free
graphs. In prior work, Shearer (1992) gives a randomised algorithm that finds a
cut of expected size $(1/2 + 0.177/\sqrt{d})m$, where $m$ is the number of
edges. We give a simpler algorithm that does much better: it finds a cut of
expected size $(1/2 + 0.28125/\sqrt{d})m$. As a corollary, this shows that in
any $d$-regular triangle-free graph there exists a cut of at least this size.
Our algorithm can be interpreted as a very efficient randomised distributed
algorithm: each node needs to produce only one random bit, and the algorithm
runs in one synchronous communication round. This work is also a case study of
applying computational techniques in the design of distributed algorithms: our
algorithm was designed by a computer program that searched for optimal
algorithms for small values of $d$.
Comment: 1+17 pages, 8 figures.
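For context on what the paper improves, the textbook one-round randomised cut is easy to state and run: each node independently flips one fair coin and joins side 0 or 1, so every edge is cut with probability 1/2 and the expected cut size is $m/2$. The sketch below implements only this baseline; the paper's one-round algorithm, which also inspects neighbours' bits to beat $m/2$ on triangle-free $d$-regular graphs, is not reproduced here.

```python
import random

def random_cut(edges, n, seed=0):
    """Baseline one-round randomised cut: each node independently
    flips one fair coin and joins side 0 or 1. Each edge is cut with
    probability 1/2, so the expected cut size is m/2."""
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

# 6-cycle: 2-regular and triangle-free, m = 6, expected cut = 3.
edges = [(i, (i + 1) % 6) for i in range(6)]
avg = sum(random_cut(edges, 6, seed=s) for s in range(1000)) / 1000
print(avg)   # close to 3 = m/2
```

This baseline already works as a distributed algorithm with one random bit per node and zero communication rounds; the paper's contribution is squeezing the extra $\Theta(1/\sqrt{d})$ advantage out of a single synchronous round.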