Flexible constrained sampling with guarantees for pattern mining
Pattern sampling has been proposed as a potential solution to the infamous
pattern explosion. Instead of enumerating all patterns that satisfy the
constraints, individual patterns are sampled proportional to a given quality
measure. Several sampling algorithms have been proposed, but each of them has
its limitations when it comes to 1) flexibility in terms of quality measures
and constraints that can be used, and/or 2) guarantees with respect to sampling
accuracy. We therefore present Flexics, the first flexible pattern sampler that
supports a broad class of quality measures and constraints, while providing
strong guarantees regarding sampling accuracy. To achieve this, we leverage the
perspective on pattern mining as a constraint satisfaction problem and build
upon the latest advances in sampling solutions in SAT as well as existing
pattern mining algorithms. Furthermore, the proposed algorithm is applicable to
a variety of pattern languages, which allows us to introduce and tackle the
novel task of sampling sets of patterns. We introduce and empirically evaluate
two variants of Flexics: 1) a generic variant that addresses the well-known
itemset sampling task and the novel pattern set sampling task as well as a wide
range of expressive constraints within these tasks, and 2) a specialized
variant that exploits existing frequent itemset techniques to achieve
substantial speed-ups. Experiments show that Flexics is both accurate and
efficient, making it a useful tool for pattern-based data exploration.
Comment: Accepted for publication in Data Mining & Knowledge Discovery journal
(ECML/PKDD 2017 journal track)
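As a point of reference for the task described above, the following minimal Python sketch (not the Flexics algorithm, which avoids exhaustive enumeration by building on SAT-based sampling) draws itemsets from a toy transaction database with probability proportional to their support, subject to a minimum-support constraint; the database, threshold, and choice of quality measure are illustrative assumptions.

# Brute-force illustration of the pattern sampling task (not Flexics): sample
# itemsets proportional to their support in a toy transaction database.
import random
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}, {"a", "b", "c"}]
items = sorted(set().union(*transactions))

def support(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

# Enumerate all non-empty itemsets satisfying a minimum-support constraint.
patterns = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(frozenset(c)) >= 2]

# Draw patterns with probability proportional to the quality measure (support).
weights = [support(p) for p in patterns]
samples = random.choices(patterns, weights=weights, k=5)
print([sorted(p) for p in samples])

Flexics provides the same sampling distribution with accuracy guarantees while sidestepping the enumeration step that makes this baseline infeasible on realistic data.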
Quantum computing classical physics
In the past decade quantum algorithms have been found which outperform the
best classical solutions known for certain classical problems as well as the
best classical methods known for simulation of certain quantum systems. This
suggests that they may also speed up the simulation of some classical systems.
I describe one class of discrete quantum algorithms which do so--quantum
lattice gas automata--and show how to implement them efficiently on standard
quantum computers. Comment: 13 pages, plain TeX, 10 PostScript figures included with epsf.tex;
for related work see http://math.ucsd.edu/~dmeyer/research.htm
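For readers unfamiliar with the model, the Python sketch below classically simulates a one-particle, one-dimensional quantum lattice gas automaton: each step applies a local unitary "collision" that mixes the two velocity components at every site, then advects each component by its velocity. The lattice size, initial state, and collision angle are arbitrary illustrative choices, not parameters from the paper.

# Classical simulation of a one-particle 1D quantum lattice gas automaton.
import numpy as np

n = 64                                    # lattice sites, periodic boundary
psi = np.zeros((2, n), dtype=complex)     # row 0: right-movers, row 1: left-movers
psi[:, n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # particle localized mid-lattice

theta = np.pi / 4                         # collision (scattering) angle
collision = np.array([[np.cos(theta), 1j * np.sin(theta)],
                      [1j * np.sin(theta), np.cos(theta)]])   # 2x2 local unitary

for _ in range(32):
    psi = collision @ psi                 # same local unitary at every site
    psi[0] = np.roll(psi[0], 1)           # right-movers advance one site
    psi[1] = np.roll(psi[1], -1)          # left-movers move one site left

prob = (np.abs(psi) ** 2).sum(axis=0)     # position distribution after 32 steps
print("total probability:", round(float(prob.sum()), 6))   # unitary evolution: stays 1.0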
Two Structural Results for Low Degree Polynomials and Applications
In this paper, two structural results concerning low degree polynomials over
finite fields are given. The first states that over any finite field F, for any
polynomial f on n variables with degree d ≤ log(n)/10, there exists a subspace
of F^n with dimension Ω(d · n^{1/(d-1)}) on which f is constant. This result is
shown to be tight. Stated differently, a degree d polynomial cannot compute an
affine disperser for dimension smaller than Ω(d · n^{1/(d-1)}). Using a
recursive argument, we obtain our second structural result, showing that any
degree d polynomial f induces a partition of F^n to affine subspaces of
dimension Ω(n^{1/(d-1)!}), such that f is constant on each part.
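For intuition, a standard degree-2 example over F_2 (an illustration of the first result, not an example taken from the paper):

\[
  f(x_1,\dots,x_n) \;=\; x_1 x_2 + x_3 x_4 + \cdots + x_{n-1} x_n
  \qquad (n \text{ even, over } \mathbb{F}_2)
\]
\[
  f|_V \equiv 0 \quad \text{on} \quad
  V = \{\, x \in \mathbb{F}_2^n : x_1 = x_3 = \cdots = x_{n-1} = 0 \,\},
  \qquad \dim V = n/2 .
\]

Here f is constant on a subspace whose dimension is linear in n, matching the d = 2 case; equivalently, this f fails to be an affine disperser for dimension n/2.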
We extend both structural results to more than one polynomial. We further
prove an analog of the first structural result to sparse polynomials (with no
restriction on the degree) and to functions that are close to low degree
polynomials. We also consider the algorithmic aspect of the two structural
results.
Our structural results have various applications, two of which are:
* Dvir [CC 2012] introduced the notion of extractors for varieties, and gave
explicit constructions of such extractors over large fields. We show that over
any finite field, any affine extractor is also an extractor for varieties with
related parameters. Our reduction also holds for dispersers, and we conclude
that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over
F_2.
* Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine
disperser over a prime field is also an affine extractor with related
parameters. Using our structural results, and based on the work of Kaufman and
Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this
result to any constant degree.
Efficient hardware implementations of high throughput SHA-3 candidates Keccak, Luffa and Blue Midnight Wish for single- and multi-message hashing
In November 2007, NIST announced that it would organize the SHA-3 competition to select a new cryptographic hash function family by 2012. In the selection process, hardware performances of the candidates will play an important role. Our analysis of previously proposed hardware implementations shows that three SHA-3 candidate algorithms can provide superior performance in hardware: Keccak, Luffa and Blue Midnight Wish (BMW). In this paper, we provide efficient and fast hardware implementations of these three algorithms. Considering both single- and multi-message hashing applications with an emphasis on both speed and efficiency, our work presents a more comprehensive analysis of their hardware performances by providing different performance figures for different target devices. To the best of our knowledge, this is the first work that provides a comparative analysis of SHA-3 candidates in multi-message applications. We discover that the BMW algorithm can provide much higher throughput than previously reported if used in multi-message hashing. We also show that better utilization of resources can increase speed via different configurations. We implement our designs using Verilog HDL and map them to both ASIC and FPGA devices (Spartan-3, Virtex-2, and Virtex-4) to give a better comparison with those in the literature. We report total area, maximum frequency, maximum throughput and throughput/area of the designs for all target devices. Given that the selection process for SHA-3 is still open, our results will be instrumental in evaluating the hardware performance of the candidates.
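The reported metrics are related by simple arithmetic: throughput is block size times maximum frequency divided by cycles per block, and efficiency is throughput per unit area. The Python sketch below shows this relation with placeholder numbers that are not results from the paper.

# Throughput/area arithmetic with hypothetical figures (illustration only).
def throughput_mbps(block_bits: int, f_max_mhz: float, cycles_per_block: int) -> float:
    """Hashing throughput in Mbit/s for one message stream."""
    return block_bits * f_max_mhz / cycles_per_block

tp = throughput_mbps(block_bits=512, f_max_mhz=200.0, cycles_per_block=24)
area_slices = 1500                               # hypothetical FPGA area
print(f"{tp:.1f} Mbps, {tp / area_slices:.3f} Mbps per slice")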
Second-generation PLINK: rising to the challenge of larger and richer datasets
PLINK 1 is a widely used open-source C/C++ toolset for genome-wide
association studies (GWAS) and research in population genetics. However, the
steady accumulation of data from imputation and whole-genome sequencing studies
has exposed a strong need for even faster and more scalable implementations of
key functions. In addition, GWAS and population-genetic data now frequently
contain probabilistic calls, phase information, and/or multiallelic variants,
none of which can be represented by PLINK 1's primary data format.
To address these issues, we are developing a second-generation codebase for
PLINK. The first major release from this codebase, PLINK 1.9, introduces
extensive use of bit-level parallelism, O(sqrt(n))-time/constant-space
Hardy-Weinberg equilibrium and Fisher's exact tests, and many other algorithmic
improvements. In combination, these changes accelerate most operations by 1-4
orders of magnitude, and allow the program to handle datasets too large to fit
in RAM. This will be followed by PLINK 2.0, which will introduce (a) a new data
format capable of efficiently representing probabilities, phase, and
multiallelic variants, and (b) extensions of many functions to account for the
new types of information.
The second-generation versions of PLINK will offer dramatic improvements in
performance and compatibility. For the first time, users without access to
high-end computing resources can perform several essential analyses of the
feature-rich and very large genetic datasets coming into use. Comment: 2 figures, 1 additional file
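As an illustration of the bit-level parallelism idea (a hypothetical sketch, not PLINK source code), the Python snippet below packs genotypes two bits per sample and counts heterozygous calls with bitwise masks and a popcount rather than a per-sample loop; the random data and 2-bit code assignments are assumptions for the example.

# Bit-level parallelism sketch: pack 2-bit genotype codes, count with popcount.
import numpy as np

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=10_000, dtype=np.uint8)   # 0, 1 or 2 alt alleles

# Pack four 2-bit genotype codes into each byte (PLINK's .bed format also uses
# a 2-bit encoding, though with different code assignments).
packed = (genotypes[0::4]
          | (genotypes[1::4] << 2)
          | (genotypes[2::4] << 4)
          | (genotypes[3::4] << 6)).astype(np.uint8)

# A heterozygous call has code 01: low bit set, high bit clear. Test all four
# lanes of every byte at once, then popcount the resulting bit mask.
low = packed & 0x55                        # low bit of each 2-bit lane
high = (packed >> 1) & 0x55                # high bit of each 2-bit lane
het_bits = (low & ~high).astype(np.uint8)  # one bit per heterozygous genotype
het = int(np.unpackbits(het_bits).sum())   # popcount across the packed array

print(het == int((genotypes == 1).sum()))  # True: matches the per-sample count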