Potts model, parametric maxflow and k-submodular functions
The problem of minimizing the Potts energy function frequently occurs in
computer vision applications. One way to tackle this NP-hard problem was
proposed by Kovtun [19,20]. It identifies a part of an optimal solution by
running $k$ maxflow computations, where $k$ is the number of labels. The number
of "labeled" pixels can be significant in some applications, e.g. 50-93% in our
tests for stereo. We show how to reduce the runtime to $O(\log k)$ maxflow
computations (or one {\em parametric maxflow} computation). Furthermore, the
output of our algorithm allows one to speed up the subsequent alpha expansion for
the unlabeled part, or can be used as it is for time-critical applications.
To derive our technique, we generalize the algorithm of Felzenszwalb et al.
[7] for {\em Tree Metrics}. We also show a connection to {\em k-submodular
functions} from combinatorial optimization, and discuss {\em k-submodular
relaxations} for general energy functions.
Comment: Accepted to ICCV 201
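For context, the Potts energy referred to above is conventionally written as follows (a sketch in standard notation, with pixel set $\mathcal{P}$, neighboring pairs $\mathcal{N}$, unary terms $\theta_p$ and nonnegative pairwise weights $\lambda_{pq}$; these symbols are generic and not taken from the paper itself):
\[
E(x) \;=\; \sum_{p \in \mathcal{P}} \theta_p(x_p) \;+\; \sum_{(p,q) \in \mathcal{N}} \lambda_{pq}\,[x_p \neq x_q], \qquad x_p \in \{1,\dots,k\}.
\]
The indicator term $[x_p \neq x_q]$, which penalizes every label disagreement equally, is what distinguishes the Potts model from more general pairwise energies.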
A Unified View of Piecewise Linear Neural Network Verification
The success of Deep Learning and its potential use in many safety-critical
applications has motivated research on formal verification of Neural Network
(NN) models. Despite the reputation of learned NN models for behaving as black
boxes and the theoretical hardness of proving their properties, researchers
have been successful in verifying some classes of models by exploiting their
piecewise linear structure and taking insights from formal methods such as
Satisfiability Modulo Theory. These methods are however still far from scaling
to realistic neural networks. To facilitate progress in this crucial area, we
make two key contributions. First, we present a unified framework that
encompasses previous methods. This analysis results in the identification of
new methods that combine the strengths of multiple existing approaches,
accomplishing a speedup of two orders of magnitude compared to the previous
state of the art. Second, we propose a new data set of benchmarks which
includes a collection of previously released test cases. We use the benchmark to
provide the first experimental comparison of existing algorithms and identify
the factors impacting the hardness of verification problems.
Comment: Updated version of "Piecewise Linear Neural Network verification: A comparative study"
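As a brief illustration of the piecewise-linear structure such verifiers exploit, a ReLU activation $y = \max(0, x)$ with known pre-activation bounds $l \le x \le u$ (with $l < 0 < u$) admits the standard mixed-integer encoding below; this is the generic textbook big-M formulation, not necessarily the exact encoding used in the paper:
\[
y \ge 0, \qquad y \ge x, \qquad y \le u\,a, \qquad y \le x - l\,(1 - a), \qquad a \in \{0, 1\}.
\]
Setting the binary $a = 1$ forces $y = x$ (active phase), while $a = 0$ forces $y = 0$ (inactive phase); relaxing $a$ to $[0,1]$ gives the linear relaxation that branch-and-bound style verifiers progressively tighten.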
A Test Vector Minimization Algorithm Based On Delta Debugging For Post-Silicon Validation Of PCIe Rootport
In silicon hardware design, such as designing PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify the functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors has to be analyzed, which slows the process down. To solve this problem, a test vector minimizer algorithm is proposed to eliminate redundant test vectors that do not contribute to reproduction of a test failure, thereby improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively testing each subset (or its complement) on a post-silicon System-Under-Test (SUT) to identify and eliminate redundant test vectors. Test results using test vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors including erroneous vectors, the same minimizer requires about 140 μs per injected erroneous test vector. Thus, the minimizer’s CPU consumption is significantly smaller than the typical runtime of a test on the SUT. The factors that most significantly impact the algorithm’s performance are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and of the positions of the erroneous vectors are relatively minor. The minimization algorithm is therefore most effective when only a few erroneous test vectors are present within a large set of test vectors.
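To make the binary-partitioning loop concrete, here is a minimal Python sketch of a delta-debugging-style minimizer in the spirit of the description above; the fails(subset) predicate, which would replay a candidate subset of vectors on the SUT and report whether the failure still reproduces, is a hypothetical placeholder rather than the thesis' actual test harness:

def ddmin(vectors, fails):
    """Shrink `vectors` to a smaller list that still makes `fails` return True."""
    assert fails(vectors), "the full test case must reproduce the failure"
    n = 2  # current number of partitions
    while len(vectors) >= 2:
        n = min(n, len(vectors))
        size = len(vectors) // n
        # Split the current test case into roughly equal chunks.
        chunks = [vectors[i:i + size] for i in range(0, len(vectors), size)]
        reduced = False
        for i, chunk in enumerate(chunks):
            complement = [v for j, c in enumerate(chunks) if j != i for v in c]
            if fails(chunk):
                # This chunk alone reproduces the failure: recurse into it.
                vectors, n, reduced = chunk, 2, True
                break
            if fails(complement):
                # Chunk i is redundant: drop it and keep a coarser partition.
                vectors, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(vectors):
                break  # already at single-vector granularity: cannot shrink further
            n = 2 * n  # refine the partition and retry
    return vectors

With a single injected erroneous vector, one of the two halves reproduces the failure at every level, so the candidate set roughly halves per round and the number of SUT runs grows only logarithmically with the test-case size, which is consistent with the abstract's observation that the minimizer is cheapest when few erroneous vectors are present.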
Double Bubbles Minimize
The classical isoperimetric inequality in R^3 states that the surface of
smallest area enclosing a given volume is a sphere. We show that the least area
surface enclosing two equal volumes is a double bubble, a surface made of two
pieces of round spheres separated by a flat disk, meeting along a single circle
at an angle of 120 degrees.
Comment: 57 pages, 32 figures. Includes the complete code for a C++ program as described in the article. You can obtain this code by viewing the source of this article.
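For reference, the classical isoperimetric inequality cited above states that a region of volume $V$ in $\mathbb{R}^3$ has boundary area $A$ satisfying
\[
A^3 \ge 36\pi V^2,
\]
with equality exactly for the round ball; the double bubble theorem is the analogous sharp statement for the surface enclosing and separating two equal volumes.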