An efficient implementation of a test for EA-equivalence
We implement, in the C programming language, an algorithm proposed by Kaleyski for testing EA-equivalence between vectorial Boolean functions, and observe that it reduces the running time needed to decide equivalence by a factor of up to 300 in many cases, compared with the original Magma implementation. Our implementation also significantly reduces memory usage, making it possible to run the algorithm in dimensions 10 and above, which was impossible with the original implementation due to its memory consumption. Our approach recovers the exact form of the equivalence and can thus prove that two given functions are equivalent (by comparison, computing invariants for the functions, the approach typically used in practice, can only show that two functions are not equivalent). Furthermore, our approach works for functions of any algebraic degree, while most existing approaches (such as invariants and other algorithms for EA-equivalence) are restricted to the quadratic case. We then adapt Kaleyski's algorithm to test for linear and affine equivalence instead of EA-equivalence, and supply a C implementation of this procedure as well. As an application, we show how this method can be used to test quadratic APN functions for EA-equivalence through the linear equivalence of their orthoderivatives. We observe that this approach reduces the time needed to decide EA-equivalence by a factor of up to 20, compared with our efficient C implementation described above. The downside compared to Kaleyski's original algorithm is that this faster method makes it difficult to recover the exact form of the EA-equivalence between the tested APN functions. We confirm this by running computational experiments in dimension 6 and observing that only one out of all possible linear equivalences between the orthoderivatives corresponds to the EA-equivalence between the APN functions in question.
To the best of our knowledge, this is the first investigation in the literature into the exact relationship between the EA-equivalence of quadratic APN functions and the affine equivalence of their orthoderivatives.
Masteroppgave i informatikk (Master's thesis in informatics)
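For context, an APN (almost perfect nonlinear) function is one whose nonzero derivatives x -> F(x + a) + F(x) take each value at most twice. The sketch below is a plain brute-force check of this property, not Kaleyski's equivalence-testing algorithm or the implementation discussed in the abstract; the representation of GF(2^3) via the primitive polynomial x^3 + x + 1 is an assumption chosen for the example.

```python
def gf_mul(a, b, n=3, poly=0b1011):
    """Carry-less multiplication in GF(2^n), reduced by poly (here x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:          # degree overflowed: reduce modulo poly
            a ^= poly
    return r


def is_apn(F):
    """F, given as a lookup table on GF(2)^n, is APN iff every nonzero
    derivative x -> F(x ^ a) ^ F(x) takes each output value at most twice."""
    size = len(F)
    for a in range(1, size):
        counts = [0] * size
        for x in range(size):
            counts[F[x ^ a] ^ F[x]] += 1
        if max(counts) > 2:
            return False
    return True


# The Gold function x^3 is a classic quadratic APN function.
cube = [gf_mul(x, gf_mul(x, x)) for x in range(8)]
print(is_apn(cube))  # True
```

The same table-based check works for any dimension n, though its cost grows as 2^(2n), which is why dedicated equivalence algorithms such as Kaleyski's matter in practice.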
An SDP Approach For Solving Quadratic Fractional Programming Problems
This paper considers a fractional programming problem (P) which minimizes a ratio of quadratic functions subject to a two-sided quadratic constraint. As is well known, the fractional objective function can be replaced by a parametric family of quadratic functions, which makes (P) highly related to, but more difficult than, a single quadratic programming problem subject to a similar constraint set. The task is to find the optimal parameter and then look for the optimal solution if it is attained. In contrast with the classical Dinkelbach method, which iterates over the parameter, we propose a suitable constraint qualification under which a new version of the S-lemma with an equality can be proved, so that the optimal parameter can be computed directly via an exact SDP relaxation. When the constraint set of (P) degenerates to a one-sided inequality, the same SDP approach can be applied to solve (P) {\it without any condition}. We observe that the difference between a two-sided problem and a one-sided problem lies in the fact that the S-lemma with an equality does not have a natural Slater point to hold, which makes the former essentially more difficult than the latter. Moreover, this work does not assume the existence of a positive-definite linear combination of the quadratic terms (also known as the dual Slater condition, or a positive-definite matrix pencil); our result thus provides a novel extension to the so-called "hard case" of the generalized trust region subproblem subject to the upper and the lower level sets of a quadratic function.
Comment: 26 pages
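The classical Dinkelbach iteration that the abstract contrasts itself with can be sketched on a toy one-dimensional instance. This is illustrative only: it approximates the feasible set {x : 1 <= x^2 <= 4} by a finite grid instead of solving each parametric subproblem exactly, and it does not implement the paper's SDP relaxation; the objective (x^2 + 2x + 4)/(x^2 + 1) is an assumption chosen for the example.

```python
def dinkelbach(f, g, feasible, tol=1e-9, max_iter=100):
    """Dinkelbach iteration for min f(x)/g(x) over a finite candidate set.

    Assumes g > 0 on the candidates. Each step solves the parametric
    subproblem min_x f(x) - mu * g(x) (here by brute force over the grid)
    and updates mu to the ratio at the minimizer; mu converges to the
    optimal value of the fractional problem when the subproblem value hits 0.
    """
    x = feasible[0]
    mu = f(x) / g(x)
    for _ in range(max_iter):
        x = min(feasible, key=lambda t: f(t) - mu * g(t))
        if abs(f(x) - mu * g(x)) < tol:
            break
        mu = f(x) / g(x)
    return mu, x


# Toy instance: minimize (x^2 + 2x + 4) / (x^2 + 1) s.t. 1 <= x^2 <= 4,
# i.e. over the two-sided quadratic constraint x in [-2, -1] U [1, 2].
f = lambda x: x * x + 2 * x + 4
g = lambda x: x * x + 1
feasible = [i / 100 for i in range(-200, 201) if 1 <= (i / 100) ** 2 <= 4]
mu, x = dinkelbach(f, g, feasible)
print(mu, x)  # optimal ratio 0.8, attained at x = -2
```

The grid stand-in hides exactly the difficulty the paper addresses: solving the parametric quadratic subproblem over a genuine two-sided quadratic constraint, which is where the S-lemma with equality and the SDP relaxation come in.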
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
Several common general-purpose optimization algorithms are compared for finding A- and D-optimal designs for different types of statistical models of varying complexity, including high-dimensional models with five and more factors. The algorithms of interest include exact methods, such as the interior point method, the Nelder–Mead method, the active set method, and sequential quadratic programming, and metaheuristic algorithms, such as particle swarm optimization, simulated annealing and genetic algorithms. Several simulations are performed, which provide general recommendations on the utility and performance of each method, including hybridized versions of metaheuristic algorithms for finding optimal experimental designs. A key result is that general-purpose optimization algorithms, both exact methods and metaheuristic algorithms, perform well for finding optimal approximate experimental designs.
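As a small, self-contained illustration of an approximate D-optimal design computation, the sketch below uses the multiplicative (Titterington-type) weight update rather than any of the general-purpose solvers compared in the paper. It recovers the classic result for the two-parameter linear model y = b0 + b1*t on [-1, 1], where the D-optimal design puts weight 1/2 on each endpoint; the candidate grid and iteration count are assumptions chosen for the example.

```python
def d_optimal_weights(points, iters=500):
    """Multiplicative algorithm for an approximate D-optimal design under the
    two-parameter linear model y = b0 + b1*t, with regressor f(t) = (1, t).

    Each step rescales every weight by d(t_i)/p, where d(t) = f(t)' M^{-1} f(t)
    is the variance function and p = 2 is the number of parameters; by the
    equivalence theorem, max_t d(t) = p at the D-optimal design.
    """
    p = 2
    n = len(points)
    w = [1.0 / n] * n
    for _ in range(iters):
        # Information matrix M = sum_i w_i f(t_i) f(t_i)'.
        m00 = sum(w)
        m01 = sum(wi * t for wi, t in zip(w, points))
        m11 = sum(wi * t * t for wi, t in zip(w, points))
        det = m00 * m11 - m01 * m01
        # d(t) via the explicit 2x2 inverse of M.
        d = [(m11 - 2 * m01 * t + m00 * t * t) / det for t in points]
        w = [wi * di / p for wi, di in zip(w, d)]
    return w


points = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([round(wi, 3) for wi in d_optimal_weights(points)])
# weight concentrates on the endpoints t = -1 and t = 1
```

The multiplicative update is monotone for the D-criterion but can converge slowly; the general-purpose and metaheuristic methods compared in the paper are alternatives for the same weight-optimization problem.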