Eliminating Variables in Boolean Equation Systems
Systems of Boolean equations of low degree arise in a natural way when
analyzing block ciphers. The cipher's round functions relate the secret key to
auxiliary variables that are introduced by each successive round. In algebraic
cryptanalysis, the attacker attempts to solve the resulting equation system in
order to extract the secret key. In this paper we study algorithms for
eliminating the auxiliary variables from these systems of Boolean equations. It
is known that elimination of variables in general increases the degree of the
equations involved. In order to contain computational and storage complexity, we
present two new algorithms that perform elimination while bounding the degree of
the intermediate equations at the lowest value possible for elimination.
Further we show that the new algorithms are related to the well known \emph{XL}
algorithm. We apply the algorithms to a downscaled version of the LowMC cipher
and to a toy cipher based on the Prince cipher, and report on experimental
results pertaining to these examples.
Comment: 21 pages, 3 figures, journal paper
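The connection to \emph{XL} can be illustrated with a toy sketch (the representation, helper names and the tiny two-variable system below are our illustrations, not the paper's algorithms): multiply each Boolean equation by low-degree monomials, linearize by treating every monomial as an independent unknown, and Gaussian-eliminate over GF(2).

```python
from itertools import combinations

# A Boolean polynomial is a set of monomials; a monomial is a frozenset of
# variable indices (frozenset() is the constant 1). Addition over GF(2) is
# symmetric difference, since equal monomials cancel.

def mono_mul(f, m):
    """Multiply polynomial f by monomial m in the Boolean ring (x_i^2 = x_i)."""
    out = set()
    for mono in f:
        out ^= {mono | m}          # x_i^2 = x_i makes the product a union
    return out

def monomials(n, d):
    """All monomials in n variables of degree at most d."""
    return [frozenset(c) for k in range(d + 1)
            for c in combinations(range(n), k)]

def xl(system, n, D):
    """XL-style step: extend the system so all products have degree <= D,
    then linearize and Gaussian-eliminate."""
    deg = lambda f: max(len(m) for m in f)
    key = lambda m: (len(m), tuple(sorted(m)))   # graded monomial order
    rows = [p for f in system
              for m in monomials(n, D - deg(f))
              if (p := mono_mul(f, m))]
    basis = {}                                   # leading monomial -> row
    for f in rows:
        f = set(f)
        while f:
            lm = max(f, key=key)
            if lm not in basis:
                basis[lm] = f
                break
            f ^= basis[lm]                       # cancel the leading term
    return list(basis.values())

# Toy system with unique solution x0 = 1, x1 = 0:
#   f1 = x0*x1 + x0 + 1,   f2 = x0*x1 + x1
f1 = {frozenset({0, 1}), frozenset({0}), frozenset()}
f2 = {frozenset({0, 1}), frozenset({1})}
reduced = xl([f1, f2], n=2, D=3)
```

For this toy system the reduced rows include the linear polynomials x0 + 1 and x1, i.e. the elimination step exposes the solution directly.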
High performance reduced order modeling techniques based on optimal energy quadrature: application to geometrically non-linear multiscale inelastic material modeling
A High-Performance Reduced-Order Model (HPROM) technique, previously presented by the authors in the context of hierarchical multiscale models for non-linear materials undergoing infinitesimal strains, is generalized to deal with large-deformation elasto-plastic problems. The proposed HPROM technique uses a Proper Orthogonal Decomposition procedure to build a reduced basis of the primary kinematical variable of the micro-scale problem, defined in terms of the micro-deformation gradient fluctuations. A Galerkin projection onto this reduced basis is then used to reduce the dimensionality of the micro-force balance equation, the stress homogenization equation and the effective macro-constitutive tangent tensor equation. Finally, a reduced goal-oriented quadrature rule is introduced to compute the non-affine terms of these equations. The main emphasis of this paper is the numerical assessment of the developed HPROM technique. The numerical experiments are performed on a micro-cell simulating a randomly distributed set of elastic inclusions embedded in an elasto-plastic matrix, a micro-structure representative of a typical ductile metallic alloy. Applied to this type of problem, the HPROM technique displays high computational speed-ups that increase with the complexity of the finite element model. From these results, we conclude that the proposed HPROM technique is an effective computational tool for modeling the multiscale behavior of heterogeneous materials subjected to large deformations involving two well-separated length scales, with very large speed-ups and acceptable accuracy with respect to the high-fidelity case.
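The POD/Galerkin reduction at the core of such HPROM techniques can be sketched generically (a minimal illustration on synthetic snapshot data, not the authors' formulation; the names `Phi` and `A_r` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: s high-dimensional states that actually live in a
# k-dimensional subspace (stand-ins for sampled micro-scale solutions).
n, k, s = 200, 5, 40
S = rng.standard_normal((n, k)) @ rng.standard_normal((k, s))

# POD: the truncated SVD of the snapshot matrix yields the reduced basis Phi.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.sum(sv > 1e-10 * sv[0]))        # keep modes above a relative tolerance
Phi = U[:, :r]                             # n x r, orthonormal columns

# Galerkin projection: a full-order linear operator A becomes the small
# r x r operator A_r = Phi^T A Phi; states are approximated as u ~= Phi q.
A = rng.standard_normal((n, n))
A_r = Phi.T @ A @ Phi
```

Here the snapshots are reproduced to machine precision because they lie exactly in the retained subspace; in practice r is chosen from the singular-value decay.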
Quantifying resilience enhancement of UTS through exploiting Connected Community and Internet of Everything emerging technologies
This work aims at investigating and quantifying the Urban Transport System
(UTS) resilience enhancement enabled by the adoption of emerging technologies
such as the Internet of Everything (IoE) and the new trend of the Connected
Community (CC). A conceptual extension of the Functional Resonance Analysis
Method (FRAM) and its formalization are proposed and used to model UTS
complexity. The scope is to identify the system functions and their
interdependencies, with a particular focus on those that relate to and impact
people and communities. Network analysis techniques are applied to the FRAM
model to identify and rank the most critical community-related functions. The
notion of Variability Rate (VR) is defined as the amount of output variability
generated by an upstream function that can be tolerated/absorbed by a
downstream function without a significant increase in its own output
variability. A fuzzy-based quantification of the VR from expert judgment is
developed for use when quantitative data are not available. The approach is
applied to a critical scenario (cloudburst/flash flooding) in two cases: with
and without CC and IoE implemented. The results show a remarkable VR
enhancement when CC and IoE are deployed.
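A fuzzy expert-judgment quantification of this kind can be sketched as follows (an illustrative Mamdani-style fragment; the linguistic terms, membership functions and the normalized [0, 1] VR scale are our assumptions, not the paper's calibration):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for tolerable output variability on a normalized [0, 1] scale
TERMS = {
    "low":    (0.0, 0.0, 0.4),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.6, 1.0, 1.0),
}

def vr_from_judgments(judgments, steps=1000):
    """Aggregate expert judgments (term -> confidence in [0, 1]) by clipped
    maximum and return the centroid of the resulting fuzzy set (the VR)."""
    num = den = 0.0
    for i in range(steps + 1):
        x = i / steps
        mu = max(min(w, tri(x, *TERMS[t])) for t, w in judgments.items())
        num += x * mu
        den += mu
    return num / den if den else 0.0
```

An expert fully confident that the tolerable variability is "medium" yields a VR of 0.5, the centroid of the symmetric middle triangle.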
An Improvement over the GVW Algorithm for Inhomogeneous Polynomial Systems
The GVW algorithm is a signature-based algorithm for computing Gr\"obner
bases. If the input system is not homogeneous, some J-pairs with higher
signatures but lower degrees are rejected by GVW's syzygy criterion; instead,
GVW has to compute J-pairs with lower signatures but higher degrees.
Consequently, the degrees of polynomials appearing during the computation may
grow unnecessarily and the computation becomes more expensive. In this paper, a
variant of the GVW algorithm, called M-GVW, is proposed, and mutant pairs are
introduced to overcome the inconveniences brought by inhomogeneous input
polynomials. Techniques from linear algebra are used to improve efficiency.
Both GVW and M-GVW have been implemented in C++ and tested on many examples
over Boolean polynomial rings. The timings show that M-GVW usually performs
much better than the original GVW algorithm when mutant pairs are found.
M-GVW is also compared with the built-in Gr\"obner basis functions of Maple,
Singular and Magma. Thanks to the efficient routines of the M4RI library, the
experimental results show that M-GVW is very efficient.
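The mutant phenomenon can be illustrated in isolation (a toy sketch over a Boolean polynomial ring; the set-of-monomials representation is ours and none of M-GVW's signature machinery is modeled): after a linear-algebra reduction step, any new polynomial whose degree falls below the degree of the inputs is a mutant, and such unexpectedly low-degree polynomials are especially valuable for further reductions.

```python
# A Boolean polynomial is a set of monomials (frozensets of variable indices);
# addition over GF(2) is symmetric difference.

def degree(f):
    return max(len(m) for m in f)

def gauss_reduce(polys):
    """Row-reduce a list of polynomials, keyed by leading monomial."""
    key = lambda m: (len(m), tuple(sorted(m)))   # graded monomial order
    basis = {}
    for f in polys:
        f = set(f)
        while f:
            lm = max(f, key=key)
            if lm not in basis:
                basis[lm] = f
                break
            f ^= basis[lm]                       # cancel the leading term
    return list(basis.values())

# Two degree-2 inputs whose sum has degree 1: reduction produces a mutant.
f1 = {frozenset({0, 1}), frozenset({2})}      # x0*x1 + x2
f2 = {frozenset({0, 1}), frozenset({0})}      # x0*x1 + x0
rows = gauss_reduce([f1, f2])
d_in = max(degree(f) for f in (f1, f2))
mutants = [f for f in rows if degree(f) < d_in]   # here: x0 + x2
```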
An efficient multi-core implementation of a novel HSS-structured multifrontal solver using randomized sampling
We present a sparse linear system solver that is based on a multifrontal
variant of Gaussian elimination, and exploits low-rank approximation of the
resulting dense frontal matrices. We use hierarchically semiseparable (HSS)
matrices, which have low-rank off-diagonal blocks, to approximate the frontal
matrices. For HSS matrix construction, a randomized sampling algorithm is used
together with interpolative decompositions. The combination of the randomized
compression with a fast ULV HSS factorization leads to a solver with lower
computational complexity than the standard multifrontal method for many
applications, resulting in speedups of up to 7-fold for problems in our test
suite. The implementation targets many-core systems by using task parallelism
with dynamic runtime scheduling. Numerical experiments show performance
improvements over state-of-the-art sparse direct solvers. The implementation
achieves high performance and good scalability on a range of modern shared
memory parallel systems, including the Intel Xeon Phi (MIC). The code is part
of a software package called STRUMPACK -- STRUctured Matrices PACKage, which
also has a distributed memory component for dense rank-structured matrices.
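The randomized compression step can be sketched generically (a Halko/Martinsson/Tropp-style range finder on a synthetic low-rank matrix; this illustrates only the sampling idea, not STRUMPACK's actual HSS construction, interpolative decompositions or ULV factorization):

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_lowrank(A, rank, oversample=10):
    """Randomized range finder: sample the range with Y = A @ Omega,
    orthonormalize, and compress A as Q @ (Q^T A)."""
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    return Q, Q.T @ A                # A is approximated by Q @ B

# A dense stand-in for a frontal matrix with (numerically) low rank
n, r = 300, 8
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
Q, B = randomized_lowrank(A, rank=r)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

Because A here is exactly rank 8 and the sample has 18 columns, the relative error is at machine-precision level; for frontal matrices with low-rank off-diagonal blocks the same sampling drives the HSS compression.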
Zero CR-curvature equations for Levi degenerate hypersurfaces via Pocchiola's invariants
In our earlier articles we studied tube hypersurfaces that are 2-nondegenerate
and uniformly Levi degenerate of rank 1. In particular, we showed that the
vanishing of the CR-curvature of such a hypersurface is equivalent to the Monge
equation with respect to one of the variables. In the present paper we provide
an alternative, shorter derivation of this equation by utilizing two invariants
discovered by S. Pocchiola. We also investigate Pocchiola's invariants in the
rigid case and give a partial classification of rigid 2-nondegenerate
hypersurfaces, uniformly Levi degenerate of rank 1, with vanishing CR-curvature.
Comment: arXiv admin note: text overlap with arXiv:1608.0291
Automated Design Space Exploration and Datapath Synthesis for Finite Field Arithmetic with Applications to Lightweight Cryptography
Today, emerging technologies are reaching astronomical proportions. For example, the Internet
of Things has numerous applications and consists of countless different devices using different
technologies with different capabilities. But the one invariant is their connectivity. Consequently,
secure communications, and cryptographic hardware as a means of providing them, are faced
with new challenges. Cryptographic algorithms intended for hardware implementations must be
designed with a good trade-off between implementation efficiency and sufficient cryptographic
strength. Finite fields are widely used in cryptography. Examples of algorithm design choices
related to finite field arithmetic are the field size, which arithmetic operations to use, how to
represent the field elements, etc. As there are many parameters to be considered and analyzed, an
automation framework is needed.
This thesis proposes a framework for automated design, implementation and verification of finite
field arithmetic hardware. The underlying motif throughout this work is “math meets hardware”.
The automation framework is designed to bring the awareness of underlying mathematical
structures to the hardware design flow. It is implemented in GAP, an open source computer algebra
system that can work with finite fields and has symbolic computation capabilities. The framework
is roughly divided into two phases, the architectural decisions and the automated design
generation. The architectural decisions phase supports parameter search and produces a list of candidates.
The automated design generation phase is invoked for each candidate, and the generated VHDL
files are passed on to conventional synthesis tools. The candidates and their implementation results
form the design space, and the framework allows rapid design space exploration in a systematic
way. In this thesis, design space exploration is focused on finite field arithmetic.
Three distinctive features of the proposed framework are the structure of finite fields, tower-field
support, and on-the-fly submodule generation. Each finite field used in the design is represented as
both a field and its corresponding vector space. It is easy for a designer to switch between fields
and vector spaces, but strict distinction of the two is necessary for hierarchical designs. When an
expression is defined over an extension field, the top-level module contains element signals and
submodules for arithmetic operations on those signals. The submodules are generated with
corresponding vector signals and the arithmetic operations are now performed on the coordinates.
For tower fields, the submodules are generated for the subfield operations, and the design is generated
in a top-down fashion. The binding of expressions to the appropriate finite fields or vector spaces,
together with a set of customized methods, allows on-the-fly generation of the expressions that
implement arithmetic operations, and hence submodule generation.
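The tower-field idea, where an operation on an extension-field element is delegated to submodules acting on its subfield coordinates, can be modeled in software (an illustrative sketch, not the framework's GAP code; the bit encodings and the reduction polynomials w^2 + w + 1 and y^2 + y + w are our choices):

```python
# GF(4) = GF(2)[w]/(w^2 + w + 1); elements are 2-bit ints b1*w + b0 (0..3).
# Multiplication is given by a precomputed table (the "subfield submodule").
MUL4 = [[0, 0, 0, 0],
        [0, 1, 2, 3],
        [0, 2, 3, 1],    # w * w = w + 1,  w * w^2 = 1
        [0, 3, 1, 2]]    # w^2 * w^2 = w

def g4_mul(a, b):
    return MUL4[a][b]

# GF(16) = GF(4)[y]/(y^2 + y + w); an element is a pair (hi, lo) = hi*y + lo,
# i.e. a length-2 coordinate vector over the subfield.
W = 2  # the constant w from the reduction polynomial

def g16_add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])    # coordinate-wise subfield addition

def g16_mul(u, v):
    """(a y + b)(c y + d) reduced via y^2 = y + w; all arithmetic is
    delegated to the GF(4) submodule acting on the coordinates."""
    a, b = u
    c, d = v
    ac = g4_mul(a, c)
    hi = ac ^ g4_mul(a, d) ^ g4_mul(b, c)
    lo = g4_mul(ac, W) ^ g4_mul(b, d)
    return (hi, lo)
```

Here `g16_mul` plays the role of a top-level module: its inputs are vectors of subfield coordinates, and every actual computation is performed by the generated subfield operation `g4_mul`, mirroring the top-down generation described above.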
In light of the NIST Lightweight Cryptography (LWC) project, this work focuses mainly on small
finite fields. The thesis illustrates the impact of hardware implementation results during the design
process of WAGE, a Round 2 candidate in the NIST LWC standardization competition. WAGE
is a hardware-oriented authenticated encryption scheme. The parameter selection for WAGE
aimed to balance security against hardware implementation area, with hardware implementation
results informing many design decisions, for example the field size and the representation of field elements.
In the proposed framework, the components of WAGE are used as an example to illustrate different
automation flows and demonstrate the design space exploration on a real-world algorithm.