Exploiting chordal structure in polynomial ideals: a Gr\"obner bases approach
Chordal structure and bounded treewidth allow for efficient computation in
numerical linear algebra, graphical models, constraint satisfaction and many
other areas. In this paper, we begin the study of how to exploit chordal
structure in computational algebraic geometry, and in particular, for solving
polynomial systems. The structure of a system of polynomial equations can be
described in terms of a graph. By carefully exploiting the properties of this
graph (in particular, its chordal completions), more efficient algorithms can
be developed. To this end, we develop a new technique, which we refer to as
chordal elimination, that relies on elimination theory and Gr\"obner bases. By
maintaining graph structure throughout the process, chordal elimination can
outperform standard Gr\"obner basis algorithms in many cases. The reason is
that all computations are done on "smaller" rings, of size equal to the
treewidth of the graph. In particular, for a restricted class of ideals, the
computational complexity is linear in the number of variables. Chordal
structure arises in many relevant applications. We demonstrate the suitability
of our methods in examples from graph colorings, cryptography, sensor
localization and differential equations.
Comment: 40 pages, 5 figures
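The elimination at the heart of this approach can be illustrated with a standard lex Gröbner basis, which computes elimination ideals variable by variable; on a chordally structured system each such step touches only a small ring. A minimal SymPy sketch (an illustration of lex elimination on a path-structured system, not the paper's chordal elimination algorithm):

```python
from sympy import symbols, groebner

# Small system whose graph is the path x0 - x1 - x2 (already chordal):
# each polynomial couples only adjacent variables.
x0, x1, x2 = symbols('x0 x1 x2')
polys = [x0**2 - x1, x1**2 - x2, x2**2 - 1]

# A lex Groebner basis performs elimination: the generators involving
# only the trailing variables generate the elimination ideal.
G = groebner(polys, x0, x1, x2, order='lex')
elim = [g for g in G.exprs if not g.has(x0) and not g.has(x1)]
print(elim)  # -> [x2**2 - 1], the ideal eliminated down to x2
```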
Generalizing input-driven languages: theoretical and practical benefits
Regular languages (RL) are the simplest family in Chomsky's hierarchy. Thanks
to their simplicity they enjoy various nice algebraic and logic properties that
have been successfully exploited in many application fields. Practically all of
their related problems are decidable, so that they support automatic
verification algorithms. Also, they can be recognized in real time.
Context-free languages (CFL) are another major family well-suited to
formalize programming, natural, and many other classes of languages; their
increased generative power w.r.t. RL, however, causes the loss of several
closure properties and of the decidability of important problems; furthermore
they need complex parsing algorithms. Thus, various subclasses thereof have
been defined with different goals, spanning from efficient, deterministic
parsing to closure properties, logic characterization and automatic
verification techniques.
Among CFL subclasses, so-called structured ones, i.e., those where the
typical tree-structure is visible in the sentences, exhibit many of the
algebraic and logic properties of RL, whereas deterministic CFL have been
thoroughly exploited in compiler construction and other application fields.
After surveying and comparing the main properties of those various language
families, we go back to operator precedence languages (OPL), an old family
through which R. Floyd pioneered deterministic parsing, and we show that they
offer unexpected properties in two fields so far investigated in totally
independent ways: they enable parsing parallelization in a more effective way
than traditional sequential parsers, and exhibit the same algebraic and logic
properties so far obtained only for less expressive language families.
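Floyd's operator-precedence idea can be made concrete: parsing decisions depend only on a fixed precedence relation between adjacent terminals, which is exactly what makes local, and hence parallel, parsing possible. The following is a hypothetical minimal evaluator for a two-operator grammar, driven purely by such a precedence matrix (an illustration of the mechanism, not the parallel algorithm discussed above):

```python
# Floyd-style operator-precedence relations for the tiny grammar
# E -> E + E | E * E | num, with '*' binding tighter than '+'.
# PREC[a][b] says whether terminal a yields precedence to (<) or takes
# precedence over (>) the incoming terminal b; 'n' stands for a number.
PREC = {
    '$': {'+': '<', '*': '<', 'n': '<', '$': '='},
    '+': {'+': '>', '*': '<', 'n': '<', '$': '>'},
    '*': {'+': '>', '*': '>', 'n': '<', '$': '>'},
    'n': {'+': '>', '*': '>', '$': '>'},
}
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(tokens):
    """Shift-reduce evaluation driven only by the precedence matrix."""
    # stack entries: (terminal symbol or non-terminal 'E', value)
    stack = [('$', None)]
    stream = [('n', t) if isinstance(t, (int, float)) else (t, None)
              for t in tokens] + [('$', None)]
    while True:
        top = next(s for s, _ in reversed(stack) if s != 'E')
        rel = PREC[top][stream[0][0]]
        if rel == '=':                        # '$' facing '$': accept
            return stack[-1][1]
        if rel == '<':                        # shift: handle not complete
            stack.append(stream.pop(0))
        elif stack[-1][0] == 'n':             # reduce E -> num
            stack[-1] = ('E', stack[-1][1])
        else:                                 # reduce E -> E op E
            (_, rhs), (op, _), (_, lhs) = stack.pop(), stack.pop(), stack.pop()
            stack.append(('E', OPS[op](lhs, rhs)))

print(evaluate([2, '+', 3, '*', 4]))   # -> 14
```

Because each decision consults only the two terminals at hand, independent chunks of input can, in principle, be parsed concurrently and stitched together, which is the property the abstract exploits for parallelization.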
Minimum Number of Probes for Brain Dynamics Observability
In this paper, we address the problem of placing sensor probes in the brain
such that the system dynamics are generically observable. The system states
can encode, for instance, the firing rates of the neurons or their ensembles,
following a neural-topological (structural) approach, and the sensors are
assumed to be dedicated, i.e., each can measure only a single state at a time. Even
though the mathematical description of brain dynamics is (yet) to be
discovered, we build on its observed fractal characteristics and assume that
the model of the brain activity satisfies fractional-order dynamics.
Although the sensor placement explored in this paper particularly considers
the observability of brain dynamics, the proposed methodology
applies to any fractional-order linear system. Thus, the main contribution of
this paper is to show how to place the minimum number of dedicated sensors,
i.e., sensors measuring only a state variable, to ensure generic observability
in discrete-time fractional-order systems for a specified finite interval of
time. Finally, an illustrative example of the main results is provided using
electroencephalogram (EEG) data.
Comment: arXiv admin note: text overlap with arXiv:1507.0720
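For the integer-order special case, generic observability of a discrete-time linear system with dedicated sensors reduces to a rank test on the observability matrix; the fractional-order setting of the paper replaces the powers A^k with more involved matrices derived from the fractional dynamics. A minimal integer-order sketch (the toy system is illustrative, not from the paper):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., C A^(n-1) for a discrete-time LTI pair (A, C)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Toy 3-state chain x1 -> x2 -> x3 with a single dedicated sensor on x3
# (each row of C selects exactly one state, mirroring dedicated sensors).
A = np.array([[0.9, 0.0, 0.0],
              [1.0, 0.8, 0.0],
              [0.0, 1.0, 0.7]])
C = np.array([[0.0, 0.0, 1.0]])

O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O))   # -> 3: one sensor at the chain's end suffices
```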
Numerical study on active and passive trailing edge morphing applied to a multi-MW wind turbine section
A progressive increase in turbine size has characterized technological development in offshore wind energy utilization. This trend is reflected in growing blade length and weight. For very large turbines, standard control systems may not be optimal for achieving the best performance and the best vibratory load damping while maintaining maximum energy production. For this reason, new solutions have been proposed in research. One of these is the possibility of morphing the blade surface, either actively (to increase performance in the low-wind region) or passively (to reduce loads).
In this work, we present a numerical study on active and passive trailing edge morphing applied to large wind turbines. In particular, the study focuses on the aerodynamic response of a midspan blade section, in terms of fluid-structure interaction (FSI) and driven surface deformation.
We test the active system in a simple start-up procedure and the passive system in power production under turbulent wind conditions, that is, two situations in which we expect these systems could improve performance.
All the computations are carried out with an FSI code, which couples a 2D CFD solver, a moving-mesh solver (both implemented in the OpenFOAM library) and a FEM solver.
We evaluate all the boundary conditions to apply in the section problem by simulating the NREL 5-MW wind turbine with the NREL CAE tools developed for wind turbine simulation.
Consensus Computation in Unreliable Networks: A System Theoretic Approach
This work addresses the problem of ensuring trustworthy computation in a
linear consensus network. A solution to this problem is relevant for several
tasks in multi-agent systems including motion coordination, clock
synchronization, and cooperative estimation. In a linear consensus network, we
allow for the presence of misbehaving agents, whose behavior deviates from the
nominal consensus evolution. We model misbehaviors as unknown and unmeasurable
inputs affecting the network, and we cast the misbehavior detection and
identification problem into an unknown-input system theoretic framework. We
consider two extreme cases of misbehaving agents, namely faulty (non-colluding)
and malicious (Byzantine) agents. First, we characterize the set of inputs that
allow misbehaving agents to affect the consensus network while remaining
undetected and/or unidentified from certain observing agents. Second, we
provide worst-case bounds for the number of concurrent faulty or malicious
agents that can be detected and identified. Precisely, the consensus network
needs to be 2k+1 (resp. k+1) connected for k malicious (resp. faulty) agents to
be generically detectable and identifiable by every well behaving agent. Third,
we quantify the effect of undetectable inputs on the final consensus value.
Fourth, we design three algorithms to detect and identify misbehaving agents.
The first and second algorithms apply fault detection techniques, and
afford complete detection and identification if global knowledge of the
network is available to each agent, albeit at a high computational cost. The third
algorithm is designed to exploit the presence in the network of weakly
interconnected subparts, and provides local detection and identification of
misbehaving agents whose behavior deviates by more than a threshold, which is
quantified in terms of the interconnection structure.
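A toy simulation makes the unknown-input model concrete: a single misbehaving agent that adds a constant bias to its own update steers the final consensus value away from the nominal average, which is exactly the effect the detection machinery above is designed to expose. A minimal sketch (the weight matrix and bias are illustrative, not from the paper):

```python
import numpy as np

# Doubly stochastic Metropolis weights for a 4-agent path graph.
W = np.array([[0.5,  0.5,  0.0,  0.0],
              [0.5,  0.25, 0.25, 0.0],
              [0.0,  0.25, 0.25, 0.5],
              [0.0,  0.0,  0.5,  0.5]])

def run(x0, steps, misbehaving=None, inject=0.0):
    """Linear consensus x(k+1) = W x(k); a misbehaving agent adds an
    unknown, unmeasurable input u(k) = inject to its own state each step."""
    x = np.array(x0, float)
    for _ in range(steps):
        x = W @ x
        if misbehaving is not None:
            x[misbehaving] += inject
    return x

x0 = [1.0, 2.0, 3.0, 4.0]
print(run(x0, 200))                              # -> approx [2.5 2.5 2.5 2.5]
print(run(x0, 200, misbehaving=0, inject=0.1))   # consensus value is biased upward
```

Since W is doubly stochastic, the honest network averages its initial states; the constant injection raises the network-wide sum by 0.1 per step, dragging every well-behaving agent along with it.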
Symmetric tensor decomposition
We present an algorithm for decomposing a symmetric tensor of dimension n
and order d as a sum of rank-1 symmetric tensors, extending the algorithm of
Sylvester devised in 1886 for binary forms. We recall the correspondence
between the decomposition of a homogeneous polynomial in n variables of total
degree d as a sum of powers of linear forms (Waring's problem), incidence
properties on secant varieties of the Veronese variety and the representation
of linear forms as a linear combination of evaluations at distinct points. Then
we reformulate Sylvester's approach from the dual point of view. Exploiting
this duality, we propose necessary and sufficient conditions for the existence
of such a decomposition of a given rank, using the properties of Hankel (and
quasi-Hankel) matrices, derived from multivariate polynomials and normal form
computations. This leads to the resolution of polynomial equations of small
degree in non-generic cases. We propose a new algorithm for symmetric tensor
decomposition, based on this characterization and on linear algebra
computations with these Hankel matrices. The impact of this contribution is
two-fold. First it permits an efficient computation of the decomposition of any
tensor of sub-generic rank, as opposed to widely used iterative algorithms with
unproved global convergence (e.g. Alternating Least Squares or gradient
descents). Second, it gives tools for understanding uniqueness conditions, and
for detecting the rank.
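In the binary case (n = 2), Sylvester's approach is short enough to sketch: the form's coefficients fill a Hankel (catalecticant) matrix, its rank bounds the symmetric rank, and the roots of a kernel vector locate the linear forms of the decomposition. A minimal numerical illustration (a simplified special case, not the paper's general quasi-Hankel algorithm):

```python
import numpy as np

def catalecticant(c):
    """Middle Hankel (catalecticant) matrix of a binary form of degree d,
    given c[i] = a_i / binom(d, i) where f = sum_i a_i x^(d-i) y^i."""
    d = len(c) - 1
    k = d // 2
    return np.array([[c[i + j] for j in range(d - k + 1)]
                     for i in range(k + 1)])

# f = (x + y)^3 + (x - y)^3 = 2 x^3 + 6 x y^2, so a = (2, 0, 6, 0)
# and c = (2/1, 0/3, 6/3, 0/1) = (2, 0, 2, 0).
H = catalecticant([2.0, 0.0, 2.0, 0.0])
rank = np.linalg.matrix_rank(H)      # lower bound on the symmetric rank

# A kernel vector of H encodes a polynomial whose roots t locate the
# linear forms x + t*y of the decomposition (Sylvester, binary case).
v = np.linalg.svd(H)[2][-1]          # basis of the 1-dim null space
roots = np.sort(np.roots(v[::-1]).real)
print(rank, roots)                   # -> 2 [-1. 1.]: forms x - y and x + y
```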
Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms
for evaluating multivariate polynomials, and more generally for accurate
numerical linear algebra with structured matrices. By "accurate" we mean that
the computed answer has relative error less than 1, i.e., has some correct
leading digits. We also address efficiency, by which we mean algorithms that
run in polynomial time in the size of the input. Our results will depend
strongly on the model of arithmetic: Most of our results will use the so-called
Traditional Model (TM). We give a set of necessary and sufficient conditions to
decide whether a high accuracy algorithm exists in the TM, and describe
progress toward a decision procedure that will take any problem and provide
either a high accuracy algorithm or a proof that none exists. When no accurate
algorithm exists in the TM, it is natural to extend the set of available
accurate operations by a library of additional operations, such as dot
products, or indeed any enumerable set, which could then be used to build
further accurate algorithms. We show how our accurate algorithms and decision
procedure for finding them extend to this case. Finally, we address other
models of arithmetic, and the relationship between (im)possibility in the TM
and (in)efficient algorithms operating on numbers represented as bit strings.
Comment: 49 pages, 6 figures, 1 table
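The meaning of "accurate" here (relative error less than 1, i.e., some correct leading digits) is easy to see on summation: naive evaluation can cancel catastrophically, while an exactly rounded algorithm recovers the answer. A small illustration chosen by analogy (math.fsum is Python's exactly rounded summation, not an algorithm from the survey):

```python
import math

# Summation where naive left-to-right evaluation has relative error 1
# (zero correct digits), while an accurate algorithm is exact.
data = [1.0, 1e100, 1.0, -1e100]

naive = sum(data)            # catastrophic cancellation: 0.0
accurate = math.fsum(data)   # exactly rounded summation: 2.0
print(naive, accurate)       # -> 0.0 2.0
```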