Extending substructure based iterative solvers to multiple load and repeated analyses
Direct solvers currently dominate commercial finite element structural software, but they do not scale well in the fine-granularity regime targeted by emerging parallel processors. Substructure-based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general-purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right-hand sides. Such systems arise, for example, in multiple-load static analyses and in implicit linear dynamics computations. Direct solvers are well suited for these problems because, after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. Iterative solvers, on the other hand, are in general ill suited for these problems because they often must restart from scratch for every different right-hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right-hand sides. Essentially, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward/backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers.
As an example, we show that for a linear structural dynamics problem with 11,640 degrees of freedom, every time step beyond time step 15 is solved in a single iteration and consumes 1.0 second on a 32-processor iPSC-860 system; for the same problem on the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
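The subspace-reuse idea behind this methodology can be sketched as follows (a minimal dense illustration with names of our own choosing, not the authors' parallel implementation): search directions from earlier solves are stored together with their images under K, each new right-hand side is first projected onto their span in the K-inner product, and the conjugate gradient iteration is then K-orthogonalized against them so that previous work is never repeated.

```python
import numpy as np

def seed_cg(K, b, seed_dirs=None, tol=1e-10, max_iter=500):
    """Conjugate gradients with reuse of K-orthogonal search directions.

    seed_dirs: list of (p, K @ p) pairs accumulated from previous solves.
    The initial guess is the energy-norm minimizer over span(seed_dirs),
    and new directions are kept K-orthogonal to the stored ones, so
    repeated or nearby right-hand sides converge in very few iterations.
    """
    dirs = list(seed_dirs or [])
    x = np.zeros(len(b))
    for p, Kp in dirs:                      # best start in the stored subspace
        x += (p @ b) / (p @ Kp) * p
    r = b - K @ x
    new_dirs, it, nb = [], 0, np.linalg.norm(b)
    while it < max_iter and np.linalg.norm(r) > tol * nb:
        p = r.copy()
        for q, Kq in dirs:                  # full K-orthogonalization
            p -= (Kq @ p) / (q @ Kq) * q
        Kp = K @ p
        alpha = (p @ r) / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        dirs.append((p, Kp))
        new_dirs.append((p, Kp))
        it += 1
    return x, it, new_dirs
```

A second solve whose right-hand side lies close to the span of the stored directions starts essentially at the solution, which is the behavior the abstract reports for the later time steps of the dynamics problem.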
Symmetric extension in two-way quantum key distribution
We introduce symmetric extensions of bipartite quantum states as a tool for
analyzing protocols that distill secret key from quantum correlations. Whether
the correlations are coming from a prepare-and-measure quantum key distribution
scheme or from an entanglement-based scheme, the protocol has to produce
effective states without a symmetric extension in order to succeed. By
formulating the symmetric extension problem as a semidefinite program, we solve
the problem for Bell-diagonal states. Applying this result to the six-state and
BB84 schemes, we show that for the entangled states that cannot be distilled by
current key distillation procedures, the failure can be understood in terms of
a failure to break a symmetric extension.
Comment: 11 pages, 2 figures; v2: published version, hyperlinked references
Conditioning moments of singular measures for entropy optimization. I
In order to process a potential moment sequence by the entropy optimization
method one has to be assured that the original measure is absolutely continuous
with respect to Lebesgue measure. We propose a non-linear exponential transform
of the moment sequence of any measure, including singular ones, so that the
entropy optimization method can still be used in the reconstruction or
approximation of the original. The Cauchy transform in one variable, used for
this very purpose in a classical context by A. A. Markov and his followers, is
replaced in higher dimensions by the Fantappiè transform. Several
algorithms for reconstruction from moments are sketched, while we intend to
provide the numerical experiments and computational aspects in a subsequent
article. The essentials of complex analysis, harmonic analysis, and entropy
optimization are recalled in some detail, with the goal of making the main
results more accessible to non-expert readers.
Keywords: Fantappiè transform; entropy optimization; moment problem; tube domain; exponential transform
Comment: Submitted to Indagationes Mathematicae, I. Gohberg Memorial issue
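As a one-variable illustration of the classical Cauchy-transform step the abstract refers to (a hedged sketch of our own; the multivariate Fantappiè analogue is not attempted here), even a purely singular measure such as a point mass δ_a, whose moment sequence cannot be fed directly to entropy optimization, yields a transform that is a smooth function away from the support:

```python
import numpy as np

def cauchy_from_moments(moments, z):
    """Evaluate the truncated Cauchy transform C(z) = sum_k m_k / z^(k+1)
    of a measure from its moment sequence (valid for |z| larger than the
    radius of the support)."""
    k = np.arange(len(moments))
    return np.sum(moments / z ** (k + 1))

a = 0.3                                  # point mass delta_a has m_k = a^k
moments = a ** np.arange(60)
value = cauchy_from_moments(moments, 2.0)
exact = 1.0 / (2.0 - 0.3)                # closed form for delta_a: 1/(z - a)
```

The transformed data, being values of an analytic function rather than moments of a singular measure, can then be processed by density-based reconstruction methods.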
Social Heteronomy and Contrasting Economic Epistemology: A Mathematical Approach
An original socio-scientific theory is developed out of a contrast with the rationalist paradigm. This new worldview arises from the epistemology of unity of knowledge and its functional ontological implication of unity of the world-system. The Kantian epistemological notion of heteronomy is shown to be one of the permanent socio-scientific problems of rationalism. The methodology is topological and mathematical in nature, by virtue of the complex problem inherent in the criticism of Kantian heteronomy and rationalism. The emergence of the new epistemological worldview of unity of knowledge and the world-system is formalized. Several theoretical constructs and applications of the episteme of unity of knowledge are pointed out in diverse fields.
Structural History of Human SRGAP2 Proteins
This is the author accepted manuscript; the final version is available from Oxford University Press via the DOI in this record. We thank Adam Frost and Eckart Gundelfinger for valuable advice on the manuscript; Michaela Vogel, Lada Gevorkyan-Airapetov, Rinat Vasserman and Tomer Orevi for technical assistance; and Hadar Amartely and Mario Lebendiker for help with SEC-MALS experiments and analysis. Thanks to the staff of beamlines ID14, ID23, and ID29 of ESRF, and the staff of BESSY II BL14.1. This work was supported by funds from the ISF (Grants no. 182/10 and 1425/15 to Y.O.) and the BSF (Grant no. 2013310 to Y.O. and Adam Frost), as well as by DFG grants QU116/6-2 to B.Q. and KE685/4-2 to M.M.K.
A CGE-Analysis of Energy Policies Considering Labor Market Imperfections and Technology Specifications
The paper establishes a CGE/MPSGE model for evaluating energy policy measures, with emphasis on their employment impacts. It specifies a dual labor market with respect to qualification, two different mechanisms for skill-specific unemployment, and a technologically detailed description of electricity generation. Non-clearing of the dual labor market is modeled via minimum-wage constraints and via wage curves. As an example, the model is applied to the analysis of capital subsidies for technologies using renewable energy sources. Quantitative results highlight that subsidies on these technologies do not automatically lead to a significant reduction in emissions. Moreover, if emission reductions are achieved, these might actually result in part from negative growth effects induced by the promotion of cost-inefficient technologies. Inefficiencies in the energy system increase unemployment for both skilled and unskilled labor.
Keywords: CGE; energy economic analysis; employment impact; choice of technology
Symmetry as Bias: Rediscovering Special Relativity
This paper describes a rational reconstruction of Einstein's discovery of special relativity, validated through an implementation: the Erlanger program. Einstein's discovery of special relativity revolutionized both the content of physics and the research strategy used by theoretical physicists. This research strategy entails a mutual bootstrapping process between a hypothesis space for biases, defined through different postulated symmetries of the universe, and a hypothesis space for physical theories. The invariance principle mutually constrains these two spaces: it enables detecting when an evolving physical theory becomes inconsistent with its bias, and also when the biases for theories describing different phenomena are inconsistent. Structural properties of the invariance principle facilitate generating a new bias when an inconsistency is detected. After a new bias is generated, this principle facilitates reformulating the old, inconsistent theory by treating the latter as a limiting approximation. The structural properties of the invariance principle can be suitably generalized to other types of biases to enable primal-dual learning.
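The role of the invariance principle can be illustrated with a minimal check (a toy example of our own, not the Erlanger program itself): the spacetime interval is preserved by a Lorentz boost but not by a Galilean one, which is exactly the kind of inconsistency between a theory and a postulated symmetry that the reconstruction detects.

```python
import numpy as np

C = 1.0  # speed of light in natural units (illustrative choice)

def lorentz_boost(t, x, v):
    """Boost (t, x) by velocity v under special relativity."""
    g = 1.0 / np.sqrt(1.0 - (v / C) ** 2)
    return g * (t - v * x / C**2), g * (x - v * t)

def galilean_boost(t, x, v):
    """Boost (t, x) by velocity v under classical kinematics."""
    return t, x - v * t

def interval(t, x):
    """Spacetime interval s^2 = c^2 t^2 - x^2, the invariant of the bias."""
    return C**2 * t**2 - x**2

t, x, v = 2.0, 1.0, 0.6
tL, xL = lorentz_boost(t, x, v)   # preserves interval(t, x)
tG, xG = galilean_boost(t, x, v)  # does not: the bias is violated
```

Detecting that a candidate symmetry fails to preserve the theory's invariant is the signal, in the paper's terms, for generating a new bias.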
Reduced Order Modeling based Inexact FETI-DP solver for lattice structures
This paper addresses the overwhelming computational resources needed with
standard numerical approaches to simulate architected materials. These multiscale heterogeneous lattice structures have attracted intense interest alongside advances in additive manufacturing, as they offer, among many other properties, excellent stiffness-to-weight ratios. We develop here a dedicated
HPC solver that benefits from the specific nature of the underlying problem in
order to drastically reduce the computational costs (memory and time) for the
full fine-scale analysis of lattice structures. Our purpose is to take
advantage of the natural domain decomposition into cells and, even more
importantly, of the geometrical and mechanical similarities among cells. Our
solver consists of a so-called inexact FETI-DP method in which the local, cell-wise operators and solutions are approximated with reduced order modeling techniques. Instead of considering every cell independently, we end up with only a few principal local problems to solve, and we use the corresponding principal cell-wise operators to approximate all the others. The result is a scalable algorithm that saves numerous local factorizations. Our solver is applied to the isogeometric analysis of lattices built by spline composition, which offers the opportunity to compute the reduced basis with macro-scale data, thereby making our method both multiscale and matrix-free. The solver is tested on various 2D and 3D analyses. It shows major gains with respect to black-box solvers; in particular, problems with several million degrees of freedom can be solved on a simple computer within a few minutes.
Comment: 30 pages, 12 figures, 2 tables
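The core saving described above, reusing a few principal cell-wise factorizations across nearly identical cells, can be sketched as follows (a dense toy model with names of our own choosing; the actual solver applies this idea to FETI-DP local problems with ROM bases): one representative operator is factored once and serves as an inexact local solver for every cell in its cluster.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

B = rng.standard_normal((n, n))
K_ref = B @ B.T + n * np.eye(n)          # representative (principal) cell operator

def perturbed_cell(eps=1e-3):
    """SPD operator of one lattice cell: a small perturbation of the
    representative, mimicking geometrically similar cells."""
    E = rng.standard_normal((n, n))
    return K_ref + eps * (E + E.T) + eps * n * np.eye(n)

cells = [perturbed_cell() for _ in range(50)]

# Factor the representative ONCE and reuse it for every cell:
# 1 factorization instead of 50 is the saving the method exploits.
L = np.linalg.cholesky(K_ref)
def approx_solve(b):
    return np.linalg.solve(L.T, np.linalg.solve(L, b))

residuals = []
for K in cells:
    b = rng.standard_normal(n)
    x = np.zeros(n)
    for _ in range(30):                  # preconditioned Richardson iteration
        x += approx_solve(b - K @ x)
    residuals.append(np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```

Because each cell operator is close to the representative, the inexact local solves converge rapidly, so the cost of the many local problems collapses to that of a handful of principal factorizations.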