A family of higher-order single layer plate models meeting Cz0-requirements for arbitrary laminates
In the framework of displacement-based equivalent single layer (ESL) plate
theories for laminates, this paper presents a generic and automatic method to
extend a basis higher-order shear deformation theory (HSDT; polynomial,
trigonometric, hyperbolic, ...) to a multilayer higher-order shear
deformation theory. The key idea is to enhance the description of the
cross-sectional warping: the odd high-order function of the basis model
is replaced by one odd and one even high-order function, and the
characteristic zig-zag behaviour is included by means of piecewise linear
functions. In order to account for arbitrary lamination schemes, four such
piecewise continuous functions are considered. The coefficients of these four
warping functions are determined in such a manner that the interlaminar
continuity as well as the homogeneity conditions at the plate's top and bottom
surfaces are a priori exactly verified by the transverse shear stress field.
These ESL models all have the same number of degrees of freedom (DOF) as the
original basis HSDT. Numerical assessments are presented by referring to a
strong-form Navier-type solution for laminates with arbitrary stacking
sequences as well as for a sandwich plate. In all practically relevant
configurations for which laminated plate models are usually applied, the
results obtained in terms of deflection, fundamental frequency and local
stress response show that the proposed zig-zag models give better results than
the basis models from which they are derived.
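The zig-zag enrichment referred to above can be illustrated with the classical Murakami zig-zag function, a piecewise linear function of the thickness coordinate whose slope alternates in sign from layer to layer. This is a minimal sketch of that standard building block, not the paper's four warping functions (which generalize it); the interface coordinates are made up for illustration.

```python
def murakami_zigzag(z, interfaces):
    """Murakami's zig-zag function: equal to (-1)^k * zeta_k in layer k,
    where zeta_k runs from -1 to +1 across the layer, so the function is
    continuous at interfaces while its slope alternates in sign."""
    for k in range(len(interfaces) - 1):
        zb, zt = interfaces[k], interfaces[k + 1]
        if zb <= z <= zt:
            zeta = 2.0 * (z - 0.5 * (zb + zt)) / (zt - zb)
            return (-1) ** k * zeta
    raise ValueError("z lies outside the laminate")

# a made-up 3-layer laminate, interface coordinates in thickness units
interfaces = [-0.5, -0.1, 0.2, 0.5]
```

At every interlaminar interface the function takes the value ±1 from both adjacent layers, which is exactly the slope-discontinuous (but displacement-continuous) behaviour the abstract's piecewise linear functions are designed to capture.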
Longitudinal response functions of 3H and 3He
Trinucleon longitudinal response functions R_L(q,omega) are calculated for q
values up to 500 MeV/c. These are the first calculations beyond the threshold
region in which both three-nucleon (3N) and Coulomb forces are fully included.
We employ two realistic NN potentials (configuration space BonnA, AV18) and two
3N potentials (UrbanaIX, Tucson-Melbourne). Complete final state interactions
are taken into account via the Lorentz integral transform technique. We study
relativistic corrections arising from first order corrections to the nuclear
charge operator. In addition the reference frame dependence due to our
non-relativistic framework is investigated. For q ≤ 350 MeV/c we find
a 3N force effect between 5 and 15%, while the dependence on other theoretical
ingredients is small. At q ≥ 400 MeV/c relativistic corrections to
the charge operator and effects of frame dependence, especially for large
omega, become more important. In comparison with experimental data there is
generally a rather good agreement. Exceptions are the responses at excitation
energies close to threshold, where there exists a large discrepancy with
experiment at higher q. Concerning the effect of 3N forces there are a few
cases, in particular for the R_L of 3He, where one finds a much improved
agreement with experiment if 3N forces are included.
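The Lorentz integral transform technique mentioned above avoids computing continuum final states explicitly: the response is obtained by inverting an integral transform with a Lorentzian kernel (schematically, hedging on normalisation conventions),

```latex
L(\sigma_R,\sigma_I) \;=\; \int_{\omega_{\mathrm{th}}}^{\infty} d\omega\,
  \frac{R_L(q,\omega)}{(\omega-\sigma_R)^2+\sigma_I^2},
```

where the transform L itself can be evaluated from a bound-state-like equation with a source term, so that only bound-state methods are needed even though R_L describes breakup channels.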
Higher-order complexity in analysis
We present ongoing work on the development of complexity theory in analysis. Kawamura and Cook recently showed how to carry out complexity theory on the space C[0,1] of continuous real functions on the unit interval. This is done, as in computable analysis, by representing objects by first-order functions (from finite words to finite words, say) and by measuring the complexity of a second-order functional in terms of second-order polynomials. We prove that this framework cannot be directly applied to spaces that are not σ-compact. However, representing objects by higher-order functions (over finite words, say) makes it possible to carry out complexity theory on such spaces; for this purpose we develop the complexity of higher-order functionals. At orders above 3, our class of polynomial-time computable functionals strictly contains the class BFF of Buss, Cook and Urquhart.
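For orientation, the second-order polynomials of Kawamura and Cook that the abstract refers to can be sketched as follows (a standard definition, stated informally here):

```latex
% Second-order polynomials in a number variable n and a first-order
% variable L: the smallest class containing n and the constants, closed under
P,\;Q \;::=\; c \;\mid\; n \;\mid\; P+Q \;\mid\; P\cdot Q \;\mid\; L(P),
\qquad \text{e.g. } P(L,n) = L\bigl(L(n)\cdot n\bigr) + L(n) + 2.
```

A functional is then polynomial-time computable if an oracle machine realizes it in time bounded by such a polynomial, where L is instantiated with a size bound on the oracle; the higher-order generalization developed in the paper extends this measuring scheme beyond order 2.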
Safe Transferable Regions
There is increasing interest in alternative memory management schemes that seek to combine the convenience of garbage collection and the performance of manual memory management in a single language framework. Unfortunately, ensuring safety in the presence of manual memory management remains as great a challenge as ever. In this paper, we present a C#-like object-oriented language called Broom that uses a combination of a region type system and lightweight runtime checks to enforce safety in the presence of user-managed memory regions called transferable regions. Unsafe transferable regions have previously been used to contain the latency due to unbounded GC pauses. Our approach shows that it is possible to restore safety without compromising the benefits of transferable regions. We prove the type safety of Broom in a formal framework that includes its C#-inspired features, such as higher-order functions and generics. We complement our type system with a type inference algorithm, which eliminates the need for programmers to write region annotations on types. The inference algorithm has been proven sound and relatively complete. We describe a prototype implementation of the inference algorithm and our experience of using it to enforce memory safety in dataflow programs.
Data-driven Structured Realization
We present a framework for constructing structured realizations of linear dynamical systems whose transfer functions are built from prescribed functions that specify the surmised structure of the model. Our construction is data-driven in the sense that an interpolant is derived entirely from measurements of a transfer function. Our approach extends the Loewner realization framework to more general system structures, including second-order (and higher) systems as well as systems with internal delays. Numerical examples demonstrate the advantages of this approach.
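The Loewner realization framework that the abstract extends can be sketched in its classical, unstructured form (a minimal illustration with a made-up transfer function, not the paper's structured construction): from left samples (mu_j, v_j) and right samples (lam_i, w_i) of H one builds the Loewner and shifted Loewner matrices entrywise and reads off a state-space model directly from the data.

```python
import numpy as np

def loewner_model(mu, v, lam, w):
    """Classical Loewner realization from transfer-function measurements:
    left samples v_j = H(mu_j), right samples w_i = H(lam_i)."""
    D = mu[:, None] - lam[None, :]
    L = (v[:, None] - w[None, :]) / D                                # Loewner matrix
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D  # shifted Loewner
    V, W = v.reshape(-1, 1), w.reshape(1, -1)
    # realized transfer function: H(s) = W (Ls - s L)^{-1} V
    return lambda s: (W @ np.linalg.solve(Ls - s * L, V))[0, 0]

# toy "measurements" of a hidden order-2 system (made up for illustration)
H_true = lambda s: 1.0 / (s + 1.0) + 2.0 / (s + 3.0)
mu = np.array([1.0, 2.0]); v = H_true(mu)    # left data
lam = np.array([4.0, 5.0]); w = H_true(lam)  # right data
H = loewner_model(mu, v, lam, w)
```

With as many samples as the McMillan degree (here, four samples for an order-2 system) and a regular pencil, the realized model reproduces H exactly; the paper's contribution is to carry this interpolatory idea over to prescribed structured forms.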
Über die Implementierung der verallgemeinerten Finite-Element-Methode (On the Implementation of the Generalized Finite Element Method)
The Generalized Finite Element Method (GFEM) combines desirable features of the standard Finite Element Method and the meshless methods. The key difference of the GFEM compared to the traditional FEM is the construction of the ansatz space. Each node of the finite element mesh carries a number of ansatz functions, expressed in terms of the global coordinate system. Those ansatz functions are multiplied by a partition of unity and serve as element ansatz functions in the patch constituted by the elements incident at the node. Using this technique to create the ansatz space allows for arbitrary ansatz functions. C0-continuity is enforced by construction. The ansatz is enriched using analytical functions or numerical approximations derived from side calculations containing a priori knowledge of the solution close to singularities. The performance of GFEM with a higher order of polynomial ansatz functions is compared to traditional h-, p- and hp-extensions of the FEM. Most of the efficient solvers, e.g. multigrid or CG, cannot be applied to the semi-definite systems resulting from a GFEM discretization. Several solving strategies are evaluated for higher order GFEM. The work concludes with a description of the implementation of the GFEM with a flexible object-oriented framework using C++.
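The partition-of-unity construction described above is easy to see in one dimension (a minimal sketch with a made-up mesh and enrichment, not the thesis implementation): the linear hat functions sum to one everywhere, so multiplying any enrichment function by them and summing over the nodes reproduces that function exactly, while keeping C0-continuity by construction.

```python
import numpy as np

nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # a small 1D mesh

def hat(i, x):
    """Piecewise-linear shape function of node i; the hats form a partition of unity."""
    e = np.zeros(nodes.size)
    e[i] = 1.0
    return np.interp(x, nodes, e)

x = np.linspace(0.0, 1.0, 101)
pu = sum(hat(i, x) for i in range(nodes.size))          # identically 1 on [0, 1]

psi = x**2                                              # an enrichment function
gfem = sum(hat(i, x) * psi for i in range(nodes.size))  # PU reproduces psi exactly
```

In an actual GFEM discretization each node carries its own local enrichment coefficients instead of the single global function used here, which is what makes arbitrary ansatz functions possible.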
Higher-Order Calculations in Quantum Chromodynamics
In this thesis, several techniques and advances in higher-order Quantum Chromodynamics (QCD) calculations are presented. There is a particular focus on 2-loop 5-point massless QCD amplitudes, which are currently at the frontier of higher-order QCD calculations.
Firstly, we study the Brodsky-Lepage-Mackenzie/Principle of Maximum Conformality (BLM/PMC) method for setting the renormalisation scale, Ό_R, in higher-order QCD calculations. We identify three ambiguities in the BLM/PMC procedure and study their numerical impact using the example of the total cross-section for top-pair production at Next-to-Next-to-Leading Order (NNLO) in QCD. The numerical impact of these ambiguities on the BLM/PMC prediction for the cross-section is found to be comparable to the impact of the choice of Ό_R in the conventional scale-setting approach.
Secondly, we introduce a novel strategy for solving integration-by-parts (IBP) identities, which are widely used in the computation of multi-loop QCD amplitudes. We implement the strategy in an efficient C++ program and hence solve the IBP identities needed for the computation of any planar 2-loop 5-point massless amplitude in QCD. We also derive representative results for the most complicated non-planar family of integrals.
Thirdly, we present an automated computational framework to reduce 2-loop 5-point massless amplitudes to a basis of pentagon functions. It uses finite-field evaluation and interpolation techniques, as well as the aforementioned analytical IBP results. We use this to calculate the leading-colour 2-loop QCD amplitude for qq̄ → γγγ and then compute the NNLO QCD corrections to 3-photon production at the LHC. This is the first NNLO QCD calculation for a 2 → 3 process. We compare our predictions with the available 8 TeV measurements from the ATLAS collaboration and we find that the inclusion of the NNLO corrections eliminates the existing significant discrepancy with respect to NLO QCD predictions, paving the way for precision phenomenology in this process.
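The finite-field evaluation and interpolation idea mentioned above can be illustrated at its simplest: evaluate an expression at integer points modulo a large prime, then reconstruct its coefficients exactly by Lagrange interpolation over GF(p). This is a toy sketch with a made-up quadratic, not the thesis framework (which handles large multivariate rational functions).

```python
# Toy illustration of finite-field interpolation: recover the coefficients
# of an "unknown" polynomial from its values modulo a large prime.
p = 2**31 - 1  # a Mersenne prime

def poly_mul(a, b, p):
    """Multiply two coefficient lists (lowest degree first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def interp_mod_p(xs, ys, p):
    """Lagrange interpolation over GF(p); returns coefficients, lowest degree first."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num, denom = [1], 1
        for j in range(n):
            if j != i:
                num = poly_mul(num, [(-xs[j]) % p, 1], p)  # factor (x - x_j)
                denom = denom * (xs[i] - xs[j]) % p
        scale = ys[i] * pow(denom, -1, p) % p              # divide by prod(x_i - x_j)
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

f = lambda x: (3 * x * x + 5 * x + 7) % p  # the "unknown" quadratic, sampled mod p
xs = [1, 2, 3]
coeffs = interp_mod_p(xs, [f(x) for x in xs], p)  # recovers [7, 5, 3]
```

Working modulo a machine-word-sized prime keeps every intermediate value small and exact, which is why such techniques scale to the enormous rational expressions appearing in 2-loop 5-point amplitude reductions.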