Non Abelian TQFT and scattering of self dual field configuration
A non-abelian topological quantum field theory describing the scattering of
self-dual field configurations over topologically non-trivial Riemann surfaces,
arising from the reduction of 4-dim self-dual Yang-Mills fields, is introduced.
It is shown that the phase space of the theory can be exactly quantized in
terms of the space of holomorphic structures over stable vector bundles of
degree zero over Riemann surfaces. The Dirac monopoles are particular static
solutions of the field equations. The theory's relation to topological gravity is
discussed. Comment: 13 pages, LaTeX
On Auxiliary Fields in BF Theories
We discuss the structure of auxiliary fields for non-Abelian BF theories in
arbitrary dimensions. By modifying the classical BRST operator, we build the
on-shell invariant complete quantum action. To this end, we introduce the
auxiliary fields which close the BRST algebra and lead to the invariant
extension of the classical action. Comment: 7 pages, minor changes, typos in equations corrected and acknowledgements added
Bringing the power of dynamic languages to hardware control systems
Hardware control systems are normally programmed in high-performance languages like C or C++ and, increasingly, Java. All these languages are strongly typed and compiled, which usually brings good performance, but at the cost of a longer development and testing cycle and the need for more programming expertise. Dynamic languages, long thought to be too slow and not powerful enough for control purposes, are, thanks to modern powerful computers and advanced implementation techniques, fast enough for many of these tasks. We present examples from the LHCb Experiment Control System (ECS), which is based on commercial SCADA software. We have successfully used Python to integrate hardware devices into the ECS. We present the necessary lightweight middleware we have developed, including examples for controlling hardware and software devices. We also discuss the development cycle and the tools used, and compare the effort with traditional solutions.
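The "lightweight middleware" idea of wrapping hardware devices behind a uniform dynamic-language interface can be sketched in a few lines of Python. The class and method names below are hypothetical illustrations, not the actual LHCb ECS or SCADA API:

```python
# Minimal sketch of a dynamic-language device-control layer.
# Device, HighVoltageBoard, write/read/ramp are illustrative names,
# not the real LHCb ECS middleware.

class Device:
    """Base class exposing a uniform read/write interface for a device."""

    def __init__(self, name):
        self.name = name
        self._registers = {}          # register name -> current value

    def write(self, register, value):
        self._registers[register] = value
        return value

    def read(self, register):
        return self._registers.get(register)


class HighVoltageBoard(Device):
    """Example software model of a HV board with a voltage setpoint."""

    def ramp(self, target, step=10):
        """Ramp the setpoint toward `target` in increments of `step`."""
        current = self.read("setpoint") or 0
        while current < target:
            current = min(current + step, target)
            self.write("setpoint", current)
        return current


board = HighVoltageBoard("HV01")
final = board.ramp(55, step=20)       # 0 -> 20 -> 40 -> 55
```

A hardware-backed subclass would override `read`/`write` with bus transactions while scripts and the SCADA integration keep using the same interface, which is where the shorter development cycle of a dynamic language pays off.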
Dimensional hyper-reduction of nonlinear finite element models via empirical cubature
We present a general framework for the dimensional reduction, in terms of the number of degrees of freedom as well as the number of integration points ("hyper-reduction"), of nonlinear parameterized finite element (FE) models. The reduction process is divided into two sequential stages. The first stage consists of a common Galerkin projection onto a reduced-order space, as well as the condensation of boundary conditions and external forces. For the second stage (reduction in the number of integration points), we present a novel cubature scheme that efficiently determines optimal points and associated positive weights so that the error in integrating the reduced internal forces is minimized. The distinguishing features of the proposed method are: (1) the minimization problem is posed in terms of orthogonal basis vectors (obtained via a partitioned Singular Value Decomposition) rather than in terms of snapshots of the integrand; (2) the volume of the domain is exactly integrated; (3) the selection algorithm need not solve a nonnegative least-squares problem in every iteration to enforce the positiveness of the weights. Furthermore, we show that the proposed method converges to the absolute minimum (zero integration error) when the number of selected points equals the number of internal force modes included in the objective function. We illustrate this model reduction methodology with two nonlinear structural examples (quasi-static bending and resonant vibration of elastoplastic composite plates). In both examples, the number of integration points is reduced by three orders of magnitude (with respect to FE analyses) without significantly sacrificing accuracy.
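The flavor of such an empirical cubature scheme, selecting a few points with positive weights so that a set of integrand modes is integrated with minimal error, can be sketched generically. This is a naive illustration, not the authors' algorithm: unlike their method, it does solve a nonnegative least-squares problem at every iteration, and the residual-alignment selection rule is an assumption:

```python
import numpy as np
from scipy.optimize import nnls

def empirical_cubature(J, b, tol=1e-10):
    """Greedy selection of integration points with positive weights.

    J : (m, n) array; row i holds integrand mode i evaluated at the n
        candidate (e.g. Gauss) points.
    b : (m,) exact integrals of the m modes.
    Returns (indices, weights) such that J[:, indices] @ weights ~ b.
    Illustrative sketch only, not the paper's scheme.
    """
    m, n = J.shape
    selected, w = [], np.zeros(0)
    r = b.astype(float).copy()
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and len(selected) < m:
        scores = J.T @ r                 # alignment with residual
        scores[selected] = -np.inf       # never pick a point twice
        selected.append(int(np.argmax(scores)))
        # nonnegative least squares on the selected columns
        w, _ = nnls(J[:, selected], b)
        r = b - J[:, selected] @ w
    return np.array(selected), w
```

Consistent with the zero-error property quoted above, integrating the three monomial modes 1, x, x^2 on [0, 1] drives the residual to machine zero with three selected points (the sketch in fact recovers Simpson-type weights).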
High-performance model reduction procedures in multiscale simulations
Technological progress and the discovery and mastery of increasingly sophisticated structural materials have been inexorably tied together since the dawn of history. In the present era, the so-called Space Age, the prevailing trend is to design and create new materials, or to improve existing ones, by meticulously altering and controlling structural features that span all types of length scales: the ultimate aim is to achieve macroscopic properties (yield strength, ductility, toughness, fatigue limit, etc.) tailored to given practical applications. Research efforts in this respect range in complexity from the creation of structures at the scale of single atoms and molecules, the realm of nanotechnology, to the development, more mundane to the average civil or mechanical engineer, of structural materials by changing the composition, distribution, size and topology of their constituents at the microscopic/mesoscopic level (composite materials and porous metals, for instance).
Nanotechnology measurements of the Young's modulus of polymeric materials
Making use of atomic force microscopy (AFM), known as the state-of-the-art technology for handling matter on an atomic and molecular scale, this paper describes the use of a nanotechnology technique for characterizing the properties of polymeric materials. AFM measurements on two materials (polyamide and polystyrene) allowed us to compare the performance of two distinct multi-asperity adhesion models, based on the JKR (Johnson-Kendall-Roberts) and DMT (Derjaguin-Muller-Toporov) theories, in assessing the Young's modulus (modulus of elasticity) of the investigated materials. Experimental results confirm that the JKR model, processed through a MATLAB algorithm, produces more reliable Young's modulus results than the DMT model built into the AFM software.
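To make the DMT-versus-JKR comparison concrete, the standard DMT force-indentation relation F = (4/3) E* sqrt(R) d^(3/2) - F_adh can be fitted to a force curve to extract the reduced modulus E*. This is a minimal generic sketch, not the paper's MATLAB algorithm; real AFM analysis additionally requires tip-radius calibration and contact-point detection:

```python
import numpy as np

def dmt_modulus(force, indentation, R, F_adh):
    """Least-squares fit of the reduced modulus E* from the DMT model
        F = (4/3) * E* * sqrt(R) * d**1.5 - F_adh
    force, indentation : arrays from the AFM approach curve (contact part)
    R : tip radius, F_adh : adhesion (pull-off) force magnitude.
    Note the models differ in pull-off force: JKR gives 1.5*pi*R*w,
    DMT gives 2*pi*R*w (w = work of adhesion).
    Illustrative sketch, not the authors' code."""
    x = np.sqrt(R) * indentation**1.5      # 3/2-power predictor
    y = np.asarray(force) + F_adh          # adhesion-corrected force
    slope = np.dot(x, y) / np.dot(x, x)    # slope = (4/3) * E*
    return 0.75 * slope
```

On synthetic DMT data the fit recovers the modulus used to generate it; on real data the JKR counterpart requires solving the implicit JKR contact-radius equation instead of a direct power-law fit.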
Continuum approach to computational multiscale modeling of propagating fracture
A new approach to two-scale modeling of propagating fracture, based on computational homogenization (FE2), is presented. The specific features of the approach are: a) a continuum setting for representing the fracture at both scales, based on the Continuum Strong Discontinuity Approach (CSDA), and b) the use, for the considered non-smooth (discontinuous) problem, of the same computational homogenization framework as for classical smooth cases. As a key issue, the approach retrieves a characteristic length computed at the lower scale, which is exported to the upper scale and used there as a regularization parameter for a propagating strong-discontinuity kinematics. This guarantees the correct transfer of fracture energy between scales and the proper dissipation at the upper scale. Representative simulations show that the resulting formulation provides consistent results, which are objective with respect to both the size and bias of the upper-scale mesh and the size of the lower-scale RVE/failure cell, as well as the capability to model propagating cracks at the upper scale in combination with crack-path-field and strain injection techniques. The continuum character of the approach confers on the formulation a minimally invasive character with respect to standard procedures for multiscale computational homogenization.
Global sensitivity analysis of leaf-canopy-atmosphere RTMs: Implications for biophysical variables retrieval from top-of-atmosphere radiance data
Knowledge of the key variables driving the top-of-atmosphere (TOA) radiance over a vegetated surface is an important step toward deriving biophysical variables from TOA radiance data, e.g., as observed by an optical satellite. Coupled leaf-canopy-atmosphere Radiative Transfer Models (RTMs) allow linking vegetation variables directly to the measured at-sensor TOA radiance. Global Sensitivity Analysis (GSA) of RTMs enables the computation of the total contribution of each input variable to the output variance. We determined the impact of the leaf, canopy and atmosphere variables on TOA radiance using GSA to gain insight into retrievable variables. The leaf and canopy RTM PROSAIL was coupled with the atmospheric RTM MODTRAN5. Because of MODTRAN's computational burden and GSA's demand for many simulations, we first developed a surrogate statistical learning model, i.e., an emulator, that approximates RTM outputs through a machine learning algorithm with low computation time. A Gaussian process regression (GPR) emulator was used to reproduce lookup tables of TOA radiance as a function of 12 input variables with relative errors of 2.4%. GSA total sensitivity results quantified the driving variables of emulated TOA radiance along the 400-2500 nm spectral range at 15 cm-1 (between 0.3 and 9 nm); overall, the vegetation variables play a more dominant role than the atmospheric variables. This suggests the possibility of retrieving biophysical variables directly from at-sensor TOA radiance data. Particularly promising are leaf chlorophyll content, leaf water thickness and leaf area index, as these variables are the most important drivers of TOA radiance outside the water absorption regions. A software framework was developed to facilitate the development of retrieval models from at-sensor TOA radiance data.
As a proof of concept, maps of these biophysical variables were generated for both TOA (L1C) and bottom-of-atmosphere (L2A) Sentinel-2 data by means of a hybrid retrieval scheme, i.e., training GPR retrieval algorithms on the RTM simulations. The maps obtained from L1C and L2A data are consistent, suggesting that vegetation properties can be retrieved directly from TOA radiance data given a cloud-free sky, thus without the need for atmospheric correction.
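The "total contribution of each input variable to the output variance" is the total-effect Sobol index, which is cheap to estimate by Monte Carlo once a fast emulator is available. A generic sketch follows, using the standard Jansen estimator; `f` stands for any fast model, e.g. a trained GPR emulator's predict function, and is not the MODTRAN5/PROSAIL pipeline itself:

```python
import numpy as np

def total_sobol(f, d, n=4096, seed=0):
    """Monte Carlo estimate of the total-effect Sobol indices S_Ti.

    f    : vectorized model mapping an (n, d) array of inputs in [0,1]^d
           to an (n,) array of outputs (e.g. emulated TOA radiance in
           one band); in the GSA described above, f would be the emulator.
    d    : number of input variables.
    Uses the Jansen estimator: S_Ti = E[(f(A) - f(A_B^i))^2] / (2 Var f),
    where A_B^i is A with column i swapped for the matching column of B.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    var = fA.var()
    st = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only variable i
        st[i] = 0.5 * np.mean((fA - f(ABi)) ** 2) / var
    return st
```

An index near zero flags a variable that barely drives the output (hence is hard to retrieve from it), while a large index marks a dominant driver such as, per the abstract, leaf chlorophyll content outside the water absorption regions.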
Reduced order modeling strategies for computational multiscale fracture
The paper proposes new computational strategies for affordably solving multiscale fracture problems through an FE2 approach. To take into account the mechanical effects induced by fracture at the microstructure level, the Representative Volume Element (RVE), assumed to consist of an elastic matrix and inclusions, is endowed with a large set of cohesive softening bands providing a good representation of the possible microstructural crack paths. The RVE response is then homogenized in accordance with a model previously developed by the authors and upscaled to the macro-scale level as a continuum stress-strain constitutive equation, which is then used in a conventional finite element model of propagating fracture. For reduced order modeling (ROM) purposes, the RVE boundary value problem is first formulated in terms of displacement fluctuations and used, via the Proper Orthogonal Decomposition (POD), to find a low-dimensional space for solving the reduced problem. A domain separation strategy is proposed as a first technique for model order reduction: unconventionally, the low-dimensional space is spanned by a basis in terms of fluctuating strains, as primitive kinematic variables, instead of the conventional formulation in terms of displacement fluctuations. The RVE spatial domain is then decomposed into a regular domain (made of the matrix and the inclusions) and a singular domain (constituted by the cohesive bands); the required RVE boundary conditions are rephrased in terms of strains and imposed via Lagrange multipliers in the corresponding variational problem. A specific low-dimensional strain basis is then derived, independently for each domain, via the POD of the corresponding strain snapshots. The next step consists of developing a hyper-reduced model (HPROM).
It is based on a second proposed technique, the Reduced Optimal Quadrature (ROQ), which, again unconventionally, is determined by optimizing the numerical integration of the primitive saddle-point problem arising from the RVE problem, rather than of its derived variational equations, and which substitutes the conventional Gauss quadrature. The ROQ uses a very small number of optimally placed sampling points, whose weights and placements are evaluated through a greedy algorithm. The resulting low-dimensional, reduced-quadrature variational problem translates into very significant savings in computational cost and high computational speed-ups. Particular attention is additionally given to numerical tests and performance evaluations of the new hyper-reduced methodology, via "a priori" and "a posteriori" error assessments. Moreover, to validate the present techniques, a real structural problem exhibiting propagating fracture at two scales is modeled on the basis of the strain-injection-based multiscale approach previously developed by the authors. The performance of the proposed strategy, in terms of speed-up vs. error, is thoroughly analyzed and reported.
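The POD step common to both techniques, extracting a low-dimensional orthonormal basis from snapshots (here, strain fluctuations per subdomain), reduces to a truncated SVD. A generic sketch, assuming a simple energy-based truncation criterion rather than the authors' specific one:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Orthonormal POD basis capturing a (1 - tol) fraction of the
    snapshot 'energy' (sum of squared singular values).

    snapshots : (n_dof, n_snap) matrix whose columns are strain (or
                displacement) fluctuation snapshots from RVE solves.
    Generic sketch of the POD step; the truncation rule is an assumption.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 1.0 - tol)) + 1   # smallest adequate rank
    return U[:, :k]
```

In the domain-separation strategy above, this would be applied independently to the regular-domain and singular-domain (cohesive band) snapshot sets, yielding one reduced strain basis per subdomain.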