Demodulation of Spatial Carrier Images: Performance Analysis of Several Algorithms Using a Single Image
http://link.springer.com/article/10.1007%2Fs11340-013-9741-6
Optical full-field techniques are of great importance in modern experimental mechanics. Although they are reasonably widespread in university laboratories, their adoption by industrial companies remains very limited for several reasons, especially a lack of metrological performance assessment. A full-field measurement can be characterized by its resolution, bias, measuring range, and by a specific quantity, the spatial resolution. The present paper proposes an original procedure to estimate in one single step the resolution, bias and spatial resolution for a given operator (decoding algorithms such as image correlation, low-pass filters, derivation tools ...). This procedure is based on the construction of a particular multi-frequential field and on a Bode diagram representation of the results. This analysis is applied to various phase demodulation algorithms suited to estimating in-plane displacements. GDR CNRS 2519 “Mesures de Champs et Identification en Mécanique des Solides”
Patient-specific simulation of stent-graft deployment within an abdominal aortic aneurysm
In this study, finite element analysis is used to simulate the surgical
deployment procedure of a bifurcated stent-graft on a real patient's arterial
geometry. The stent-graft is modeled using realistic constitutive properties
for both the stent and, most importantly, for the graft. The arterial geometry is
obtained from a pre-operative imaging exam. The obtained results are in good
agreement with the post-operative imaging data. As the whole computational time
was reduced to less than 2 hours, this study constitutes an essential step
towards predictive planning simulations of aneurysmal endovascular surgery.
An analytic comparison of regularization methods for Gaussian Processes
Gaussian Processes (GPs) are a popular approach to predict the output of a
parameterized experiment. They have many applications in the field of Computer
Experiments, in particular to perform sensitivity analysis, adaptive design of
experiments and global optimization. Nearly all applications of GPs require the
inversion of a covariance matrix that, in practice, is often ill-conditioned.
Regularization methodologies are then employed, with consequences for the GPs
that need to be better understood. The two principal methods for dealing with
ill-conditioned covariance matrices are i) pseudoinverse (PI) and ii) adding a
positive constant to the diagonal (the so-called nugget regularization). The
first part of this paper provides an algebraic comparison of PI and nugget
regularizations. Redundant points, responsible for covariance matrix
singularity, are defined. It is proven that pseudoinverse regularization, unlike
nugget regularization, averages the output values and makes the variance zero at
redundant points. However, pseudoinverse and nugget regularizations become
equivalent as the nugget value vanishes. A measure of data-model discrepancy is
proposed which serves for choosing a regularization technique. In the second
part of the paper, a distribution-wise GP is introduced that interpolates
Gaussian distributions instead of data points. The distribution-wise GP can be
seen as an improved regularization method for GPs.
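A minimal numerical sketch of the two regularizations (illustrative kernel and data, not the paper's code): a squared-exponential kernel with a duplicated input makes the covariance matrix singular, and both regularized predictions at the redundant point tend to the average of the conflicting outputs:

```python
import numpy as np

# Illustrative squared-exponential kernel; the data are made up for the demo.
def k(x1, x2, length=1.0):
    return np.exp(-0.5 * (x1 - x2) ** 2 / length ** 2)

X = np.array([0.0, 1.0, 1.0, 2.0])   # redundant points at x = 1.0
y = np.array([0.0, 1.0, 3.0, 0.5])   # conflicting outputs at the redundant point
K = k(X[:, None], X[None, :])        # 4x4 covariance matrix, rank-deficient

x_star = 1.0
k_star = k(x_star, X)

# i) pseudoinverse regularization
mu_pi = k_star @ np.linalg.pinv(K) @ y

# ii) nugget regularization: add a small positive constant to the diagonal
nugget = 1e-6
mu_nug = k_star @ np.linalg.solve(K + nugget * np.eye(len(X)), y)

# Both predictions at the redundant point approach the average of the
# conflicting outputs, (1.0 + 3.0) / 2 = 2.0, consistent with the paper's
# equivalence result as the nugget vanishes.
print(mu_pi, mu_nug)
```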
Additive Kernels for Gaussian Process Modeling
Gaussian Process (GP) models are often used as mathematical approximations of
computationally expensive experiments. Provided that its kernel is suitably
chosen and that enough data is available to obtain a reasonable fit of the
simulator, a GP model can beneficially be used for tasks such as prediction,
optimization, or Monte-Carlo-based quantification of uncertainty. However, these
conditions become unrealistic for classical GPs as the input dimension
increases. One popular alternative is then to turn to Generalized Additive
Models (GAMs), which rely on the assumption that the simulator's response can
approximately be decomposed as a sum of univariate functions. Although this
approach has been successfully applied in approximation, it is not completely
compatible with the GP framework and its versatile applications. The ambition of
the present work is to give insight into the use of GPs for additive models by
integrating additivity within the kernel and proposing a parsimonious numerical
method for data-driven parameter estimation. The first part of this article
deals with the kernels naturally associated with additive processes and the
properties of the GP models based on such kernels. The second
part is dedicated to a numerical procedure based on relaxation for additive
kernel parameter estimation. Finally, the efficiency of the proposed method is
illustrated and compared to other approaches on Sobol's g-function.
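A short sketch of the additive construction (hedged: an illustrative implementation, not the authors' code): the kernel is a sum of one-dimensional squared-exponential kernels, one per input dimension, so GP sample paths are sums of univariate functions:

```python
import numpy as np

# k(x, x') = sum_i k_i(x_i, x'_i): each k_i is a 1D squared-exponential
# kernel, matching the GAM-style additivity assumption.
def rbf_1d(a, b, length):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def additive_kernel(X, Y, lengths):
    """X: (n, d), Y: (m, d) -> (n, m) covariance matrix."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, length in enumerate(lengths):
        K += rbf_1d(X[:, i], Y[:, i], length)
    return K

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3))
K = additive_kernel(X, X, lengths=[0.5, 0.5, 0.5])

# A sum of positive semi-definite matrices is positive semi-definite, and
# the diagonal equals d (here 3), since each 1D kernel gives k_i(x, x) = 1.
print(K.shape, np.allclose(np.diag(K), 3.0))
```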
Population balances in case of crossing characteristic curves: Application to T-cells immune response
The progression of a cell population where each individual is characterized
by the value of an internal variable varying with time (e.g. size, weight, and
protein concentration) is typically modeled by a Population Balance Equation, a
first-order linear hyperbolic partial differential equation. The characteristic
curves associated with the internal variables usually vary monotonically with
the passage of time. A particular difficulty appears when the characteristic
curves exhibit different slopes and therefore cross each other at certain times.
Such a crossing phenomenon occurs, in particular, during the T-cell immune
response, when the concentrations of expressed proteins depend upon each other
and when some global protein (e.g. Interleukin signals) shared by all T-cells is
also involved. At these crossing points, the linear advection equation cannot be
solved in the classical way for hyperbolic conservation laws. Therefore, a new
Transport Method (TM) is introduced in this article which allows us to find the
population density function for such processes. The newly developed TM is shown
to work in the case of crossing and to provide a smooth solution at the crossing
points, in contrast to the classical PDF techniques. Comment: 18 pages, 10
figures
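The abstract does not reproduce the equation; for reference, a standard form of a population balance with one internal variable $x$, number density $n(x,t)$, and growth rate $G(x,t)$ reads (an assumed generic form, not necessarily the authors' exact model):

```latex
\frac{\partial n(x,t)}{\partial t}
  + \frac{\partial}{\partial x}\bigl[G(x,t)\,n(x,t)\bigr] = 0 ,
\qquad
\text{characteristics: } \frac{dx}{dt} = G(x,t) .
```

Crossing occurs when two characteristics started from different initial values of $x$ meet at the same point $(x,t)$, which can happen whenever $G$ makes their slopes differ.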
A General Framework for Representing, Reasoning and Querying with Annotated Semantic Web Data
We describe a generic framework for representing and reasoning with annotated
Semantic Web data, a task becoming increasingly important with the recent
increase in inconsistent and unreliable meta-data on the web. We formalise the
annotated language, the corresponding deductive system, and address the query
answering problem. Previous contributions on specific RDF annotation domains are
encompassed by our unified reasoning formalism, as we show by instantiating it
on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we
provide a generic method for combining multiple annotation domains, making it
possible to represent, e.g., temporally-annotated fuzzy RDF. Furthermore, we
address the development of a query language -- AnQL -- that is inspired by
SPARQL, including several features of SPARQL 1.1 (subqueries, aggregates,
assignment, solution modifiers) along with the formal definitions of their
semantics.
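A hedged toy sketch of annotated inference (not the paper's formalism; the rules and degrees are illustrative): in the fuzzy domain each triple carries a truth degree in [0, 1], and, following the usual fuzzy-RDF convention, rule bodies combine degrees with min while alternative derivations combine with max:

```python
# Fuzzy-annotated triples: degree of truth in [0, 1] (made-up example data).
triples = {
    ("cat", "subClassOf", "mammal"): 1.0,
    ("mammal", "subClassOf", "animal"): 0.9,
    ("tom", "type", "cat"): 0.8,
}

def infer(triples):
    """One round of annotated inference: subclass transitivity and typing."""
    derived = dict(triples)
    for (s1, p1, o1), d1 in triples.items():
        for (s2, p2, o2), d2 in triples.items():
            new = None
            if p1 == p2 == "subClassOf" and o1 == s2:
                new = (s1, "subClassOf", o2)
            elif p1 == "type" and p2 == "subClassOf" and o1 == s2:
                new = (s1, "type", o2)
            if new is not None:
                d = min(d1, d2)                               # domain "times"
                derived[new] = max(derived.get(new, 0.0), d)  # domain "plus"
    return derived

closure = infer(infer(triples))
print(closure[("tom", "type", "animal")])  # min(0.8, 1.0, 0.9) = 0.8
```

Swapping the (min, max) pair for another annotation domain (e.g. interval intersection/union for temporal annotations) leaves the inference loop unchanged, which is the spirit of the paper's domain-generic approach.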
Mechanical identification of layer-specific properties of mouse carotid arteries using 3D-DIC and a hyperelastic anisotropic constitutive model
The role of mechanics is known to be of primary importance in many arterial
diseases; however, determining the mechanical properties of arteries remains a
challenge. This paper discusses the identifiability of the passive mechanical
properties of a mouse carotid artery, taking into account the orientation of
collagen fibres in the medial and adventitial layers. On the basis of 3D
digital image correlation measurements of the surface strain during an
inflation/extension test, an inverse identification method is set up. It
involves a 3D finite element mechanical model of the mechanical test and an
optimisation algorithm. A two-layer constitutive model derived from the
Holzapfel model is used, with five and then seven parameters. The
five-parameter model is successfully identified, providing layer-specific fibre
angles. The seven-parameter model is over-parameterised, yet it is shown that
additional data from a simple tension test make the identification of refined
layer-specific data reliable. Comment: PB-CMBBE-15.pdf
Direct strain and slope measurement using 3D DSPSI
This communication presents a new implementation of DSPSI. Its main features
are: 1. an advanced model taking the beam divergence into account; 2. coupling
with a surface shape measurement in order to generalize DSPSI to nonplanar
surfaces; 3. the use of a small shear distance, made possible by a precise
measurement procedure. A first application on a modified Iosipescu shear test is
presented and compared to classical DIC measurements.
Bayesian Identification of Elastic Constants in Multi-Directional Laminate from Moiré Interferometry Displacement Fields
The ply elastic constants needed for classical lamination theory analysis of
multi-directional laminates may differ from those obtained from unidirectional
laminates because of three dimensional effects. In addition, the unidirectional
laminates may not be available for testing. In such cases, full-field
displacement measurements offer the potential of identifying several material
properties simultaneously. For that, it is desirable to create complex
displacement fields that are strongly influenced by all the elastic constants.
In this work, we explore the potential of using a laminated plate with an
open hole under traction loading to achieve that and identify all four ply
elastic constants (E1, E2, ν12, G12) at once. However, the accuracy of the
identified properties may not be as good as properties measured from individual
tests due to the complexity of the experiment, the relative insensitivity of
the measured quantities to some of the properties and the various possible
sources of uncertainty. It is thus important to quantify the uncertainty (or
confidence) with which these properties are identified. Here, Bayesian
identification is used for this purpose, because it can readily model all the
uncertainties in the analysis and measurements, and because it provides the
full coupled probability distribution of the identified material properties. In
addition, it offers the potential to combine properties identified based on
substantially different experiments. The full-field measurement is obtained by
moiré interferometry. For computational efficiency, the Bayesian approach was
applied to a proper orthogonal decomposition (POD) of the displacement fields.
The analysis showed that the four orthotropic elastic constants are determined
with quite different confidence levels as well as with significant correlation.
Comparison with manufacturing specifications showed substantial difference in
one constant, and this conclusion agreed with earlier measurement of that
constant by a traditional four-point bending test. It is possible that the POD
approach did not take full advantage of the copious data provided by the
full-field measurements, and for that reason the data are provided for others to
use (as online material attached to the article).
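The POD step can be sketched as follows (a minimal stand-in with synthetic snapshots, not the article's data or code): each displacement field is flattened into a column, the left singular vectors of the snapshot matrix give the POD modes, and the measurements are reduced to a few modal coefficients:

```python
import numpy as np

# Synthetic stand-in for the displacement snapshots: each column is one
# flattened displacement field; here the fields are (nearly) rank 3 so
# three POD modes capture almost everything.
rng = np.random.default_rng(1)
n_pixels, n_snapshots = 500, 40
snapshots = rng.standard_normal((n_pixels, 3)) @ rng.standard_normal((3, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_pixels, n_snapshots))  # measurement noise

# POD via the (reduced) SVD: the left singular vectors are the POD modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
n_modes = 3
modes = U[:, :n_modes]           # orthonormal POD basis
coeffs = modes.T @ snapshots     # reduced coordinates, 3 x n_snapshots

# With the dominant modes retained, the reconstruction error is small, so a
# Bayesian likelihood can be evaluated on `coeffs` instead of raw pixels.
recon = modes @ coeffs
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(rel_err)
```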
Formal verification of a software countermeasure against instruction skip attacks
Fault attacks against embedded circuits have enabled many new attack paths
against secure circuits. Every attack path relies on a specific fault model
which defines the type of faults the attacker can perform. On embedded
processors, a fault model consisting of an assembly instruction skip can be very
useful for an attacker and has been obtained using several fault injection
means. To counter this threat, some countermeasure schemes relying on temporal
redundancy have been proposed. Nevertheless, double fault injection within a
long enough time interval is practical and can bypass those countermeasure
schemes. Some fine-grained countermeasure schemes have also been proposed for
specific instructions. However, to the best of our knowledge, no approach that
secures a generic assembly program in order to make it fault-tolerant to
instruction skip attacks has been formally proven yet. In this paper, we provide
a fault-tolerant replacement sequence for almost all the instructions of the
Thumb-2 instruction set and a formal verification of this fault tolerance. This
simple transformation adds a reasonably good security level to an embedded
program and makes practical fault injection attacks much harder to achieve.