1,592 research outputs found
A BAYESIAN FRAMEWORK FOR STRUCTURAL HEALTH MANAGEMENT USING ACOUSTIC EMISSION MONITORING AND PERIODIC INSPECTIONS
Many aerospace and civil infrastructures currently in service are at or beyond their design service-life limit. The ability to assess and predict their state of damage is critical in ensuring the structural integrity of such aging structures. The empirical models used for crack growth prediction suffer from various uncertainties; these models are often based on idealized theories and simplistic assumptions and may fail to capture the underlying physics of the complex failure mechanisms. The other source of uncertainty is the scarcity of relevant material-level test data required to estimate the parameters of empirical models.
To avoid in-service failure, the structures must be inspected routinely to ensure that no damage of significant size is present. Currently, the structure must be taken out of service and partly disassembled to expose the critical areas for nondestructive inspection (NDI). This is an expensive and time-consuming process.
Structural health monitoring (SHM) is an emerging research area for online assessment of structural integrity using appropriate NDI technology. SHM can contribute significantly to structural diagnosis and prognosis.
Empirical models, offline periodic inspections and online SHM systems can each provide an independent assessment of structural integrity. In this research, a novel structural health management framework is proposed in which a Bayesian knowledge fusion technique is used to combine the information from all of these sources in a systematic manner.
This work focuses on monitoring fatigue crack growth in metallic structures using acoustic emission (AE) technology. Fatigue crack growth tests with real-time acoustic emission monitoring are conducted on compact tension (CT) specimens made of 7075 aluminum. Proper filtration of the resulting AE signals reveals a log-linear relationship between the fracture parameters (da/dN and ΔK) and select AE features; a flexible statistical model is developed to describe the relationship between these parameters.
A Bayesian regression technique is used to estimate the model parameters from the experimental data. The model is then used to calculate two quantities of interest for structural health management: (a) an AE-based instantaneous damage severity index, and (b) an AE-based estimate of the crack size distribution at a given point in time, assuming a known initial crack size distribution.
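As an illustration of this estimation step, the sketch below fits a log-linear relation of the assumed form log(da/dN) = b0 + b1·log(AE feature) with a simple Metropolis sampler; the data, priors and tuning constants are placeholders, not the authors' experimental values.

```python
# Minimal sketch, not the authors' implementation: Bayesian fit of an assumed
# log-linear model  log(da/dN) = b0 + b1 * log(AE feature)  with a simple
# Metropolis sampler; the data, priors and tuning constants are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for log(AE count rate) vs. log(crack growth rate)
x = np.linspace(0.0, 3.0, 40)
y = 0.5 + 1.2 * x + rng.normal(0.0, 0.2, size=x.size)

def log_posterior(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (b0 + b1 * x)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2) - y.size * np.log(sigma)
    log_prior = -0.5 * (b0 ** 2 + b1 ** 2) / 100.0 - 0.5 * log_sigma ** 2 / 10.0  # weak priors
    return log_lik + log_prior

theta = np.array([0.0, 1.0, 0.0])
current = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.05, size=3)
    cand = log_posterior(proposal)
    if np.log(rng.uniform()) < cand - current:   # Metropolis accept/reject
        theta, current = proposal, cand
    samples.append(theta.copy())

posterior = np.array(samples[5000:])             # discard burn-in
print("posterior mean of (b0, b1, log_sigma):", posterior.mean(axis=0))
```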
Finally, recursive Bayesian estimation is used for online integration of the structural health assessment information obtained from the various sources mentioned above. The evidence used in Bayesian updating can be observed crack sizes and/or observed crack growth rates. The outcome of this approach is an updated crack size distribution as well as updated model parameters. The model with updated parameters is then used for prognosis given an assumed future usage profile.
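A minimal sketch of the recursive updating idea, assuming a particle-filter implementation (the paper's exact algorithm is not reproduced here): particles representing the crack size distribution are propagated through a crude Paris-law-type growth model and re-weighted by noisy crack size observations; all parameter values are illustrative.

```python
# Minimal sketch of the recursive updating step, assuming a particle filter (the
# paper's exact algorithm is not reproduced here); all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 5000
C, m, dK = 1e-8, 3.0, 15.0        # assumed Paris-law constants and a fixed Delta K (illustrative)
obs_sigma = 0.5                   # assumed crack-size measurement noise (mm)

a = rng.normal(2.0, 0.3, n_particles)            # initial crack size distribution (mm)
w = np.full(n_particles, 1.0 / n_particles)      # particle weights

def propagate(a, cycles):
    """Grow each particle over a block of cycles, with lognormal model noise."""
    growth = C * dK ** m * cycles                # crude constant-Delta-K increment (mm)
    return a + growth * np.exp(rng.normal(0.0, 0.1, a.size))

for observed_a in [2.6, 3.1, 3.9]:               # e.g. sizes from NDI or AE-based estimates (mm)
    a = propagate(a, cycles=20_000)
    w *= np.exp(-0.5 * ((observed_a - a) / obs_sigma) ** 2)   # Bayesian update
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)           # resample
    a, w = a[idx], np.full(n_particles, 1.0 / n_particles)
    print(f"updated mean crack size: {a.mean():.2f} mm")
```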
What can we learn from ancient sales ledgers?
Victorian-era customer purchase records from a London tailor reveal a close fit between actual and predicted buying using the stochastic NBD model. Within this conceptual framework, novel data generate useful results, in this case showing that the buying patterns of high-end customers in the 19th century are familiar and comparable to those of their 21st-century counterparts. This is useful in categories that lack longitudinal data, such as many luxury goods. The first contribution of this research is to show that luxury goods consumer culture is stable over time. Second, the NBD model can be used to predict market penetration growth. Third, the results indicate that marketers should seek to increase the total number of customers rather than focus only on heavy buyers. Finally, the NBD model can serve as a benchmark tool to evaluate any real change in buying behavior (as opposed to stochastic change) caused by marketing activities.
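For readers unfamiliar with the NBD machinery, the sketch below fits a negative binomial distribution to hypothetical purchases-per-customer counts by the method of moments and compares observed with predicted buying, which is the kind of actual-versus-predicted check described above; the ledger data themselves are not reproduced here.

```python
# Illustrative sketch only (the ledger data are not reproduced): fit an NBD model to
# hypothetical purchases-per-customer counts by the method of moments and compare
# observed with predicted buying.
import numpy as np
from scipy.stats import nbinom

purchases = np.array([0] * 520 + [1] * 230 + [2] * 120 + [3] * 70 + [4] * 35 + [5] * 15 + [6] * 10)

mean, var = purchases.mean(), purchases.var()
p = mean / var                       # method-of-moments estimates of the NBD parameters
n = mean ** 2 / (var - mean)

for k in range(7):
    observed = np.mean(purchases == k)
    predicted = nbinom.pmf(k, n, p)
    print(f"{k} purchases: observed {observed:.3f}, NBD predicted {predicted:.3f}")

print("predicted penetration (share buying at least once):", 1 - nbinom.pmf(0, n, p))
```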
Computation of bounds for anchor problems in limit analysis and decomposition techniques
Numerical techniques for the computation of strict bounds in limit analysis have been developed for more than thirty years. Their efficiency has been substantially improved in the last ten years, and they have been successfully applied to academic problems, foundations and excavations. Here we extend the theoretical background to problems with anchors, interface conditions and joints. These extensions are relevant for the analysis of retaining and anchored walls, which we study in this work. The analysis of three-dimensional domains remains scarce: from the computational standpoint, the memory requirements and CPU time become prohibitive when mesh adaptivity is employed. For this reason, we also present the application of decomposition techniques to the optimisation problem of limit analysis. We discuss the performance of different methodologies adopted in the literature for general optimisation problems, such as primal and dual decomposition, and suggest strategies suitable for the parallelisation of large three-dimensional problems. The proposed decomposition techniques are tested on representative problems.
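To make the dual-decomposition idea concrete, the toy sketch below solves two quadratic subproblems coupled by a single constraint and enforces the coupling with a subgradient-updated multiplier; in the limit-analysis setting the subproblems would correspond to subdomain optimisations coupled through interface variables. This is a generic illustration, not the formulation used in the paper.

```python
# Toy illustration of dual decomposition, not the paper's limit-analysis formulation:
# two quadratic subproblems coupled by x1 = x2 are solved independently and the
# coupling is enforced by a multiplier updated with a subgradient step.
a, b = 3.0, 7.0        # data of the local objectives (x1 - a)^2 and (x2 - b)^2
y, step = 0.0, 0.5     # dual multiplier for the coupling constraint and its step size

for _ in range(50):
    x1 = a - y / 2.0               # closed-form minimiser of (x1 - a)^2 + y * x1
    x2 = b + y / 2.0               # closed-form minimiser of (x2 - b)^2 - y * x2
    y += step * (x1 - x2)          # subgradient ascent on the dual of x1 - x2 = 0

print(x1, x2, y)                   # x1 and x2 both converge to (a + b) / 2
```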
Excess water production diagnosis in oil fields using ensemble classifiers
In hydrocarbon production, oil is more often than not produced commingled with water. As long as the water production rate is below the economic water/oil ratio (WOR) level, no water shutoff treatment is needed. Problems arise when the water production rate exceeds the economic WOR level, producing little or no oil with it. Oil and gas companies set aside considerable resources for strategies to manage excessive water production and to minimize its environmental and economic impact. However, due to the lack of proper diagnostic techniques, water shutoff technologies are not always applied proficiently. Most conventional techniques used for water diagnosis can only identify the existence of excess water and cannot pinpoint the exact type and cause of the water production. A common industrial practice is to monitor the trend of WOR against time to identify two types of WPMs, namely coning and channelling. Although this approach may give reasonable results in specific scenarios, it has been demonstrated that the WOR plots are not general and that there are deficiencies in their current usage.
Stepping away from the traditional approach, we extracted predictive data points from plots of WOR against the oil recovery factor. We considered three scenarios: pre-water production, post-water production with static reservoir characteristics, and post-water production without static reservoir characteristics. Next, we used tree-based ensemble classifiers to integrate the extracted data points with a range of basic reservoir characteristics and to uncover the predictive information hidden in the integrated data. The interpretability of the generated ensemble classifiers was improved by constructing a new dataset smeared from the original dataset and generating a depictive tree for each ensemble using a combination of the new and original datasets. To generate the depictive tree we used a class of tree classifiers called the logistic model tree (LMT), which combines linear logistic regression with tree induction to overcome the disadvantages associated with either method alone.
Our results show prediction accuracies of at least 90%, 93% and 82% for the three scenarios, together with an easy-to-implement workflow. Adoption of this methodology would lead to accurate and timely management of water production, saving oil and gas companies considerable time and money.
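A minimal sketch of the classification step under stated assumptions: a logistic model tree (LMT) is not available in scikit-learn, so a random forest stands in for the tree-based ensemble, and the features and labels below are random placeholders for the WOR-derived data points and reservoir characteristics.

```python
# Minimal sketch under stated assumptions: LMT is not available in scikit-learn, so a
# random forest stands in for the tree-based ensemble; features and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_wells = 300

X = rng.normal(size=(n_wells, 12))          # hypothetical WOR-curve points + reservoir properties
y = rng.integers(0, 2, size=n_wells)        # placeholder labels, e.g. 0 = coning, 1 = channelling

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```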
Decomposition techniques for computational limit analysis
Limit analysis is relevant in many practical engineering areas, such as the design of mechanical structures or the analysis of soil mechanics. The theory of limit analysis assumes a rigid, perfectly plastic material to model the collapse of a solid that is subjected to a static load distribution.
Within this context, the problem of limit analysis is to consider a continuum that is subjected to a fixed force distribution consisting of both volume and surface loads. The objective is then to obtain the maximum multiple of this force distribution that causes the collapse of the body; this multiple is usually called the collapse multiplier, and it can be obtained by solving an infinite-dimensional nonlinear optimisation problem. Its computation therefore requires two steps: the first is to discretise the corresponding analytical problem by introducing finite-dimensional spaces, and the second is to solve a nonlinear optimisation problem, which represents the major difficulty and challenge in the numerical solution process.
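In generic form, the discretised problem can be written as the bound-type optimisation below (a sketch of the standard static formulation, not the thesis' own notation), where λ is the collapse multiplier, σ the discrete stresses, B the discrete equilibrium operator, f the fixed load distribution and B_e the yield set of element e:

```latex
% Sketch of the standard discretised bound-type formulation; the notation is generic,
% not the thesis' own.
\lambda^{*} \;=\; \max_{\lambda,\,\boldsymbol{\sigma}} \ \lambda
\quad \text{subject to} \quad
\mathbf{B}^{\mathsf{T}} \boldsymbol{\sigma} = \lambda\, \mathbf{f},
\qquad
\boldsymbol{\sigma}_{e} \in \mathcal{B}_{e} \ \ \text{for every element } e .
```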
Solving this optimisation problem, which may become very large and computationally expensive in three-dimensional settings, is the second important step. Recent techniques have allowed scientists to determine upper and lower bounds of the load factor under which the structure will collapse. Despite the attractiveness of these results, their application to practical examples is still hampered by the size of the resulting optimisation problem. A remedy is to use decomposition methods and to parallelise the corresponding optimisation problem.
The aim of this work is to present a decomposition technique that can reduce the memory requirements and computational cost of this type of problem. For this purpose, we exploit an important feature of the underlying optimisation problem: the objective function contains a single scalar variable (the load factor). The main contributions of the thesis are the reformulation of the constraints of the problem as the intersection of appropriate sets, and the proposal of efficient algorithmic strategies to solve the resulting decomposition iteratively.
A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction
With advances in Artificial Intelligence, humanoid robots have started to interact with ordinary people on the basis of a growing understanding of psychological processes. Accumulating evidence in Human-Robot Interaction (HRI) suggests that research is focusing on emotional communication between humans and robots in order to create social perception, cognition, desired interaction and sensation.
Furthermore, robots need to perceive human emotions and adapt their behavior to help and interact with people in various environments. The most natural way to recognize basic emotions is to extract sets of features from human speech, facial expressions and body gestures. A system for emotion recognition based on speech analysis and facial feature extraction can therefore have interesting applications in Human-Robot Interaction. The Human-Robot Interaction ontology relates this knowledge to physics (sound analysis), mathematics (face detection and perception), philosophical theory (behavior) and the wider robotic science context.
In this project, we carry out a study to recognize the basic emotions (sadness, surprise, happiness, anger, fear and disgust), and we propose a methodology and a software program for the classification of emotions based on speech analysis and facial feature extraction.
The speech analysis phase investigated the appropriateness of using acoustic properties (pitch value, pitch peak, pitch range, intensity and formants) and phonetic properties (speech rate) of emotive speech with the freeware program PRAAT, and consisted of generating and analyzing graphs of the speech signals. The proposed architecture investigated the appropriateness of analyzing emotive speech with minimal use of signal processing algorithms. Thirty participants repeated five sentences in English (with durations typically between 0.40 s and 2.5 s) in order to extract data on pitch (value, range and peak) and rising-falling intonation. Pitch alignments (peak, value and range) were evaluated and the results were compared with intensity and speech rate.
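A rough Python equivalent of this feature-extraction step is sketched below, under the assumption that the parselmouth wrapper around Praat is acceptable in place of the PRAAT program used in the study; the file name is a placeholder and the duration is only a crude speech-rate proxy without a syllable count.

```python
# Rough sketch, assuming the parselmouth wrapper around Praat (the study used the
# PRAAT program itself); the file name is a placeholder.
import numpy as np
import parselmouth

snd = parselmouth.Sound("utterance.wav")

pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                  # keep voiced frames only
pitch_value, pitch_peak = np.median(f0), f0.max()
pitch_range = f0.max() - f0.min()

intensity = snd.to_intensity()
mean_intensity = intensity.values.mean()

duration = snd.duration                          # crude speech-rate proxy only
print(pitch_value, pitch_peak, pitch_range, mean_intensity, duration)
```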
The facial feature extraction phase uses a mathematical formulation (Bézier curves) and geometric analysis of the facial image, based on measurements of a set of Action Units (AUs), to classify the emotion. The proposed technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The new data are then merged with reference data in order to recognize the basic emotion.
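A minimal sketch of the Bézier-curve step, assuming it works roughly as follows: a cubic Bézier is evaluated over landmark-derived control points (hypothetical eyebrow coordinates here) and simple geometric measurements of the curve serve as features for AU-style classification.

```python
# Minimal sketch, assuming the Bézier step works roughly like this; the control points
# are hypothetical eyebrow coordinates, not data from the study.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Evaluate a cubic Bézier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical eyebrow control points in image (pixel) coordinates
p0, p1, p2, p3 = map(np.array, ([10, 50], [25, 35], [45, 33], [60, 48]))
curve = cubic_bezier(p0, p1, p2, p3)

arc_length = np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1))
arch_height = (p0[1] + p3[1]) / 2 - curve[:, 1].min()   # how much the brow arches upward
print(f"arc length: {arc_length:.1f} px, arch height: {arch_height:.1f} px")
```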
Finally, we combined the two proposed algorithms (speech analysis and facial expression analysis) to design a hybrid technique for emotion recognition. This technique has been implemented in a software program that can be employed in Human-Robot Interaction.
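One possible score-level fusion of the two classifiers is sketched below; the weighting rule is an assumption, since the exact combination strategy is not detailed in the abstract.

```python
# One possible score-level fusion (an assumption; the exact combination rule is not
# given in the abstract): weighted average of per-emotion probabilities from the
# speech and face classifiers, followed by an argmax decision.
import numpy as np

EMOTIONS = ["sadness", "surprise", "happiness", "anger", "fear", "disgust"]

def fuse(p_speech, p_face, w_speech=0.5):
    """Weighted average of two probability vectors over the six basic emotions."""
    p = w_speech * np.asarray(p_speech) + (1 - w_speech) * np.asarray(p_face)
    return EMOTIONS[int(np.argmax(p))]

print(fuse([0.1, 0.1, 0.5, 0.1, 0.1, 0.1], [0.05, 0.15, 0.6, 0.1, 0.05, 0.05]))
```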
The efficiency of the methodology was evaluated by experimental tests on 30 individuals (15 female and 15 male, 20 to 48 years old) from different ethnic groups, namely: (i) ten European adults, (ii) ten Asian (Middle Eastern) adults and (iii) ten American adults.
The proposed technique made it possible to recognize the basic emotions in most cases.
Interacting scalar tensor cosmology in light of SNeIa, CMB, BAO and OHD observational data sets
In this work, an interacting chameleon-like scalar field scenario is investigated using the SNeIa, CMB, BAO and OHD data sets. The investigation is carried out by introducing an ansatz for the effective dark energy equation of state which mimics the behaviour of chameleon-like models. Based on this assumption, several cosmological parameters, including the Hubble, deceleration and coincidence parameters, are analysed in this mechanism. It is found that, to estimate the free parameters of a theoretical model while accounting for systematic errors, it is better to consider all of the above observational data sets; considering SNeIa, CMB and BAO but disregarding OHD may lead to different results. Also, to obtain a better overlap between the contours and the constraint, the χ² function can be re-weighted. The relative probability functions are plotted for the marginalized likelihoods at the two-dimensional confidence levels, and the values of the free parameters that maximize the marginalized likelihoods at these confidence levels are determined. In addition, based on these calculations, the minimum value of the χ² with respect to the free parameters of the ansatz for the effective dark energy equation of state is obtained.
Comment: Accepted by the European Physical Journal C; 13 pages, 17 figures and 4 tables.
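For reference, the combined fit implied above typically takes the standard form below (a sketch, not the paper's exact expressions), with θ denoting the free parameters of the ansatz:

```latex
% Sketch of the standard combined chi-square over the four probes and the likelihood
% used for the marginalised confidence contours; not the paper's exact expressions.
\chi^{2}_{\mathrm{tot}}(\theta) = \chi^{2}_{\mathrm{SNeIa}} + \chi^{2}_{\mathrm{CMB}}
  + \chi^{2}_{\mathrm{BAO}} + \chi^{2}_{\mathrm{OHD}},
\qquad
\mathcal{L}(\theta) \propto \exp\!\left(-\tfrac{1}{2}\,\chi^{2}_{\mathrm{tot}}(\theta)\right).
```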