Machine Learning Based Defect Detection in Robotic Wire Arc Additive Manufacturing
In the last ten years, research interest in various aspects of the Wire Arc Additive Manufacturing (WAAM) process has grown exponentially, and, more recently, efforts to integrate automatic quality assurance into the WAAM process have been increasing. The lack of a reliable online monitoring system for the WAAM process is a key gap to be filled before the technology can be applied commercially, as such a system would enable components produced by the process to be qualified against the relevant standards and hence be fit for use in critical applications in the aerospace or naval sectors. However, most existing monitoring methods detect or solve issues using only a single sensor; no monitoring system integrating different sensors or data sources has been developed for WAAM in the last three years. In addition, the complex principles and calculations of conventional algorithms make them hard to apply to WAAM manufacturing, given its characteristically long manufacturing cycles. Intelligent algorithms offer built-in advantages in processing and analysing data, especially the large datasets generated during these long manufacturing cycles. In this research, two intelligent WAAM defect detection modules are developed as the basis of an intelligent WAAM defect detection system. The first module takes welding arc current/voltage signals recorded during deposition as inputs and uses algorithms such as support vector machines (SVM) and incremental SVM to identify disturbances and continuously learn new defects; the incremental learning module achieved an F1-score above 90% on new defects. The second module takes CCD images as inputs and uses object detection algorithms to predict unfused defects during the WAAM manufacturing process with above 72% mAP. This research paves the way for developing an intelligent WAAM online monitoring system in the future. Together with process modelling, simulation and feedback control, it reveals the future opportunity for a digital twin system.
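The signal-based module described above can be sketched as a standard SVM classification pipeline over windowed current/voltage statistics. This is a minimal illustration only: the windowed features (mean, standard deviation, peak-to-peak) and the synthetic "defective" signals are assumptions, since the abstract does not specify the thesis's actual feature set or data.

```python
# Sketch: classify windows of welding current/voltage signals with an SVM.
# Features and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def window_features(current, voltage):
    """Reduce one signal window to simple per-channel statistics."""
    feats = []
    for sig in (current, voltage):
        feats += [sig.mean(), sig.std(), sig.max() - sig.min()]
    return feats

def make_window(defective):
    """Synthetic window: 'defective' windows carry extra noise."""
    t = np.linspace(0, 1, 200)
    current = 150 + 5 * np.sin(40 * t) + rng.normal(0, 8 if defective else 1, t.size)
    voltage = 20 + rng.normal(0, 2 if defective else 0.3, t.size)
    return window_features(current, voltage)

X = np.array([make_window(d) for d in [0] * 100 + [1] * 100])
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                 # train on every other window
accuracy = clf.score(X[1::2], y[1::2])  # evaluate on the held-out windows
print(f"held-out accuracy: {accuracy:.2f}")
```

For the continual-learning part of the module, an incrementally updatable classifier (e.g. one supporting partial fitting) would replace the batch SVM shown here.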
Conditioned haptic perception for 3D localization of nodules in soft tissue palpation with a variable stiffness probe
This paper provides a solution for fast haptic information gain during soft tissue palpation using a Variable Lever Mechanism (VLM) probe. More specifically, we investigate how varying the stiffness of the probe conditions the likelihood functions of the kinesthetic force and tactile sensor measurements during a palpation task, for two sweeping directions. Using knowledge obtained from past probing trials or Finite Element (FE) simulations, we implement this likelihood conditioning in an autonomous palpation control strategy. Based on a recursive Bayesian inference framework, this new control strategy adapts the sweeping direction and the stiffness of the probe to detect abnormally stiff inclusions in soft tissue. This original control strategy for compliant palpation probes achieves sub-millimeter accuracy for the 3D localization of nodules in a soft tissue phantom, as well as 100% reliability in detecting the existence of nodules in a soft phantom.
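The recursive Bayesian inference at the core of this strategy can be sketched as a grid-based posterior update over candidate nodule positions. The 1-D grid and simple Gaussian measurement likelihood below are simplifying assumptions; in the paper, the likelihoods are conditioned on probe stiffness and sweep direction, and localization is 3-D.

```python
# Sketch: recursive Bayesian localization of a nodule on a 1-D grid.
# Gaussian likelihood and grid discretization are illustrative assumptions.
import numpy as np

grid = np.linspace(0.0, 50.0, 501)               # candidate positions (mm)
posterior = np.full(grid.size, 1.0 / grid.size)  # uniform prior

true_pos, sensor_sigma = 31.7, 2.0
rng = np.random.default_rng(1)

def update(posterior, measurement, sigma):
    """One recursive Bayes step: posterior ∝ likelihood × prior."""
    likelihood = np.exp(-0.5 * ((grid - measurement) / sigma) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

for _ in range(20):                              # twenty palpation sweeps
    z = true_pos + rng.normal(0, sensor_sigma)   # noisy stiffness reading
    posterior = update(posterior, z, sensor_sigma)

estimate = grid[posterior.argmax()]
print(f"estimated nodule position: {estimate:.1f} mm")
```

Each sweep sharpens the posterior, which is how repeated noisy measurements yield sub-millimeter estimates despite a millimeter-scale sensor noise.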
Bayesian and echoic log-surprise for auditory saliency detection
Mención Internacional en el título de doctor (International Mention in the doctoral degree)

Attention is defined as the mechanism that allows the brain to categorize and prioritize information acquired through our senses and to act according to the environmental context and the available mental resources. The attention mechanism can be subdivided into two types: top-down and bottom-up. Top-down attention is goal- or task-driven and implies that a participant has some previous knowledge about the task that he or she is trying to solve. Alternatively, bottom-up attention depends only on the perceived features of the target object and its surroundings, and is a very fast mechanism that is believed to be crucial for human survival.
Bottom-up attention is commonly known as saliency or salience, and can be defined as a property of signals perceived by our senses that makes them attentionally prominent for some reason.

This thesis concerns saliency detection in audio signals using automatic algorithms. In recent years, progress in visual saliency research has been remarkable; there, the goal is to detect which objects or content in a visual scene are prominent enough to capture the attention of a spectator. However, this progress has not carried over to other modalities. This is the case for auditory saliency, where there is still no consensus on how to measure the saliency of an event, and consequently no specifically labeled datasets exist for comparing new algorithms and proposals.
In this work, two new auditory saliency detection algorithms are presented and evaluated. For their evaluation, we make use of Acoustic Event Detection/Classification datasets, whose labels include onset times among other aspects. We use such datasets and labeling because there is psychological evidence suggesting that human beings are quite sensitive to the spontaneous appearance of acoustic objects. We use three datasets: DCASE 2016 (Task 2), MIVIA road audio events and UPC-TALP, totalling 3400 labeled acoustic events. The algorithms we employ for benchmarking comprise the saliency detection techniques designed by Kayser and Kalinli, a voice activity detector, an energy thresholding method, and four music information retrieval onset detectors: NWPD, WPD, CD and SF.
We put forward two auditory saliency algorithms: Bayesian Log-surprise and Echoic Log-surprise. The former is an evolution of Bayesian Surprise, a methodology that detects anomalous or salient events by means of the Kullback-Leibler divergence computed between two consecutive temporal windows. As the output Surprise signal has some drawbacks that should be overcome, we introduce several improvements, leading to the approach that we named Bayesian Log-surprise. These include an amplitude compression stage and the addition of perceptual knowledge to pre-process the input signal.
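The window-to-window surprise computation can be sketched as follows, modeling each window as a Gaussian so the Kullback-Leibler divergence has a closed form, and log-compressing the result as in the amplitude compression stage mentioned above. The Gaussian window model is an illustrative simplification of the thesis's formulation.

```python
# Sketch: Bayesian Log-surprise as KL divergence between Gaussian models of
# consecutive signal windows, with log amplitude compression.
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """Closed-form KL( N(mu0,var0) || N(mu1,var1) )."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def log_surprise(signal, win=100):
    surprises = []
    for start in range(win, signal.size - win, win):
        prev = signal[start - win:start]
        curr = signal[start:start + win]
        # KL(current || previous): surprise of the new window given the past.
        kl = kl_gaussian(curr.mean(), curr.var() + 1e-12,
                         prev.mean(), prev.var() + 1e-12)
        surprises.append(np.log1p(kl))   # amplitude compression
    return np.array(surprises)

# A quiet signal with one loud burst: surprise should peak at the burst onset.
rng = np.random.default_rng(0)
x = rng.normal(0, 0.1, 2000)
x[1000:1100] += rng.normal(0, 2.0, 100)  # salient acoustic "event"
s = log_surprise(x)
print("peak surprise at window", s.argmax())
```

In the thesis, the pre-processing additionally applies perceptual knowledge to the input signal before the surprise computation.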
The latter, named Echoic Log-surprise, fuses several Bayesian Log-surprise signals computed with different memory lengths, each representing a different temporal scale. The fusion is performed using statistical divergences, resulting in saliency signals with certain advantages, such as a significant reduction in the background noise level and a noticeable increase in the detection scores.
Moreover, since the original Echoic Log-surprise presents certain limitations, we propose a set of improvements: we test some alternative statistical divergences, we introduce a new fusion strategy, and we replace the static thresholding mechanism used to determine whether the final output signal is salient with a dynamic thresholding algorithm. Results show that the most significant modification in terms of performance is the latter, a proposal that reduces the dispersion observed in the scores produced by the system and enables online operation.
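An online dynamic threshold of this kind can be sketched with a running median plus a multiple of the median absolute deviation over a sliding history. The median/MAD rule and its parameters are assumptions for illustration; the abstract does not give the thesis's exact thresholding rule.

```python
# Sketch: dynamic thresholding of a saliency signal using running median+MAD.
from collections import deque
import numpy as np

def salient_flags(values, history=50, k=4.0):
    """Flag values exceeding median + k*MAD of the recent past (online)."""
    past = deque(maxlen=history)
    flags = []
    for v in values:
        if len(past) >= 10:              # wait for a minimal history
            arr = np.array(past)
            med = np.median(arr)
            mad = np.median(np.abs(arr - med)) + 1e-9
            flags.append(bool(v > med + k * mad))
        else:
            flags.append(False)
        past.append(v)
    return flags

rng = np.random.default_rng(2)
s = rng.normal(1.0, 0.1, 200)
s[120] = 5.0                             # one salient spike
flags = salient_flags(s)
print("salient indices:", [i for i, f in enumerate(flags) if f])
```

Because the threshold depends only on past values, the decision can be made as each new saliency value arrives, which is what enables online operation.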
Finally, we analyse the robustness of all the algorithms presented in this thesis against environmental noise. We use noises of different natures, from stationary noise to noises pre-recorded in real environments such as cafeterias, train stations, etc. The results suggest that, across different signal-to-noise ratios, the most robust algorithm is Echoic
Log-surprise, since its detection capabilities are the least influenced by noise.

Programa de Doctorado en Multimedia y Comunicaciones, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: Fernando Díaz de María (chair), Rubén Solera Ureña (secretary), José Luis Pérez Córdob (examiner)
Eddy current defect response analysis using sum of Gaussian methods
This dissertation studies methods to automatically detect and approximate eddy current differential coil defect signatures as a summed collection of Gaussian functions (SoG). Datasets varying in material, defect size, inspection frequency, and coil diameter were investigated. Dimensionally reduced representations of the defect responses were obtained using common existing reduction methods and novel SoG-based enhancements to them. The efficacy of the SoG-enhanced representations was studied using common interpretable machine learning (ML) classifier designs, with the SoG representations showing significant improvement on common analysis metrics.
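The SoG approximation itself can be sketched as a nonlinear least-squares fit of amplitude/center/width triplets. The two-term model and synthetic bipolar "defect response" below are illustrative assumptions; the dissertation's automated detection and term selection are not reproduced here.

```python
# Sketch: fit a sum-of-Gaussians (SoG) model to a defect-like signature.
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(x, *params):
    """params = (a1, mu1, s1, a2, mu2, s2, ...): one triplet per Gaussian."""
    y = np.zeros_like(x)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y = y + a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return y

x = np.linspace(-5, 5, 400)
true = (1.0, -1.0, 0.6, -0.8, 1.2, 0.9)   # bipolar, like a differential coil
rng = np.random.default_rng(3)
y = sum_of_gaussians(x, *true) + rng.normal(0, 0.01, x.size)

p0 = (0.5, -0.5, 1.0, -0.5, 0.5, 1.0)     # rough initial guess
popt, _ = curve_fit(sum_of_gaussians, x, y, p0=p0)
rmse = np.sqrt(np.mean((sum_of_gaussians(x, *popt) - y) ** 2))
print(f"fit RMSE: {rmse:.4f}")
```

The fitted triplets form the compact, dimensionally reduced representation that the downstream interpretable classifiers consume.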
Hierarchical Bayesian Data Fusion Using Autoencoders
In this thesis, a novel method for tracker fusion is proposed and evaluated for vision-based tracking. This work combines three distinct popular techniques into a recursive Bayesian estimation algorithm. First, semi-supervised learning approaches are used to partition data and to train a deep neural network that captures normal visual tracking operation and detects anomalous data. We compare various methods by examining their receiver operating characteristic (ROC) curves, which represent the trade-off between specificity and sensitivity at various detection threshold levels. Next, we incorporate the trained neural networks into an existing data fusion algorithm to replace its observation weighting mechanism, which is based on the Mahalanobis distance. We evaluate different semi-supervised learning architectures to determine which is best for our problem. We evaluated the proposed algorithm on the OTB-50 benchmark dataset and compared its performance to that of the constituent trackers as well as to previous fusion approaches. In future work, the proposed method is to be incorporated into an autonomous following unmanned aerial vehicle (UAV).
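The anomaly-gating idea can be sketched as follows: train an autoencoder on "normal" observation vectors and use reconstruction error as the anomaly score that replaces the Mahalanobis-distance weighting. A small linear MLPRegressor reconstructing its input stands in for the deep autoencoder; the feature dimensionality and data model are assumptions.

```python
# Sketch: autoencoder reconstruction error as an anomaly score for fusion.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# "Normal" tracker observations live near a low-dimensional subspace.
basis = rng.normal(size=(3, 16))
normal = rng.normal(size=(500, 3)) @ basis + rng.normal(0, 0.05, (500, 16))
anomalous = rng.normal(0, 2.0, (50, 16))   # off-subspace observations

# Bottleneck (4 units) forces the network to learn the normal subspace.
ae = MLPRegressor(hidden_layer_sizes=(4,), activation="identity",
                  solver="lbfgs", max_iter=2000, random_state=0)
ae.fit(normal, normal)                     # train to reconstruct the input

def recon_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

err_normal = recon_error(normal)
err_anom = recon_error(anomalous)
print(f"mean error  normal: {err_normal.mean():.3f}  "
      f"anomalous: {err_anom.mean():.3f}")
```

In the fusion algorithm, a tracker observation with high reconstruction error would receive a low weight, just as a large Mahalanobis distance does in the original scheme.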
Thermography data fusion and non-negative matrix factorization for the evaluation of cultural heritage objects and buildings
The application of thermal and infrared technology in different areas of research is increasing considerably. These applications include nondestructive testing, medical analysis (computer-aided diagnosis/detection, CAD), and arts and archeology, among many others. In arts and archeology, infrared technology contributes significantly to finding defects in possibly impaired regions, through a wide range of thermographic experiments and infrared methods. The approach proposed here applies known factor analysis methods to thermography to obtain thermal features: standard nonnegative matrix factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1), standard NMF optimized by a nonnegative least squares active-set algorithm (SNMF2), and eigen-decomposition approaches such as principal component analysis (PCA) and candid covariance-free incremental PCA. These methods are usually applied as preprocessing before clustering for the segmentation of possible defects; in addition, a wavelet-based data fusion combines the output of each method with PCA to increase the accuracy of the algorithm. Quantitative assessment of these approaches indicates considerable segmentation quality along with reasonable computational complexity, showing promising performance and confirming the outlined properties. In particular, a polychromatic wooden statue, a fresco, a painting on canvas, and a building were analyzed using the above-mentioned methods, and defect (or target) region segmentation accuracies of up to 71.98%, 57.10%, 49.27%, and 68.53% were obtained, respectively.
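The matrix-factorization step can be sketched by stacking a thermal image sequence into a nonnegative (frames × pixels) matrix and factoring it with NMF, so each spatial component can capture a candidate defect region. The synthetic 8-frame, 32×32 sequence, the rank of 2, and the defect location are illustrative assumptions.

```python
# Sketch: NMF on a thermal sequence so components capture spatial patterns.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
h, w, frames = 32, 32, 8

# Background cooling plus a more slowly decaying "defect" blob at (20, 12).
yy, xx = np.mgrid[0:h, 0:w]
defect = np.exp(-(((yy - 20) ** 2 + (xx - 12) ** 2) / 20.0))
seq = np.stack([
    0.8 ** t * np.ones((h, w)) + 0.95 ** t * defect + rng.uniform(0, 0.01, (h, w))
    for t in range(frames)
])

X = seq.reshape(frames, h * w)            # frames × pixels, nonnegative
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)                # temporal mixing coefficients
H = model.components_.reshape(2, h, w)    # spatial basis images

# Pick the component most concentrated at the defect center.
comp = np.argmax([H[k, 20, 12] / (H[k].mean() + 1e-9) for k in range(2)])
peak = np.unravel_index(H[comp].argmax(), (h, w))
print("defect-like component peaks at", peak)
```

In the paper's pipeline, such factor images are the preprocessing input to clustering-based defect segmentation, optionally fused with PCA features via wavelets.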