Nondestructive Testing (NDT)
The aim of this book is to collect the newest contributions by eminent authors in the field of NDT-SHM, both at the material and structure scale. It therefore provides novel insight at experimental and numerical levels on the application of NDT to a wide variety of materials (concrete, steel, masonry, composites, etc.) in the field of Civil Engineering and Architecture.
Multi-sensor data fusion in mobile devices for the identification of Activities of Daily Living
Following the recent advances in technology and the growing use of mobile devices such as
smartphones, several solutions may be developed to improve the quality of life of users in the
context of Ambient Assisted Living (AAL). Mobile devices have different available sensors, e.g.,
accelerometer, gyroscope, magnetometer, microphone and Global Positioning System (GPS)
receiver, which allow the acquisition of physical and physiological parameters for the
recognition of different Activities of Daily Living (ADL) and the environments in which they are
performed. The definition of ADL covers a well-known set of basic self-care tasks, based on the types of skills that people usually learn in early childhood, including feeding, bathing, dressing, grooming, walking, running, jumping, climbing stairs, sleeping, watching TV, working, listening to music, cooking, eating and others. In the context of AAL,
some individuals (henceforth called user or users) need particular assistance, either because
the user has some sort of impairment, or because the user is old, or simply because users
need/want to monitor their lifestyle. The research and development of systems that provide particular assistance to people is increasing in many areas of application. In particular, in the future, the recognition of ADL will be an important element for the development of a personal digital life coach, providing assistance to different types of users. To support the recognition of ADL, the surrounding environments should also be recognized to increase the reliability of these systems.
The main focus of this Thesis is the research on methods for the fusion and classification of the
data acquired by the sensors available in off-the-shelf mobile devices in order to recognize ADL
in near real-time, taking into account the large diversity of the capabilities and
characteristics of the mobile devices available in the market. In order to achieve this objective,
this Thesis started with the review of the existing methods and technologies to define the
architecture and modules of the method for the identification of ADL. With this review and
based on the knowledge acquired about the sensors available in off-the-shelf mobile devices,
a set of tasks that may be reliably identified was defined as a basis for the remaining research
and development to be carried out in this Thesis. This review also identified the main stages
for the development of a new method for the identification of the ADL using the sensors
available in off-the-shelf mobile devices; these stages are data acquisition, data processing,
data cleaning, data imputation, feature extraction, data fusion and artificial intelligence. One
of the challenges is related to the different types of data acquired from the different sensors,
but other challenges were found, including the presence of environmental noise, the positioning
of the mobile device during the daily activities, the limited capabilities of the mobile devices
and others. Based on the acquired data, the processing was performed by implementing data cleaning and feature extraction methods, in order to define a new framework for the recognition of ADL. Data imputation methods were not applied, because at this stage of the research they do not influence the results of the identification of the ADL and environments, as the features are extracted from a set of data acquired during a defined time interval and there are no missing values during this stage. The joint selection of
the set of usable sensors and the identifiable set of tasks will then allow the development of a framework that, considering multi-sensor data fusion technologies and context awareness, in coordination with other information available from the user context, such as his/her agenda and the time of day, makes it possible to establish a profile of the tasks that the user performs on a regular day. The classification method and the algorithm for the fusion of the features for the recognition of ADL and their environments need to be deployed on a machine with substantial computational power, while the mobile device that will use the created framework can perform the identification of the ADL with much less computational power. Based on the
results reported in the literature, the method chosen for the recognition of the ADL is composed
of three variants of Artificial Neural Networks (ANN), including simple Multilayer Perceptron
(MLP) networks, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural
Networks (DNN).
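To make the choice of classifiers concrete, the following sketch shows how the three ANN variants could be set up. It assumes scikit-learn, where all three variants are expressed through the same MLPClassifier class and differ only in depth and solver; the layer sizes and training parameters are illustrative assumptions rather than the configurations used in the Thesis.

```python
# Hypothetical sketch of the three ANN variants used for ADL recognition.
# Library choice (scikit-learn) and all hyperparameters are assumptions,
# not values taken from the thesis.
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def build_classifiers():
    """Return the three ANN variants as scikit-learn pipelines."""
    # Simple MLP: one small hidden layer.
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    # Feedforward network trained with backpropagation (SGD) over two layers.
    fnn = MLPClassifier(hidden_layer_sizes=(64, 32), solver="sgd",
                        learning_rate_init=0.01, max_iter=2000, random_state=0)
    # Deeper network standing in for the DNN variant.
    dnn = MLPClassifier(hidden_layer_sizes=(128, 64, 32, 16), solver="adam",
                        max_iter=2000, random_state=0)
    # Standardise the extracted features before feeding each network.
    return {name: make_pipeline(StandardScaler(), clf)
            for name, clf in [("MLP", mlp), ("FNN", fnn), ("DNN", dnn)]}

# Usage: fit each variant on a feature matrix X (one row per time window)
# and a label vector y (ADL classes), then compare validation accuracy.
```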
Data acquisition can be performed with standard methods. After the acquisition, the data must
be processed at the data processing stage, which includes data cleaning and feature extraction
methods. The data cleaning method used for the motion and magnetic sensors is a low-pass filter, applied to reduce the acquired noise, while for the acoustic data the Fast Fourier Transform
(FFT) was applied to extract the different frequencies. When the data is clean, several features
are then extracted based on the types of sensors used, including the mean, standard deviation,
variance, maximum value, minimum value and median of raw data acquired from the motion
and magnetic sensors; the mean, standard deviation, variance and median of the maximum
peaks calculated with the raw data acquired from the motion and magnetic sensors; the five
greatest distances between the maximum peaks calculated with the raw data acquired from
the motion and magnetic sensors; the mean, standard deviation, variance, median and 26 Mel-
Frequency Cepstral Coefficients (MFCC) of the frequencies obtained with FFT based on the raw
data acquired from the microphone; and the distance travelled, calculated with the data acquired from the GPS receiver. After the extraction of the features, these are grouped into different datasets for the application of the ANN methods, in order to discover the method and dataset that report the best results. The classification stage was incrementally developed,
starting with the identification of the most common ADL (i.e., walking, running, going upstairs,
going downstairs and standing activities) with motion and magnetic sensors. Next, the
environments were identified with acoustic data, i.e., bedroom, bar, classroom, gym, kitchen,
living room, hall, street and library. After the environments are recognized, and based on the
different sets of sensors commonly available in the mobile devices, the data acquired from the
motion and magnetic sensors were combined with the recognized environment in order to
differentiate some activities without motion, i.e., sleeping and watching TV. The number of recognized activities at this stage was increased with the use of the distance travelled, extracted from the GPS receiver data, which also allows the recognition of the driving activity.
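As an illustration of the processing stage described above, the sketch below applies a low-pass filter to a motion or magnetic sensor signal, extracts the statistical and peak-based features, and computes magnitude-spectrum statistics for an acoustic window via the FFT. The sampling rate, filter order, cutoff and window length are assumptions, and the MFCC and GPS-distance features are omitted for brevity.

```python
# Hypothetical sketch of the data cleaning and feature extraction stage.
# Sampling rate, filter cutoff and window length are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50.0          # assumed sampling rate of the motion/magnetic sensors (Hz)
CUTOFF = 5.0       # assumed low-pass cutoff frequency (Hz)

def clean_motion(signal):
    """Low-pass filter a 1-D motion or magnetic sensor signal to reduce noise."""
    b, a = butter(4, CUTOFF / (FS / 2.0), btype="low")
    return filtfilt(b, a, signal)

def motion_features(signal):
    """Statistical features of a cleaned signal plus peak-based features."""
    feats = [signal.mean(), signal.std(), signal.var(),
             signal.max(), signal.min(), np.median(signal)]
    peaks, _ = find_peaks(signal)
    peak_vals = signal[peaks] if peaks.size else np.array([0.0])
    feats += [peak_vals.mean(), peak_vals.std(), peak_vals.var(),
              np.median(peak_vals)]
    # Five greatest distances (in samples) between consecutive maximum peaks.
    gaps = np.sort(np.diff(peaks))[::-1] if peaks.size > 1 else np.zeros(5)
    top5 = gaps[:5]
    feats += list(np.pad(top5, (0, max(0, 5 - top5.size))))
    return np.array(feats)

def acoustic_features(signal):
    """Magnitude-spectrum statistics of an audio window (MFCCs omitted here)."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([spectrum.mean(), spectrum.std(),
                     spectrum.var(), np.median(spectrum)])

# Usage: for each fixed-length window of accelerometer data `acc`,
# features = motion_features(clean_motion(acc))
```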
After the implementation of the three classification methods with different numbers of
iterations, datasets and other configurations on a machine with high processing
capabilities, the reported results proved that the best method for the recognition of the most
common ADL and activities without motion is the DNN method, but the best method for the
recognition of environments is the FNN method with Backpropagation. Depending on the
number of sensors used, this implementation reports a mean accuracy between 85.89% and
89.51% for the recognition of the most common ADL, equal to 86.50% for the recognition of environments, and equal to 100% for the recognition of activities without motion, reporting
an overall accuracy between 85.89% and 92.00%.
The last stage of this research work was the implementation of the structured framework for
the mobile devices, where it was verified that the FNN method requires high processing power for the recognition of environments, and the results reported with the mobile application are lower than those reported with the machine with high processing capabilities. Thus, the
DNN method was also implemented for the recognition of the environments with the mobile
devices. Finally, the results reported with the mobile devices show an accuracy between 86.39%
and 89.15% for the recognition of the most common ADL, equal to 45.68% for the recognition
of environments, and equal to 100% for the recognition of activities without motion, reporting
an overall accuracy between 58.02% and 89.15%.
Compared with the literature, the results returned by the implemented framework show only
a residual improvement. However, the results reported in this research work comprise the
identification of more ADL than the ones described in other studies. The improvement in the
recognition of ADL based on the mean of the accuracies is equal to 2.93%, but the maximum
number of ADL and environments previously recognized was 13, while the number of ADL and
environments recognized with the framework resulting from this research is 16. In conclusion,
the framework developed has a mean improvement of 2.93% in the accuracy of the recognition
for a larger number of ADL and environments than previously reported.
In the future, the achievements reported by this PhD research may be considered as a starting point for the development of a personal digital life coach, but the number of ADL and environments recognized by the framework should be increased, the experiments should be performed with different types of devices (i.e., smartphones and smartwatches), and data imputation and other machine learning methods should be explored in order to attempt to increase the reliability of the framework for the recognition of ADL and their environments.
Quantifying the Effects of Knee Joint Biomechanics on Acoustical Emissions
The knee is one of the most frequently injured body parts, with 18 million patients seen in clinics every year. Because the knee is a weight-bearing joint, it is prone to pathologies such as osteoarthritis and ligamentous injuries. Existing technologies for monitoring knee health can provide accurate assessment and diagnosis of acute injuries. However, they are mainly confined to clinical or laboratory settings, are time-consuming and expensive, and are not well suited for longitudinal monitoring. Developing a novel technology for joint health assessment beyond the clinic can further provide insights into the rehabilitation process and the quantitative usage of the knee joint.
To better understand the underlying properties and fundamentals of joint sounds, this research will investigate the relationship between changes in the knee joint structure (i.e. structural damage and joint contact force) and joint acoustic emissions (JAEs), while developing novel techniques for analyzing these sounds. We envision that the possibility of quantifying joint structure and joint load usage from these acoustic sensors would advance the potential of JAEs as the next biomarker of joint health that can be captured with wearable technology. First, we developed a novel processing technique for JAEs that quantifies the structural change of the knee in injured athletes and human lower-limb cadaver models. Second, we assessed whether JAEs can detect an increase in the mechanical stress on the knee joint using an unsupervised graph mining algorithm. Lastly, we quantified the directional bias of the load distribution between the medial and lateral compartments using JAEs. Understanding and monitoring the quantitative usage of knee loads in daily activities can broaden the implications for longitudinal joint health monitoring.
Enabling Context-Awareness in Mobile Systems via Multi-Modal Sensing
The inclusion of rich sensors on modern smartphones has changed mobile phones from simple communication devices to powerful human-centric sensing platforms. Similar trends are influencing other personal gadgets such as tablets, cameras, and wearable devices like the Google Glass. Together, these sensors can provide a high-resolution view of the user's context, ranging from simple information like locations and activities, to high-level inferences about the users' intention, behavior, and social interactions. Understanding such context can help solve existing system-side challenges and eventually enable a new world of real-life applications. In this thesis, we propose to learn human behavior via multi-modal sensing. The intuition is that human behaviors leave footprints on different sensing dimensions - visual, acoustic, motion and in cyber space. By collaboratively analyzing these footprints, the system can obtain valuable insights about the user. We show that the analysis results can lead to a series of applications including capturing life-logging videos, tagging user-generated photos and enabling new ways for human-object interactions. Moreover, the same intuition may potentially be applied to enhance existing system-side functionalities - offloading, prefetching and compression.
Hidden Markov Models
Hidden Markov Models (HMMs), although known for decades, have gained great popularity in recent years and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neurosciences, computational biology, bioinformatics, seismology, environment protection and engineering. I hope that the reader will find this book useful and helpful for their own research.
Exploring Audio Sensing in Detecting Social Interactions Using Smartphone Devices
In recent years, the fast proliferation of smartphone devices has provided powerful and portable methodologies for integrating sensing systems which can run continuously and provide feedback in real time. The mobile crowd-sensing of human behaviour is an emerging computing paradigm that poses the challenge of sensing everyday social interactions performed by people who carry smartphone devices with them. Within this paradigm, typical smartphone sensors, such as the microphone, are used to infer social relationships between people in diverse social settings, where environmental factors can be dynamic and the infrastructure of buildings can vary.
The typical approaches to detecting social interactions between people consider the use of co-location as a proxy for real-world interactions. Such approaches can under-perform in challenging situations where multiple social interactions occur within close proximity to each other, for example when people are queueing at the supermarket but are not part of the same social interaction. Other approaches have the limitation that all participants of a social interaction must carry a smartphone device with them at all times and each smartphone must have the sensing app installed. The problem here is the feasibility of the sensing system, which relies heavily on each participant's smartphone acting as a node within a social graph, connected together with weighted edges of proximity between the devices; when users uninstall the app or disable background sensing, the system is unable to accurately determine the correct number of participants.
In this thesis, we present two novel approaches to detecting co-located social interactions using smartphones. The first relies on the use of WiFi signals and audio signals
to distinguish social groups interacting within a few meters from each other with 88% precision. We orchestrated preliminary experiments using WiFi as a proxy for co-location between people who are socially interacting. Initial results showed that in more challenging scenarios, WiFi is not accurate enough to determine if people are socially interacting within the same social group. We then made use of audio as a second modality to capture the sound patterns of conversations to identify and segment social groups within close proximity to each other. Through a range of real-world experiments (social interactions in meeting scenarios, coffee shop scenarios, conference scenarios), we demonstrate a technique that utilises WiFi fingerprinting, along with sound fingerprinting to identify these social groups. We built a system which performs well, and then optimized the power consumption and improved the performance to 88% precision in the most challenging scenarios using duty cycling and data averaging techniques.
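A rough sketch of how WiFi and sound fingerprints might be combined to separate nearby social groups is given below; the feature vectors, cosine similarity measure and thresholds are illustrative assumptions and do not reproduce the system actually built in this work.

```python
# Hypothetical sketch: grouping co-located, interacting devices by combining
# WiFi fingerprint similarity with audio fingerprint similarity.
# Thresholds and feature choices are illustrative assumptions only.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def same_group(dev_a, dev_b, wifi_thr=0.9, audio_thr=0.8):
    """Two devices are placed in the same social group only if both their
    WiFi RSSI vectors (co-location) and audio spectra (same conversation)
    are similar enough."""
    wifi_sim = cosine(dev_a["wifi"], dev_b["wifi"])
    audio_sim = cosine(dev_a["audio"], dev_b["audio"])
    return wifi_sim >= wifi_thr and audio_sim >= audio_thr

def group_devices(devices, **thr):
    """Greedy grouping: assign each device to the first fully compatible group."""
    groups = []
    for dev in devices:
        for g in groups:
            if all(same_group(dev, other, **thr) for other in g):
                g.append(dev)
                break
        else:
            groups.append([dev])
    return groups

# Usage: each device is a dict with a "wifi" vector (RSSI per access point)
# and an "audio" vector (e.g. an averaged magnitude spectrum per time window).
```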
The second approach explores the feasibility of detecting social interactions without the need for all social contacts to carry a social sensing device. This work explores the use of supervised and unsupervised Deep Learning techniques before concluding on the use of an Autoencoder model to perform a Speaker Identification task. We demonstrate how machine learning can be used with the audio data collected from a single device as a speaker identification framework. Speech from people is used as the input to our Autoencoder model and then classified against a list of "social contacts" to determine if the user has spoken to a person before or not. By doing this, the system can count the number of social contacts belonging to the user, and develop a database of common social contacts. Through the use of 100 randomly generated social conversations and the use of state-of-the-art Deep Learning techniques, we demonstrate how this system can accurately distinguish new and existing speakers from a data set of voices, to count the number of daily social interactions a user encounters with a precision of 75%. We then optimize the model using Hyperparameter Optimization to ensure that it is best suited for the task. Unlike most systems in the literature, this approach works without the need to modify the existing infrastructure of a building, and without all participants needing to install the same app.
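The speaker identification idea could be sketched along the following lines: an autoencoder is trained to reconstruct voice feature vectors, its bottleneck embedding represents each speaker, and a new utterance is matched against enrolled social contacts by embedding distance. The PyTorch implementation, layer sizes and threshold below are assumptions rather than the model developed in this thesis.

```python
# Hypothetical sketch of autoencoder-based speaker identification.
# Architecture sizes, distance threshold and the use of PyTorch are assumptions.
import torch
import torch.nn as nn

class SpeakerAutoencoder(nn.Module):
    def __init__(self, n_features=40, n_embedding=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, n_embedding))
        self.decoder = nn.Sequential(nn.Linear(n_embedding, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train(model, features, epochs=50, lr=1e-3):
    """Train the autoencoder to reconstruct voice feature vectors (e.g. MFCCs)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(features)
        loss = loss_fn(recon, features)
        loss.backward()
        opt.step()
    return model

def identify(model, utterance, contacts, threshold=1.0):
    """Return the closest enrolled contact, or None for a new speaker."""
    with torch.no_grad():
        _, z = model(utterance)
        best, best_dist = None, float("inf")
        for name, enrolled_z in contacts.items():
            dist = torch.dist(z, enrolled_z).item()
            if dist < best_dist:
                best, best_dist = name, dist
        return best if best_dist < threshold else None
```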
Recent Advances in Signal Processing
The signal processing task is a very critical issue in the majority of new technological inventions and challenges in a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian. They have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity
Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy
The proceedings of the MAVEBA Workshop, held on a biennial basis, collect the scientific papers presented both as oral and poster contributions during the conference. The main subjects are: development of theoretical and mechanical models as an aid to the study of main phonatory dysfunctions, as well as the biomedical engineering methods for the analysis of voice signals and images, as a support to clinical diagnosis and classification of vocal pathologies. The Workshop has the sponsorship of: Ente Cassa Risparmio di Firenze, COST Action 2103, Biomedical Signal Processing and Control Journal (Elsevier Eds.), IEEE Biomedical Engineering Soc. Special Issues of International Journals have been, and will be, published, collecting selected papers from the conference.
Machine learning for automatic analysis of affective behaviour
The automated analysis of affect has been gaining rapidly increasing attention by researchers over the past two decades, as it constitutes a fundamental step towards achieving next-generation computing technologies and integrating them into everyday life (e.g. via affect-aware, user-adaptive interfaces, medical imaging, health assessment, ambient intelligence etc.). The work presented in this thesis focuses on several fundamental problems manifesting in the course towards the achievement of reliable, accurate and robust affect sensing systems. In more detail, the motivation behind this work lies in recent developments in the field, namely (i) the creation of large, audiovisual databases for affect analysis in the so-called ''Big-Data`` era, along with (ii) the need to deploy systems under demanding, real-world conditions. These developments led to the requirement for the analysis of emotion expressions continuously in time, instead of merely processing static images, thus unveiling the wide range of temporal dynamics related to human behaviour to researchers. The latter entails another deviation from the traditional line of research in the field: instead of focusing on predicting posed, discrete basic emotions (happiness, surprise etc.), it became necessary to focus on spontaneous, naturalistic expressions captured under settings more proximal to real-world conditions, utilising more expressive emotion descriptions than a set of discrete labels. To this end, the main motivation of this thesis is to deal with challenges arising from the adoption of continuous dimensional emotion descriptions under naturalistic scenarios, considered to capture a much wider spectrum of expressive variability than basic emotions, and most importantly model emotional states which are commonly expressed by humans in their everyday life. In the first part of this thesis, we attempt to demystify the quite unexplored problem of predicting continuous emotional dimensions. This work is amongst the first to explore the problem of predicting emotion dimensions via multi-modal fusion, utilising facial expressions, auditory cues and shoulder gestures. A major contribution of the work presented in this thesis lies in proposing the utilisation of various relationships exhibited by emotion dimensions in order to improve the prediction accuracy of machine learning methods - an idea which has been taken on by other researchers in the field since. In order to experimentally evaluate this, we extend methods such as the Long Short-Term Memory Neural Networks (LSTM), the Relevance Vector Machine (RVM) and Canonical Correlation Analysis (CCA) in order to exploit output relationships in learning. As it is shown, this increases the accuracy of machine learning models applied to this task.
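To illustrate the general idea of exploiting output relationships (without reproducing the extended LSTM, RVM or CCA models of this thesis), the sketch below uses a simple two-stage stacking scheme in which each emotion dimension is re-predicted using the other dimension's first-stage output as an additional feature; the Ridge regressor and data layout are assumptions chosen only for illustration.

```python
# Hypothetical illustration of exploiting output relationships between
# emotion dimensions (e.g. valence and arousal) via two-stage stacking.
# Regressor choice (Ridge) and feature layout are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def fit_with_output_relations(X, y_val, y_aro):
    """Stage 1 predicts each dimension from features alone; stage 2 refines
    each dimension using the other dimension's stage-1 prediction."""
    stage1_val = Ridge().fit(X, y_val)
    stage1_aro = Ridge().fit(X, y_aro)
    pred_val = stage1_val.predict(X)
    pred_aro = stage1_aro.predict(X)
    stage2_val = Ridge().fit(np.column_stack([X, pred_aro]), y_val)
    stage2_aro = Ridge().fit(np.column_stack([X, pred_val]), y_aro)
    return stage1_val, stage1_aro, stage2_val, stage2_aro

def predict_with_output_relations(models, X):
    """Apply the two-stage models to new feature windows."""
    stage1_val, stage1_aro, stage2_val, stage2_aro = models
    pred_val = stage2_val.predict(np.column_stack([X, stage1_aro.predict(X)]))
    pred_aro = stage2_aro.predict(np.column_stack([X, stage1_val.predict(X)]))
    return pred_val, pred_aro
```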
The annotation of continuous dimensional emotions is a tedious task, highly prone to the influence of various types of noise. Performed real-time by several annotators (usually experts), the annotation process can be heavily biased by factors such as subjective interpretations of the emotional states observed, the inherent ambiguity of labels related to human behaviour, the varying reaction lags exhibited by each annotator as well as other factors such as input device noise and annotation errors. In effect, the annotations manifest a strong spatio-temporal annotator-specific bias. Failing to properly deal with annotation bias and noise leads to an inaccurate ground truth, and therefore to ill-generalisable machine learning models. This deems the proper fusion of multiple annotations, and the inference of a clean, corrected version of the ``ground truth'' as one of the most significant challenges in the area. A highly important contribution of this thesis lies in the introduction of Dynamic Probabilistic Canonical Correlation Analysis (DPCCA), a method aimed at fusing noisy continuous annotations. By adopting a private-shared space model, we isolate the individual characteristics that are annotator-specific and not shared, while most importantly we model the common, underlying annotation which is shared by annotators (i.e., the derived ground truth). By further learning temporal dynamics and incorporating a time-warping process, we are able to derive a clean version of the ground truth given multiple annotations, eliminating temporal discrepancies and other nuisances.
The integration of the temporal alignment process within the proposed private-shared space model deems DPCCA suitable for the problem of temporally aligning human behaviour; that is, given temporally unsynchronised sequences (e.g., videos of two persons smiling), the goal is to generate the temporally synchronised sequences (e.g., the smile apex should co-occur in the videos). Temporal alignment is an important problem for many applications where multiple datasets need to be aligned in time. Furthermore, it is particularly suitable for the analysis of facial expressions, where the activation of facial muscles (Action Units) typically follows a set of predefined temporal phases. A highly challenging scenario is when the observations are perturbed by gross, non-Gaussian noise (e.g., occlusions), as is often the case when analysing data acquired under real-world conditions. To account for non-Gaussian noise, a robust variant of Canonical Correlation Analysis (RCCA) for robust fusion and temporal alignment is proposed. The model captures the shared, low-rank subspace of the observations, isolating the gross noise in a sparse noise term. RCCA is amongst the first robust variants of CCA proposed in literature, and as we show in related experiments outperforms other, state-of-the-art methods for related tasks such as the fusion of multiple modalities under gross noise.
Beyond private-shared space models, Component Analysis (CA) is an integral component of most computer vision systems, particularly in terms of reducing the usually high-dimensional input spaces in a meaningful manner pertaining to the task at hand (e.g., prediction, clustering). A final, significant contribution of this thesis lies in proposing the first unifying framework for probabilistic component analysis. The proposed framework covers most well-known CA methods, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), providing further theoretical insights into the workings of CA. Moreover, the proposed framework is highly flexible, enabling novel CA methods to be generated by simply manipulating the connectivity of latent variables (i.e. the latent neighbourhood). As shown experimentally, methods derived via the proposed framework outperform other equivalents in several problems related to affect sensing and facial expression analysis, while providing advantages such as reduced complexity and explicit variance modelling.