Smartphone-based human activity recognition
Joint supervision (cotutela) between Universitat Politècnica de Catalunya and Università degli Studi di Genova.

Human Activity Recognition (HAR) is a multidisciplinary research field that aims to gather data on people's behavior and their interaction with the environment in order to deliver valuable context-aware information. It has contributed to the development of human-centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which concentrate on improving people's quality of life.
The first stage of HAR requires making observations with ambient or wearable sensor technologies. In the wearable case, however, the search for pervasive, unobtrusive, low-powered, and low-cost devices has not yet been fully addressed. In this thesis, we explore the use of smartphones as an alternative platform for identifying physical activities. These self-contained, widely available devices are equipped with embedded sensors, powerful computing capabilities, and wireless communication technologies that make them highly suitable for this application. This work presents a series of contributions regarding the development of HAR systems with smartphones. First, we propose a fully operational system that recognizes six physical activities in real time while also taking into account the postural transitions that may occur between them. To achieve this, we cover research topics ranging from signal processing and feature selection of inertial data to Machine Learning approaches for classification. We employ two sensors (the accelerometer and the gyroscope) to collect inertial data. Their raw signals are the input to the system and are conditioned through filtering to reduce noise and allow the extraction of informative activity features. We also emphasize the study of Support Vector Machines (SVMs), one of the state-of-the-art Machine Learning techniques for classification, and reformulate several of the standard multiclass linear and non-linear methods to find the best trade-off between recognition performance, computational cost, and energy requirements, which are essential considerations in battery-operated devices such as smartphones.
In particular, we propose two multiclass SVMs for activity classification: a linear algorithm that allows control over the trade-off between dimensionality reduction and system accuracy; and a non-linear, hardware-friendly algorithm that uses only fixed-point arithmetic in the prediction phase and enables a reduction in model complexity while maintaining system performance. The efficiency of the proposed system is verified through extensive experimentation on a HAR dataset that we have generated and made publicly available. It comprises inertial data collected from a group of 30 participants who performed a set of common daily activities while carrying a smartphone as a wearable device. The results achieved in this research show that it is possible to perform HAR in real time with a precision near 97% with
smartphones. In this way, the proposed methodology can be employed in several higher-level applications that require HAR, such as ambulatory monitoring of the disabled and the elderly over periods of more than five days without a battery recharge. Moreover, the proposed algorithms can be adapted to other commercial wearable devices recently introduced in the market (e.g. smartwatches, phablets, and glasses), opening up new opportunities for developing practical and innovative HAR applications.
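The hardware-friendly idea above can be illustrated with a minimal sketch: a one-vs-all linear SVM whose trained weights are quantized to integers so that prediction uses only fixed-point arithmetic. The Q10 scale factor and all names are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

SCALE = 1 << 10  # Q10 fixed-point scale factor (assumed tuning value)

def quantize(weights, bias):
    """Convert float SVM parameters to fixed-point integers."""
    return (np.round(weights * SCALE).astype(np.int32),
            int(round(bias * SCALE)))

def fixed_point_decision(x_q, w_q, b_q):
    """Integer-only decision value; its sign matches the float SVM's
    sign up to quantization error. x_q is the feature vector in Q10."""
    # accumulate in int64 to avoid overflow; the result is in Q20
    return int(np.dot(x_q.astype(np.int64), w_q)) + b_q * SCALE

def predict(x, classifiers):
    """One-vs-all multiclass rule: pick the largest decision value."""
    x_q = np.round(np.asarray(x) * SCALE).astype(np.int32)
    scores = [fixed_point_decision(x_q, w_q, b_q)
              for (w_q, b_q) in classifiers]
    return int(np.argmax(scores))
```

On a microcontroller or phone, the integer dot product avoids floating-point operations in the prediction loop, which is where the energy savings discussed above come from.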
Leveraging Smartphone Sensor Data for Human Activity Recognition
Using smartphones for human activity recognition (HAR) has a wide range of applications, including healthcare, daily fitness recording, and alerting in anomalous situations. This study focuses on HAR based on smartphone-embedded sensors. The proposed system recognizes activities including walking, running, sitting, going upstairs, and going downstairs. Embedded sensors (a tri-axial accelerometer and a gyroscope) are employed for motion data collection. Both time-domain and frequency-domain features are extracted and analyzed. Our experimental results show that time-domain features are sufficient to recognize basic human activities. The system is implemented on the Android smartphone platform.
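As a minimal sketch (not the study's exact feature set), common time-domain features can be computed per window of tri-axial accelerometer data like this:

```python
import numpy as np

def time_domain_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration.
    Returns per-axis mean, per-axis std, and the signal magnitude area."""
    window = np.asarray(window, dtype=float)
    means = window.mean(axis=0)              # average acceleration per axis
    stds = window.std(axis=0)                # variability per axis
    sma = np.abs(window).sum(axis=1).mean()  # signal magnitude area (SMA)
    return np.concatenate([means, stds, [sma]])
```

Each sensor stream is typically split into short overlapping windows, and a vector like this is fed to the classifier for every window.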
While the focus has been on HAR systems based on a supervised learning approach, an incremental clustering algorithm is also investigated. The proposed unsupervised (clustering) activity detection scheme works incrementally, in two stages. In the first stage, streamed sensor data are processed by a single-pass clustering algorithm to generate pre-clustered results. In the second stage, the pre-clustered results are refined to form the final clusters; that is, the clusters are built incrementally by adding one cluster at a time. Experiments on smartphone sensor data for five basic human activities show that the proposed scheme achieves results comparable to traditional clustering algorithms while operating in a streaming, incremental manner.
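The first stage can be sketched with a classic single-pass ("leader") clusterer: each streamed sample either joins the nearest existing cluster or seeds a new one. The radius threshold is an assumed parameter; the thesis's actual algorithm may differ in detail.

```python
import numpy as np

class SinglePassClusterer:
    """Leader-style clustering: one pass, no stored history."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.centroids = []  # running cluster centers
        self.counts = []     # samples absorbed per cluster

    def partial_fit(self, x):
        """Assign one streamed sample; returns its cluster index."""
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                # incremental mean update of the matched centroid
                self.counts[i] += 1
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1
```

Because only centroids and counts are kept, memory stays constant regardless of how long the sensor stream runs, which is what makes the approach suitable for on-device use.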
To develop activity recognition systems that are more accurate and independent of the smartphone model, the effects of sensor differences across various smartphone models are investigated. We characterize the impairments that different embedded sensor models introduce in HAR applications, and propose outlier removal, interpolation, and filtering in the pre-processing stage as mitigating techniques. On datasets collected from four distinct smartphones, the proposed techniques show positive effects under 10-fold cross-validation, device-to-device validation, and leave-one-out validation, improving the performance of smartphone-based HAR.
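The three mitigating steps named above might look like the following sketch, with assumed parameter choices (median-based outlier removal, linear interpolation onto a uniform grid, and a moving-average low-pass filter):

```python
import numpy as np

def remove_outliers(t, x, z_thresh=3.5):
    """Drop samples whose modified z-score exceeds z_thresh."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-9
    keep = np.abs(0.6745 * (x - med) / mad) < z_thresh
    return t[keep], x[keep]

def resample_uniform(t, x, fs=50.0):
    """Interpolate onto a uniform grid so devices with different
    sampling rates produce comparable signals."""
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    return t_uniform, np.interp(t_uniform, t, x)

def moving_average(x, k=5):
    """Low-pass filter to suppress sensor-specific high-frequency noise."""
    return np.convolve(x, np.ones(k) / k, mode="same")
```

Resampling in particular matters for device-to-device validation: two phones sampling nominally at 50 Hz rarely produce aligned timestamps, so features extracted from raw streams are not directly comparable.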
By developing HAR systems based on a supervised learning approach, investigating a clustering-based incremental activity recognition system and its potential applications, and applying techniques to alleviate sensor-difference effects, a robust HAR system can be trained in either a supervised or an unsupervised way and adapted to multiple devices, with less dependence on particular sensor specifications.
A wearable real-time system for physical activity recognition and fall detection
This thesis designs and implements a wearable system to recognize physical activities and detect falls in real time. Recognizing people's physical activity has a broad range of applications, including helping people maintain their energy balance through health assessment and intervention tools, investigating the links between common diseases and levels of physical activity, and providing feedback to motivate individuals to exercise. In addition, fall detection has become a hot research topic due to the growing population over 65 throughout the world, as well as the serious injuries and problems caused by falls.
In this work, the Sun SPOT wireless sensor system is used as the hardware platform for recognizing physical activity and detecting falls. Sensors with tri-axial accelerometers collect acceleration data, which are then processed to extract useful information. Evaluation of various algorithms indicates that Naive Bayes works better than other popular algorithms in both accuracy and ease of implementation for this particular application.
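A Gaussian Naive Bayes classifier of the kind found effective here is compact enough to sketch in full (the features and data below are illustrative, not the thesis's):

```python
import numpy as np

class GaussianNB:
    """Gaussian Naive Bayes: per-class, per-feature normal densities
    under the feature-independence assumption."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9
                             for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # score(c) = log P(c) + sum_f log N(x_f; mu_cf, var_cf)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                          + (X[:, None, :] - self.mu) ** 2 / self.var)
        scores = np.log(self.prior) + log_lik.sum(axis=2)
        return self.classes[np.argmax(scores, axis=1)]
```

Training reduces to computing per-class means and variances, and prediction to a handful of multiplications per feature, which is why the model fits comfortably on a small sensor node like the Sun SPOT.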
The wearable system works in two modes, indoor and outdoor, depending on the user's demand. The Naive Bayes classifier was successfully implemented on the Sun SPOT sensor. Evaluation of the sampling rate indicates that 20 Hz is an optimal sampling frequency for this application. If only one sensor is available to recognize physical activity, the best location is the thigh. If two sensors are available, the combination of the left thigh and the right thigh is the best option, achieving 90.52% overall accuracy in the experiment.
For fall detection, a master sensor is attached to the chest and a slave sensor to the thigh to collect acceleration data. The results show that all falls are successfully detected. Forward, backward, leftward, and rightward falls are distinguished from standing and walking by the fall detection algorithm. Normal physical activities are not misclassified as falls, and no false alarms occurred while the user wore the system in daily life.
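A common pattern behind such detectors, sketched below as an illustration rather than the thesis's exact algorithm, is to flag a near-free-fall dip in acceleration magnitude followed shortly by a large impact; the two thresholds and the gap are assumed values.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def detect_fall(acc, free_fall_thresh=0.5 * G, impact_thresh=2.5 * G,
                max_gap=25):
    """acc: (n, 3) accelerations. Flags a fall when the magnitude dips
    near free fall and a large impact follows within max_gap samples."""
    mag = np.linalg.norm(acc, axis=1)
    dips = np.where(mag < free_fall_thresh)[0]
    for d in dips:
        if np.any(mag[d:d + max_gap] > impact_thresh):
            return True
    return False
```

Requiring both phases is what keeps normal activities (where magnitude stays near 1 g) from triggering false alarms.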
Recent Advances in Motion Analysis
The advances in the technology and methodology for human movement capture and analysis over the last decade have been remarkable. Besides acknowledged approaches for kinematic, dynamic, and electromyographic (EMG) analysis carried out in the laboratory, more recently developed devices, such as wearables, inertial measurement units, ambient sensors, and cameras or depth sensors, have been adopted on a wide scale. Furthermore, computational intelligence (CI) methods, such as artificial neural networks, have recently emerged as promising tools for the development and application of intelligent systems in motion analysis. Thus, the synergy of classic instrumentation with novel smart devices and techniques has created unique capabilities for the continuous monitoring of motor behaviors in fields such as clinics, sports, and ergonomics. However, real-time sensing, signal processing, human activity recognition, and the characterization and interpretation of motion metrics and behaviors from sensor data still represent a challenging problem, not only in laboratories but also at home and in the community. This book addresses open research issues related to the improvement of classic approaches and the development of novel technologies and techniques in the domain of motion analysis across all of its fields of application.
Deep learning inspired feature engineering for classifying tremor severity
Bio-signal pattern recognition systems can be affected by several factors that limit their performance and clinical translation. Among these, selecting the optimal feature extraction method, one that effectively exploits the interaction between temporal and spatial information, is the most prominent. Despite the potential of deep learning (DL) models for extracting temporal, spatial, or temporal-spatial information, they are typically restricted by their need for large amounts of training data. The deep wavelet scattering transform (WST) is a relatively recent advancement in the DL literature that replaces expensive convolutional neural network models with computationally less demanding methods. However, while some studies have used the WST to extract features from biological signals, it had not previously been investigated for electromyogram (EMG) and electroencephalogram (EEG) feature extraction. To test the hypothesis that the WST is useful for processing EMG and EEG signals, this study used a tremor dataset collected by the authors from people with tremor disorders. Specifically, the proposed work pursued three goals: (a) study the performance of extracting features from low-density EMG signals (8 channels) using the WST; (b) study the effect of extracting features from high-density EEG signals (33 channels) using the WST, and its robustness to changes in the spatial and temporal aspects of classification; and (c) classify tremor severity using the WST and compare the results with other well-known feature extraction approaches. Classification error rates were significantly reduced (by up to nearly 12%) compared with other feature sets.
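The structure of the scattering transform, a cascade of filter, modulus, and averaging stages, can be illustrated with a deliberately crude sketch. Real WST implementations (e.g. the kymatio library) use Morlet wavelet banks; the Haar-like band-pass kernels below are stand-ins, so this shows the architecture only, not a faithful WST.

```python
import numpy as np

def haar_bandpass(width):
    """Crude zero-mean band-pass kernel: +1 half, -1 half."""
    k = np.concatenate([np.ones(width), -np.ones(width)])
    return k / (2 * width)

def scattering_features(x, widths=(2, 4, 8)):
    """Two-layer scattering-style cascade over a 1-D signal."""
    feats = [np.mean(x)]                      # zeroth order: global average
    for w1 in widths:
        u1 = np.abs(np.convolve(x, haar_bandpass(w1), mode="same"))
        feats.append(u1.mean())               # first-order coefficient
        for w2 in widths:
            if w2 <= w1:
                continue                      # usual frequency ordering
            u2 = np.abs(np.convolve(u1, haar_bandpass(w2), mode="same"))
            feats.append(u2.mean())           # second-order coefficient
    return np.array(feats)
```

The modulus after each filtering step is what makes the coefficients stable to small time shifts, and the fixed filter bank is why, unlike a CNN, no large training set is needed to produce the features.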
IMUs: validation, gait analysis and system’s implementation
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics).

Falls are a prevalent problem in today's society, and their number has increased greatly over the last fifteen years. Some falls result in injuries, and the cost associated with treating them is high. This is a complex problem that requires several steps to be tackled. In particular, it is crucial to develop strategies that recognize the mode of locomotion, indicating the state of the subject in various situations: normal gait, the step before a fall (pre-fall), and the fall itself. This thesis therefore aims to develop a strategy capable of identifying these situations based on a wearable system that collects information on and analyzes human gait.
The strategy consists essentially in the construction and use of Associative Skill Memories (ASMs) as tools for recognizing locomotion modes. At an early stage, the capabilities of ASMs for the different modes of locomotion were studied, and a classifier based on a set of ASMs was developed. Subsequently, a neural network classifier based on deep learning, a technique now widely used in data classification, was applied to the same locomotion modes. These classifiers were implemented and compared, providing a tool with good accuracy in recognizing the modes of locomotion.
Implementing this strategy first required some essential supporting work. An inertial measurement unit (IMU) system was chosen for its strong potential to monitor ambulatory activities in the home environment. This system, which combines inertial and magnetic sensors and can monitor gait parameters in real time, was validated and calibrated. It was then used to collect data from healthy subjects who mimicked falls.
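Real-time gait monitoring with such IMUs typically involves fusing the gyroscope (accurate short-term, but drifting) with the accelerometer's gravity direction (noisy, but drift-free). A minimal complementary-filter sketch of that idea, with an assumed blending coefficient, is:

```python
import numpy as np

def complementary_filter(gyro, acc, dt=0.01, alpha=0.98):
    """gyro: (n,) angular rate about one axis, rad/s.
    acc: (n, 2) accelerations in the plane of rotation.
    Returns the filtered tilt angle (rad) per sample."""
    angle = np.arctan2(acc[0, 0], acc[0, 1])   # initialize from gravity
    out = np.empty(len(gyro))
    for i in range(len(gyro)):
        gyro_angle = angle + gyro[i] * dt             # integrate rate
        acc_angle = np.arctan2(acc[i, 0], acc[i, 1])  # gravity reference
        angle = alpha * gyro_angle + (1 - alpha) * acc_angle
        out[i] = angle
    return out
```

The high-pass/low-pass split (alpha toward the gyro, 1 − alpha toward the accelerometer) is what suppresses drift while keeping the response fast enough for gait-rate motion.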
Results show that the accuracy of the classifiers was quite acceptable; the neural-network-based classifier presented the best results, with 92.71% accuracy. As future work, it is proposed to apply these strategies in real time in order to prevent falls.
Digital phenotyping through multimodal, unobtrusive sensing
The growing adoption of multimodal wearable and mobile devices, such as smartphones and wrist-worn watches, has increased the collection of physiological and behavioural data at scale. For the first time, this digital phenotyping data enables researchers to make inferences about users' physical and mental health at population scale. However, translating these data into actionable insights requires computational approaches that turn unlabelled, multimodal time-series sensor data into validated measures that can be interpreted at scale.
This thesis describes the derivation of novel computational methods that leverage digital phenotyping data from wearable devices in large-scale populations to infer physical behaviours. These methods combine insights from signal processing, data mining, and machine learning with domain knowledge in physical activity and sleep epidemiology. First, the inference of sleep windows in free-living conditions through a heart-rate-sensing approach is explored. This algorithm is particularly valuable in the absence of ground truth or sleep diaries, given its simplicity, adaptability, and capacity for personalization. I then explore multistage sleep classification through combined movement and cardiac wearable sensing and machine learning. Further, I demonstrate that postural changes detected through wrist accelerometers can inform habitual behaviours and are valuable complements to traditional, intensity-based physical activity metrics. I then leverage the concomitant response of heart rate to physical activity, captured through multimodal wearable sensors, in a self-supervised training task. The resulting embeddings are shown to be useful for the downstream classification of demographic factors, BMI, energy expenditure, and cardiorespiratory fitness. Finally, I describe a deep learning model for the adaptive inference of cardiorespiratory fitness (VO2max) from wearable data in free-living conditions. I demonstrate the robustness of the model in a large UK population and show the model's adaptability by evaluating its performance in a subset of the population with repeated measures ~6 years after the original recordings.
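The heart-rate-based sleep-window idea can be illustrated with a deliberately simple stand-in (not the thesis's algorithm): sleep tends to show a sustained drop below a personalized heart-rate threshold, so one can look for the longest contiguous sub-threshold run. The quantile used as the threshold is an assumption.

```python
import numpy as np

def longest_low_hr_window(hr, quantile=0.4):
    """hr: per-minute heart rate. Returns (start, end) indices of the
    longest contiguous run below the person's own `quantile` HR."""
    thresh = np.quantile(hr, quantile)  # personalized threshold
    below = hr < thresh
    best = (0, 0)
    start = None
    for i, b in enumerate(np.append(below, False)):  # sentinel ends last run
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best
```

Because the threshold is derived from the individual's own distribution, the same rule adapts across people with very different resting heart rates, which is the sense in which such approaches support personalization.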
Together, this work increases the potential of multimodal wearable and mobile sensors for physical activity and behavioural inference in population studies. In particular, this thesis showcases the potential of using wearable devices to make valuable physical activity, sleep, and fitness inferences in large cohort studies. Given the nature of the data collected, and the fact that most of it is currently generated by commercial providers rather than research institutes, laying the foundations for responsible data governance and ethical use of these technologies will be critical to building trust and enabling the development of the field of digital phenotyping.

I was funded by GlaxoSmithKline and the Engineering and Physical Sciences Research Council. I was also supported by the Alan Turing Institute through their Enrichment Scheme.
Development and optimization of a low-cost myoelectric upper limb prosthesis
Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), 2022, Universidade de Lisboa, Faculdade de Ciências.

In recent years, the increase in the number of accidents and chronic diseases, such as diabetes, and the impoverishment of certain developing countries have contributed to a significant increase in the number of prosthesis users. The loss of a limb entails numerous changes in the daily life of each user, which are amplified when the user loses a hand; replacing the hand is therefore an urgent necessity. Upper limb prostheses allow the re-establishment of the physical and motor functions of the limb and reduce rates of depression, and the prosthetic industry has accordingly been reinventing itself and evolving. It is already possible to control a prosthesis through the user's myoelectric signals, a scheme known as pattern-recognition control. In addition, additive manufacturing technologies such as 3D printing have gained strength in prosthetics: they deliver the product to the user much faster and make the devices lighter. Despite these advances, the rejection rate of such devices is still high, since most prostheses available on the market are slow, expensive, and heavy. Because of that, academia and institutions have been investigating ways to overcome these limitations. Nevertheless, the dependence on the number of acquisition channels remains limiting, since most users do not have a large forearm surface area from which to acquire myoelectric signals.
This work intends to solve some of these problems and answer questions posed by industry and researchers. The main objective is to test whether a subject-independent, fast, and simple microcontroller-based controller is feasible. We recorded data from forty volunteers using the BIOPAC acquisition system. The signals were then filtered through two different processes: digital filtering, and wavelet threshold noise reduction. The signal was next divided into smaller windows (100 and 250 milliseconds) and thirteen time-domain features were extracted; MATLAB® was used throughout these steps. After extraction, three feature selection methods were used to optimize the classification process, in which machine learning algorithms were implemented. Classification was divided into levels. First, the classifier had to distinguish whether the volunteer was making some movement or was at rest. If movement was detected, the classifier had to determine, at a second level, whether the volunteer was moving only one finger or performing a movement involving the flexion of more than one finger (a grip). If the volunteer was performing a grip, at the third level the classifier had to identify whether it was a spherical or a triad grip. Finally, to understand the influence of the database on classification, two methods were used: cross-validation and split validation.
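The three-level decision cascade can be sketched as follows, using a nearest-centroid stand-in for the KNN/LDA classifiers actually compared; the labels and training data are illustrative only.

```python
import numpy as np

class NearestCentroid:
    """Minimal classifier: assign to the class with the closest mean."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0)
                                   for c in self.classes])
        return self

    def predict_one(self, x):
        d = np.linalg.norm(self.centroids - x, axis=1)
        return self.classes[np.argmin(d)]

def classify_hierarchy(x, rest_vs_move, finger_vs_grip, grip_type):
    """Level 1: rest vs movement; level 2: single finger vs grip;
    level 3: spherical vs triad grip."""
    if rest_vs_move.predict_one(x) == "rest":
        return "rest"
    if finger_vs_grip.predict_one(x) == "finger":
        return "single finger"
    return grip_type.predict_one(x)
```

Cascading keeps each stage a simple binary problem, so a different (and independently tuned) classifier can be used at each level, as was done here with LDA at the first level and KNN below it.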
After analysing the results, the e-NABLE Unlimbited arm was printed on an Original Prusa i3 MK3 using polylactic acid (PLA).
This dissertation showed that results obtained with the 250-millisecond window were better than those obtained with the 100-millisecond window. In general, the best classifier was K-Nearest Neighbours (KNN) with k=2, except at the first level, where LDA performed best. The best results were obtained at the first classification level, with an accuracy greater than 90%. Although the results for the second and third levels were close to 80%, it was concluded that a microcontroller dependent on only one acquisition channel is not feasible. These results agree with the anatomical characteristics, since the movements originate from the same muscle group. The cross-validation results were lower than those obtained with the training-test methodology, which allowed us to conclude that inter-subject variability significantly affects classification performance. Furthermore, both the dominant and non-dominant arms were used in this work, which also increased the discrepancy between signals. Indeed, the results showed that a microcontroller adaptable to all users is not achievable, so in the future the best path will be to customize the prototype. To test the implementation of a microcontroller in the printed model, it was necessary to design a support structure in SolidWorks to hold the motors used to flex the fingers and the Arduino that controls them. Consequently, the e-NABLE model was re-adapted, making it possible to develop a clinical training prototype. Even though it is a training prototype, it is lighter and cheaper than those on the market.
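The gap between cross-validation and train-test results above is the classic signature of inter-subject variability. A subject-wise (leave-one-subject-out) split makes it visible by holding out all windows from one volunteer per fold; a minimal sketch:

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) index pairs, one fold per subject."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]
        train = np.where(subject_ids != s)[0]
        yield train, test
```

Random splits let windows from the same subject land in both train and test sets, inflating accuracy relative to this protocol, which is the more honest estimate of how a subject-independent controller would perform on a new user.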
The objectives of this work have been fulfilled and many answers have been given. However,
there is always space for improvement. Although, this dissertation has some limitations, it certainly
contributed to clarify many of the doubts that still exist in the scientific community. Hopefully, it will
help to further develop the prosthetic industry.Nos últimos anos, o aumento do número de acidentes por doenças crónicas, como, por exemplo,
a diabetes, e o empobrecimento de determinados países em desenvolvimento têm contribuído para um
aumento significativo no número de utilizadores de próteses. A perda de um determinado membro
acarreta inúmeras mudanças no dia-a-dia de cada utilizador. Estas são amplificadas quando a perda é
referente à mão ou parte do antebraço. A mão é uma ferramenta essencial no dia-a-dia de cada ser
humano, uma vez que é através dela que são realizadas as atividades básicas, como, por exemplo, tomar
banho, lavar os dentes, comer, preparar refeições, etc. A substituição desta ferramenta é, portanto, uma
necessidade, não só porque permitirá restabelecer as funções físicas e motoras do membro superior,
como, também, reduzirá o nível de dependência destes utilizadores de outrem e, consequentemente, das
taxas de depressão. Para colmatar as necessidades dos utilizadores, a indústria prostética tem-se
reinventado e evoluído, desenvolvendo próteses para o membro superior cada vez mais sofisticadas.
Com efeito, já é possível controlar uma prótese através da leitura e análise dos sinais mioelétricos do
próprio utilizador, o que é denominado por muitos investigadores de controlo por reconhecimento de
padrões. Este tipo de controlo é personalizável e permite adaptar a prótese a cada utilizador. Para além
do uso de sinais elétricos provenientes do musculo do utilizador, a impressão 3D, uma técnica de
manufatura aditiva, têm ganho força no campo da prostética. Por conseguinte, nos últimos anos os
investigadores têm impresso inúmeros modelos com diferentes materiais que vão desde o uso de
termoplásticos, ao uso de materiais flexíveis. A utilização deste tipo de tecnologia permite, para além
de uma rápida entrega do produto ao utilizador, uma diminuição no tempo de construção de uma prótese
tornando-a mais leve e barata. Além do mais, a impressão 3D permite criar protótipos mais sustentáveis,
uma vez que existe uma redução na quantidade de material desperdiçado. Embora já existam inúmeras
soluções, a taxa de rejeição deste tipo de dispositivos é ainda bastante elevada, uma vez que a maioria
das próteses disponíveis no mercado, nomeadamente as mioelétricas, são lentas, caras e pesadas. Ainda
que existam alguns estudos que se debrucem neste tipo de tecnologias, bem como na sua evolução
científica, o número de elétrodos utilizados é ainda significativo. Desta forma, e, tendo em conta que a
maioria dos utilizadores não possuí uma área de superfície do antebraço suficiente para ser feita a
aquisição dos sinais mioelétricos, o trabalho feito pela academia não se revelou tão contributivo para a
indústria prostética como este prometia inicialmente.
This work aims to solve some of these problems and answer the questions most often raised by industry and researchers, so that in the future the number of users can grow, along with their satisfaction with the product. To that end, myoelectric signals were collected from forty volunteers using the BIOPAC acquisition system. After collection, the signals of six volunteers were filtered through two different processes: the first used digital filters, and the second applied the wavelet transform for noise reduction. The signal was then segmented into smaller windows of 100 and 250 milliseconds, and thirteen time-domain features were extracted.
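The segmentation and feature-extraction steps above can be sketched in Python. This is a minimal sketch with NumPy, assuming the signal has already been filtered; since the abstract does not enumerate the thirteen features, a common illustrative subset of time-domain EMG features is shown, and the sampling rate and signal are synthetic placeholders:

```python
import numpy as np

def window_signal(emg, fs, win_ms):
    """Split a 1-D (already filtered) EMG signal into non-overlapping windows."""
    n = int(fs * win_ms / 1000)
    n_windows = len(emg) // n
    return emg[:n_windows * n].reshape(n_windows, n)

def time_domain_features(w):
    """A few common time-domain EMG features (illustrative subset of the thirteen)."""
    mav = np.mean(np.abs(w))                 # mean absolute value
    rms = np.sqrt(np.mean(w ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(w)))          # waveform length
    zc = np.sum(w[:-1] * w[1:] < 0)          # zero crossings
    var = np.var(w)                          # variance
    return np.array([mav, rms, wl, zc, var])

fs = 2000                                        # assumed sampling rate in Hz
emg = np.random.default_rng(0).normal(size=fs)   # 1 s of synthetic stand-in data
windows = window_signal(emg, fs, 250)            # 250 ms windows, as in the thesis
X = np.vstack([time_domain_features(w) for w in windows])
print(X.shape)                                   # one feature row per window
```

Each window then yields one feature vector, and the rows of `X` are what the feature-selection and classification stages operate on.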
To optimize the classification process, three feature selection methods were applied. Classification was divided into three different levels, in which two machine learning algorithms were implemented individually. At the first level, the goal was to distinguish between the moments when the volunteer was moving and when the volunteer was at rest. If the classifier output was the movement class, it then had to determine, at a second level, whether the volunteer was moving only one finger or performing a movement involving the flexion of more than one finger (a grasp). In the case of a grasp, the process moved to the third level, where the classifier had to identify whether the volunteer was performing a spherical or a tripod grasp. For all classification levels, results were obtained both with cross-validation and with the train-test method, in which 70% of the data was used as the training set and 30% as the test set.
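The three-level cascade can be sketched with scikit-learn. The feature vectors below are synthetic placeholders; the choice of LDA at the first level and KNN with k=2 at the lower levels follows the classifiers named in this work, but the class layout and separability are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for windowed EMG feature vectors.
# Labels: 0 = rest, 1 = single finger, 2 = spherical grasp, 3 = tripod grasp.
y = np.repeat([0, 1, 2, 3], 100)
X = rng.normal(scale=0.5, size=(400, 5)) + y[:, None]

# 70/30 train-test split, as in the thesis.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Level 1: rest vs movement (LDA performed best at this level).
lvl1 = LinearDiscriminantAnalysis().fit(X_tr, y_tr > 0)
# Level 2: single finger vs grasp, trained on movement windows only (KNN, k=2).
mov = y_tr > 0
lvl2 = KNeighborsClassifier(n_neighbors=2).fit(X_tr[mov], y_tr[mov] > 1)
# Level 3: spherical vs tripod grasp, trained on grasp windows only.
grasp = y_tr > 1
lvl3 = KNeighborsClassifier(n_neighbors=2).fit(X_tr[grasp], y_tr[grasp] - 2)

def predict(x):
    """Run one feature vector through the cascade."""
    x = x.reshape(1, -1)
    if not lvl1.predict(x)[0]:
        return 0                            # rest
    if not lvl2.predict(x)[0]:
        return 1                            # single-finger movement
    return 2 + int(lvl3.predict(x)[0])      # 2 = spherical, 3 = tripod grasp

y_pred = np.array([predict(x) for x in X_te])
print(round((y_pred == y_te).mean(), 2))
```

A cascade of this kind lets each level train only on the samples it will actually see at inference time, which mirrors the level-by-level evaluation reported in the results.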
After analyzing the results, one of the e-NABLE community models was chosen. The model was printed on The Original Prusa i3 MK3S printer, and the chosen material was polylactic acid (PLA). To test the implementation of a microcontroller in a model that originally depends on elbow flexion performed by the user, it was necessary to design a support structure to hold not only the motors used to flex the fingers but also the Arduino. The designed support was printed with the same material and the same printer.
The results showed that the 250-millisecond window performed best and that, as a rule, the best classifier was K-Nearest Neighbors (KNN) with k=2, except at the first level, where the best classifier was Linear Discriminant Analysis (LDA). The best results were obtained at the first classification level, where accuracy exceeded 90%. Although the results for the second and third levels were close to 80%, it was concluded that it is not possible to develop a microcontroller that depends on only one acquisition channel. This was to be expected, since the movements studied originate in the same muscle group and inter-subject variability is a significant factor. The cross-validation results were less accurate than those obtained with the train-test methodology, leading to the conclusion that the variability between volunteers significantly affects the classification process. In addition, volunteers used both their dominant and non-dominant arms, which increased the discrepancy between the collected signals. Indeed, the results showed that it is not possible to develop a microcontroller adaptable to all users; therefore, in the future, the best path will be to personalize the prototype. With this evidence in mind, the prototype developed in this work will serve only as a training prototype for the user. Even so, it is much lighter and much cheaper than those available on the market, and it makes it possible to test and control some of the components that will form part of the complete prosthesis in the future, preventing accidents.
Notwithstanding the fulfillment of this work's objectives and the many answers it provided, there is always room for improvement. Due to time constraints, it was not possible to test the microcontroller in real time or to perform mechanical tests of the flexibility and strength of the prosthesis materials. It would therefore be interesting in the future to run real-time performance tests and to subject the prosthesis to extreme conditions so that the elastic tension and the tension of the pins are tested. In addition, it is essential to test the safety mechanisms of the prosthesis when the user has to apply a great deal of force. Testing these parameters will prevent failures that could injure the user or damage the objects the prosthesis may interact with. Finally, the cosmetic appearance of prostheses needs to be improved. To that end, polymers with a coloring close to the user's skin tone could be used. Another way to improve this aspect would be to scan the user's healthy arm and use flexible materials for the joints and fingers which, together with a palm made of resistant thermoplastics and a microcontroller, would allow quite natural movement, close to biological motion.
In short, despite some limitations, this work contributed to clarifying many of the doubts that still existed in the scientific community and will help advance the prosthetics industry.
The effect of attentional focus instructions on single leg balance performance
Balance and postural control exercises are often part of exercise programs. During these programs, movement practitioners can provide instructions to facilitate performance and learning. Instructions can be used to direct attentional focus, which has been found to affect the performance and learning of motor skills, including balance and postural control tasks. However, no known studies to date have investigated the effect of both internal and external attentional focus instructions on static single leg balance performance. The purpose of this study was to investigate the effect of attentional focus instructions on static single leg balance performance as reflected by the complexity of the center of pressure (COP) profile. Data from forty-six participants between the ages of 19 and 28 years were analyzed. Participants were divided into three groups: internal focus (INT) (n=15), external focus (EXT) (n=16) and control (CON) (n=15). Participants performed a thirty-five-second static single leg balance task. Prior to the balance task, instructions were provided to participants which differed in the direction of attentional focus (internal or external); the control group did not receive specific attentional focus instructions. Outcome measures were the scaling exponent determined from a detrended fluctuation analysis (DFA), used to infer the complexity of the COP profile in the anterior-posterior (AP) and medial-lateral (ML) directions, and the root mean square error (RMSE) of the COP profile in the AP and ML directions. A one-way analysis of variance (ANOVA) determined there were no statistically significant differences in the measured variables among groups. The results did not support the claim that manipulating the direction of attentional focus affects static single leg balance performance.
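The DFA scaling exponent used as an outcome measure above can be computed with a short routine. This is a minimal NumPy sketch assuming evenly sampled data and first-order (linear) detrending; the box sizes and the white-noise test signal, for which the exponent is expected to fall near 0.5, are illustrative rather than the study's data:

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis: return the scaling exponent (alpha)."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        n_boxes = len(y) // n
        segs = y[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        # Fit and remove a linear trend within each box, then take the
        # root-mean-square of the residuals as the fluctuation F(n).
        coeffs = np.polyfit(t, segs.T, 1)
        trend = np.outer(coeffs[0], t) + coeffs[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    # The slope of log F(n) against log n is the DFA scaling exponent.
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(1)
white = rng.normal(size=4096)                  # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
print(round(dfa_alpha(white, scales), 2))
```

Applied to a COP time series, an exponent near 0.5 indicates uncorrelated fluctuations, while values away from 0.5 indicate persistent or anti-persistent structure, which is why the exponent is read as a complexity measure.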
On the Recognition of Emotion from Physiological Data
This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create "weakly induced emotions". Recordings of the participants' physiological state were taken, as well as a self-report of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing and extracting features from six different physiological signals: Electrocardiogram (ECG), Blood Volume Pulse (BVP), Galvanic Skin Response (GSR), Electromyography (EMG) of the corrugator muscle, skin temperature of the finger, and respiratory rate. Improvements to the state of PPR emotion detection were made by allowing 9 different weakly induced emotional states to be detected at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and has a chapter dedicated to collating and trialing facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which led to several improvements in experimental procedure.