
    Low-power neuromorphic sensor fusion for elderly care

    Smart wearable systems have become part of daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, fall-recognition bracelets based on embedded systems are attracting considerable market attention. In low-power embedded scenarios, however, sensor signal processing poses serious challenges for machine learning: traditional methods require a large number of calculations for data classification, making real-time signal processing difficult to implement on low-power embedded hardware. Classifying data from multiple fused sensor signals in real time and at low power on an embedded system is a major challenge, and it calls for neuromorphic computing with a hardware/software co-design approach. This thesis reviews neuromorphic computing algorithms, investigates the feasibility of hardware circuits, and integrates captured sensor data to realise data classification applications. It also explores a human-activity benchmark dataset whose activity classification tasks are designed at several defined difficulty levels. First, a data classification algorithm is applied to human movement sensors to validate neuromorphic computing on human activity recognition tasks. Second, a data fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor fusion and improved classification accuracy. Third, an analog circuit module that carries out a neural network algorithm for low-power, real-time hardware processing is proposed, and a hardware/software co-design system combines the above work. By adopting multiple sensing signals on the embedded system, the designed software-based feature extraction method fuses data from various sensors into an input for the neuromorphic computing hardware. Finally, the results show that the classification accuracy of the neuromorphic data fusion framework, at 98.9%, is higher than that of traditional machine learning and deep neural networks. Moreover, the framework can flexibly combine hardware acquisition signals: it is not limited to single-sensor data and can use multi-sensing information to give the algorithm better stability.
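    As a rough illustration of the software-based feature extraction described above, the sketch below windows two sensor streams, computes simple per-window statistics, and concatenates them into one fused feature vector per window. The window length, sampling rate and feature set are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def window_features(signal, fs=50, window_s=2.0):
    """Cut one sensor stream into windows and compute simple per-window statistics."""
    n = int(fs * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.array([[w.mean(), w.std(), w.min(), w.max()] for w in windows])

def fuse_sensors(accel, gyro, fs=50):
    """Concatenate per-sensor feature vectors into one fused vector per window."""
    fa, fg = window_features(accel, fs), window_features(gyro, fs)
    m = min(len(fa), len(fg))            # align window counts across sensors
    return np.hstack([fa[:m], fg[:m]])   # rows feed the neuromorphic classifier
```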

    Hardware-Based Hopfield Neuromorphic Computing for Fall Detection

    With the popularity of smart wearable systems, sensor signal processing poses growing challenges for machine learning in embedded scenarios. Traditional machine-learning methods for data classification, especially in real time, are computationally intensive, and deploying artificial intelligence algorithms on embedded hardware for fast data classification and accurate fall detection is a major challenge for power-efficient embedded systems. Therefore, exploiting the associative-memory property of the Hopfield Neural Network, a hardware module has been designed that implements the neural network algorithm and uses sensor data integration and data classification to recognize falls. The network is trained with the Hebbian learning method: weights for human activity features are obtained through data preprocessing and then mapped to amplification-factor settings in the hardware design. The design was checked against validation scenarios, and the experiment was completed with a Hopfield neural network in the analog module. In simulation, the classification accuracy on the fall data reached 88.9%, which compares well with results achieved by software-based machine-learning algorithms and verifies the feasibility of our hardware design. The system performs the complex calculations on the hardware's feedback signal, replacing the software-based method, and a straightforward circuit design meets the weight settings of the Hopfield network while maximizing the reusability and flexibility of the circuit.
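    For reference, the Hebbian rule used to obtain the network weights can be sketched as follows. This is the generic discrete-Hopfield formulation, not the authors' hardware mapping; the pattern shapes are illustrative.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hopfield weight matrix via the Hebbian rule, with self-connections removed."""
    n_patterns, n_neurons = patterns.shape
    W = np.zeros((n_neurons, n_neurons))
    for p in patterns:              # p: one bipolar (+1/-1) activity feature pattern
        W += np.outer(p, p)         # Hebbian outer-product accumulation
    np.fill_diagonal(W, 0)          # Hopfield networks have no self-feedback
    return W / n_patterns
```

    In the paper's analog design, such weights are realized as amplification-factor settings rather than stored numerically.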

    IMU sensing–based Hopfield neuromorphic computing for human activity recognition

    The self-associative property of the Hopfield neural network can reduce the need for extensive sensor training samples in human behavior recognition. So that the training algorithm can obtain a general activity-feature template from a single pass of data preprocessing, this work proposes a data preprocessing framework suited to neuromorphic computing. Based on a construction-matrix preprocessing method and feature extraction, we simplify and improve the classification output of the Hopfield neuromorphic algorithm. Different samples are assigned to neurons by constructing a feature matrix, which changes the weights of the different categories used to classify sensor data. The preprocessing also realizes sensor data fusion, which helps improve classification accuracy and avoids the local optima caused by single-sensor data. Experimental results show that the framework has high classification accuracy with the necessary robustness: using the proposed method, the Hopfield neuromorphic algorithm classifies three classes of human activities with 96.3% accuracy. Compared with traditional machine learning algorithms, the proposed framework needs to learn the samples only once to obtain the feature matrix for human activities, complementing limited sample databases while improving classification accuracy.
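    One possible reading of the construction-matrix preprocessing, sketched under assumptions: windowed sensor features of each activity class are collapsed in a single pass into a bipolar (+1/-1) template that a Hopfield network can store. The thresholding rule, feature dimension and class names are hypothetical.

```python
import numpy as np

def to_bipolar_template(feature_rows):
    """Collapse feature vectors of one activity class into a single +1/-1 template."""
    centroid = feature_rows.mean(axis=0)
    return np.where(centroid >= np.median(centroid), 1, -1)  # assumed threshold rule

# Hypothetical per-class feature samples: one pass is enough to build each template
rng = np.random.default_rng(0)
class_samples = {name: rng.normal(loc=i, scale=1.0, size=(20, 64))
                 for i, name in enumerate(["walking", "sitting", "falling"])}
templates = {name: to_bipolar_template(rows) for name, rows in class_samples.items()}
```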

    Multi-sensor data fusion in mobile devices for the identification of Activities of Daily Living

    Following recent advances in technology and the growing use of mobile devices such as smartphones, several solutions may be developed to improve the quality of life of users in the context of Ambient Assisted Living (AAL). Mobile devices include different sensors, e.g., accelerometer, gyroscope, magnetometer, microphone and Global Positioning System (GPS) receiver, which allow the acquisition of physical and physiological parameters for the recognition of different Activities of Daily Living (ADL) and the environments in which they are performed. The definition of ADL covers a well-known set of tasks, including basic self-care tasks based on the types of skills that people usually learn in early childhood: feeding, bathing, dressing, grooming, walking, running, jumping, climbing stairs, sleeping, watching TV, working, listening to music, cooking, eating and others. In the context of AAL, some individuals (henceforth called users) need particular assistance, either because they have some sort of impairment, because they are old, or simply because they need or want to monitor their lifestyle. The research and development of systems that provide such assistance is increasing in many areas of application; in particular, the recognition of ADL will in future be an important element in the development of a personal digital life coach, providing assistance to different types of users. To support the recognition of ADL, the surrounding environments should also be recognized, to increase the reliability of these systems. The main focus of this Thesis is research on methods for the fusion and classification of the data acquired by the sensors available in off-the-shelf mobile devices in order to recognize ADL in near real time, taking into account the large diversity of capabilities and characteristics of the mobile devices on the market. To achieve this objective, this Thesis started with a review of existing methods and technologies to define the architecture and modules of the method for the identification of ADL. From this review, and based on the knowledge acquired about the sensors available in off-the-shelf mobile devices, a set of tasks that may be reliably identified was defined as the basis for the remaining research and development carried out in this Thesis. The review also identified the main stages of the new method for the identification of ADL using the sensors available in off-the-shelf mobile devices: data acquisition, data processing, data cleaning, data imputation, feature extraction, data fusion and artificial intelligence. One challenge relates to the different types of data acquired from the different sensors, but other challenges were found, including environmental noise, the positioning of the mobile device during daily activities, and the limited capabilities of the mobile devices. Based on the acquired data, processing was performed by implementing data cleaning and feature extraction methods, in order to define a new framework for the recognition of ADL. Data imputation methods were not applied because, at this stage of the research, they do not influence the results of the identification of ADL and environments: the features are extracted from data acquired during a defined time interval, and there are no missing values during this stage.
The joint selection of the set of usable sensors and the identifiable set of tasks then allows the development of a framework that, using multi-sensor data fusion technologies and context awareness in coordination with other information from the user's context, such as the agenda and the time of day, can establish a profile of the tasks that the user performs on a regular day. The classification method and the algorithm that fuses features for the recognition of ADL and environments need to be deployed on a machine with some computational power, while the mobile device that uses the resulting framework can perform the identification of ADL with much less computational power. Based on the results reported in the literature, the method chosen for the recognition of ADL comprises three variants of Artificial Neural Networks (ANN): simple Multilayer Perceptron (MLP) networks, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural Networks (DNN). Data acquisition can be performed with standard methods. After acquisition, the data must be processed at the data processing stage, which includes data cleaning and feature extraction. The data cleaning method used for motion and magnetic sensors is a low-pass filter, to reduce the acquired noise; for the acoustic data, the Fast Fourier Transform (FFT) was applied to extract the different frequencies. Once the data is clean, several features are extracted depending on the sensors used: the mean, standard deviation, variance, maximum, minimum and median of the raw data acquired from the motion and magnetic sensors; the mean, standard deviation, variance and median of the maximum peaks calculated from that raw data; the five greatest distances between those maximum peaks; the mean, standard deviation, variance, median and 26 Mel-Frequency Cepstral Coefficients (MFCC) of the frequencies obtained with the FFT of the microphone data; and the distance travelled, calculated from the GPS receiver data. After extraction, the features are grouped into different datasets for the application of the ANN methods, to discover the method and dataset that report the best results. The classification stage was developed incrementally, starting with the identification of the most common ADL (i.e., walking, running, going upstairs, going downstairs and standing) using motion and magnetic sensors. Next, the environments were identified with acoustic data: bedroom, bar, classroom, gym, kitchen, living room, hall, street and library. Once the environments were recognized, and based on the sets of sensors commonly available in mobile devices, the data acquired from the motion and magnetic sensors was combined with the recognized environment to differentiate some activities without motion, i.e., sleeping and watching TV. The number of recognized activities was then increased with the distance travelled, extracted from the GPS receiver data, which also allows the driving activity to be recognized.
After implementing the three classification methods with different numbers of iterations, datasets and configurations on a machine with high processing capabilities, the results showed that the best method for the recognition of the most common ADL and of activities without motion is the DNN, while the best method for the recognition of environments is the FNN with Backpropagation. Depending on the number of sensors used, this implementation reports a mean accuracy between 85.89% and 89.51% for the recognition of the most common ADL, 86.50% for the recognition of environments, and 100% for the recognition of activities without motion, giving an overall accuracy between 85.89% and 92.00%. The last stage of this research was the implementation of the structured framework on mobile devices. It verified that the FNN method requires high processing power for the recognition of environments, and the results reported with the mobile application are lower than those obtained on the high-capability machine; the DNN method was therefore also implemented for the recognition of environments on the mobile devices. Finally, the results reported with the mobile devices show an accuracy between 86.39% and 89.15% for the recognition of the most common ADL, 45.68% for the recognition of environments, and 100% for the recognition of activities without motion, giving an overall accuracy between 58.02% and 89.15%. Compared with the literature, the implemented framework shows only a residual improvement; however, it identifies more ADL than are described in other studies. The improvement in the recognition of ADL, based on the mean of the accuracies, is 2.93%, and while the maximum number of ADL and environments previously recognized was 13, the framework resulting from this research recognizes 16. In conclusion, the framework developed offers a mean improvement of 2.93% in recognition accuracy over a larger number of ADL and environments than previously reported. In the future, the achievements of this PhD research may be considered a starting point for the development of a personal digital life coach, but the number of ADL and environments recognized by the framework should be increased, the experiments should be performed with different types of devices (i.e., smartphones and smartwatches), and data imputation and other machine learning methods should be explored to increase the reliability of the framework for the recognition of ADL and its environments.
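    To make the feature-extraction stage concrete, here is a minimal sketch of the statistical and spectral features named above, using numpy only; the MFCC step is omitted and the window parameters are illustrative.

```python
import numpy as np

def motion_features(x):
    """Statistical features of one window of motion/magnetic sensor data."""
    return [x.mean(), x.std(), x.var(), x.max(), x.min(), np.median(x)]

def acoustic_features(x, fs=16000):
    """Strongest frequency components of one audio window via the FFT."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return list(freqs[np.argsort(spectrum)[-5:]])  # five dominant frequencies
```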

    IMUs: validation, gait analysis and system’s implementation

    Master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Falls are a prevalent problem in today's society, and their number has increased greatly in the last fifteen years. Some falls result in injuries, and the cost associated with their treatment is high. Tackling this complex problem requires several steps; in particular, it is crucial to develop strategies that recognize the mode of locomotion, indicating the state of the subject in various situations, namely normal gait, the step before a fall (pre-fall) and the fall itself. This thesis therefore aims to develop a strategy capable of identifying these situations based on a wearable system that collects information on, and analyses, human gait. The strategy consists essentially in the construction and use of Associative Skill Memories (ASMs) as tools for recognizing the locomotion modes. At an early stage, the capabilities of ASMs for the different modes of locomotion were studied, and a classifier based on a set of ASMs was developed. Subsequently, a neural network classifier based on deep learning, a technique now widely used in data classification, was used to classify the same modes of locomotion in a similar way. These classifiers were implemented and compared, providing a tool with good accuracy in recognizing the modes of locomotion. Implementing this strategy first required extremely important support work: an inertial measurement units (IMUs) system was chosen for its strong potential to monitor outpatient activities in the home environment. This system, which combines inertial and magnetic sensors and can monitor gait parameters in real time, was validated and calibrated, and was then used to collect data from healthy subjects who mimicked falls. Results showed that the accuracy of the classifiers was quite acceptable, with the neural-network-based classifier presenting the best results at 92.71% accuracy. As future work, it is proposed to apply these strategies in real time in order to prevent the occurrence of falls.
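    ASMs act here as template memories for gait windows. As a loose stand-in, not the thesis's ASM formulation, a nearest-template classifier over simple window statistics could look like this; the feature set and template values are hypothetical.

```python
import numpy as np

def classify_window(window, templates):
    """Assign one gait window to the closest stored template (hypothetical features)."""
    feats = np.array([window.mean(), window.std(), window.min(), window.max()])
    return min(templates, key=lambda k: np.linalg.norm(feats - templates[k]))

# Hypothetical templates for the three target situations
templates = {"normal_gait": np.array([0.0, 1.0, -2.0, 2.0]),
             "pre_fall":    np.array([0.2, 2.5, -5.0, 5.0]),
             "fall":        np.array([1.0, 6.0, -12.0, 12.0])}
print(classify_window(np.random.default_rng(1).normal(0, 1, 100), templates))
```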

    Lifelog access modelling using MemoryMesh

    Very recently, we have observed a convergence of technologies that has led to the emergence of lifelogging as a technology for personal data applications. Lifelogging will become ubiquitous in the near future, not just for memory enhancement and health management but in various other domains as well. While many devices are available for gathering massive amounts of lifelog data, modelling large volumes of multi-modal lifelog data remains a challenge. In this thesis, we explore and address the problem of how to model lifelogs in order to make them more accessible to users, from the perspectives of collection, organization and visualization. To subdivide our research targets, we designed and followed these steps: 1. Lifelog activity recognition: we use multiple sensor data, ranging from accelerometer data collected by mobile phones to images captured by wearable cameras, to analyse various daily-life activities, and we propose a semantic, density-based algorithm to cope with concept selection for lifelogging sensory data. 2. Visual discovery of lifelog images: most of the lifelog information we capture every day is in the form of images, so images contain significant information about our lives; we conduct experiments on visual content analysis of lifelog images, covering both image content and image metadata. 3. Linkage analysis of lifelogs: by exploring linkage analysis of lifelog data, we can connect all lifelog images using linkage models into a concept called the MemoryMesh. The thesis includes experimental evaluations using real-life data collected from multiple users and shows the performance of our algorithms in detecting the semantics of daily-life concepts and their effectiveness in activity recognition and lifelog retrieval.

    Physical Human Activity Recognition Using Machine Learning Algorithms

    With the rise of ubiquitous computing, the desire to make everyday life smarter and easier with technology is increasing. Human activity recognition (HAR) is an outcome of this motive: it enables a wide range of pervasive computing applications by recognizing the activity performed by a user. To support the many applications HAR can offer, predicting the right activity is of utmost importance; issues as simple as incorrect data manipulation or a poorly chosen prediction algorithm can hinder the performance of a HAR system. This study performs HAR using two dimensionality reduction techniques followed by five supervised machine learning algorithms, with the aim of achieving better predictive accuracy than the existing benchmark research. Correlation analysis (CA) and principal component analysis (PCA) are used for feature reduction, yielding 173 and 100 features respectively. Decision Tree, K-Nearest Neighbor, Naive Bayes, Multinomial Logistic Regression and Artificial Neural Network algorithms were used for the classification task. Repeated random sub-sampling cross-validation was used for evaluation, followed by a Wilcoxon signed-rank test to assess the significance of the results. The ANN performed best, achieving 97% accuracy with CA as the feature reduction technique. KNN and logistic regression also provided satisfactory results, exceeding the benchmark, whereas the Decision Tree and Naive Bayes algorithms did not prove effective.
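    A compact sketch of this kind of pipeline: PCA reduction followed by a neural-network classifier under repeated random sub-sampling cross-validation, written with scikit-learn. The synthetic data, hyperparameters and split sizes are illustrative, not those of the study; only the 100-component PCA mirrors the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Stand-in data: 5 activity classes, 200 raw features (synthetic, for illustration)
X, y = make_classification(n_samples=1000, n_features=200,
                           n_informative=50, n_classes=5, random_state=0)

# PCA to 100 components, as in the study, followed by an ANN classifier
pipe = make_pipeline(PCA(n_components=100),
                     MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))

# Repeated random sub-sampling cross-validation (10 random 70/30 splits)
cv = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f}")
```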

    FootApp: An AI-powered system for football match annotation

    In recent years, scientific and industrial research has shown growing interest in acquiring large annotated data sets to train artificial intelligence algorithms for problems in different domains. In this context, even the market for football data has grown substantially. The analysis of football matches relies on the annotation of both individual players' and team actions, as well as the athletic performance of players; consequently, annotating football events at a fine-grained level is a very expensive and error-prone task. Most existing semi-automatic tools for football match annotation rely on cameras and computer vision, but they fall short in capturing team dynamics and in extracting data on players who are not visible in the camera frame. To address these issues, this manuscript presents FootApp, an AI-based system for football match annotation. First, our system relies on an advanced mixed user interface that exploits both vocal and touch interaction. Second, the motor performance of players is captured and processed by applying machine learning algorithms to data collected from inertial sensors worn by the players. Artificial intelligence techniques are then used to check the consistency of the generated labels, including those regarding the players' physical activity, and to automatically recognize annotation errors. Notably, we implemented a full prototype of the proposed system and performed experiments showing its effectiveness in a real-world adoption scenario.

    An intelligent implementation of multi-sensing data fusion with neuromorphic computing for human activity recognition

    Multi-sensor data fusion has drawn increasing attention for precise human activity recognition, owing to its reliability and robustness compared with standalone sensing. This paper presents a framework that fuses data from multiple sensing systems and applies neuromorphic computing to sense and classify human activities. Data is collected with Inertial Measurement Unit (IMU) sensors, software-defined radios and radars, and feature extraction and selection are performed on it. For each action, such as sitting and standing, an activity matrix is generated and fed into a discrete Hopfield neural network as a binary feature pattern for one-shot learning. The training of the neurons is completed after two steps under the Hebbian learning law, and the conformity of the network's feedback output to the standard activity feature pattern is determined. According to probabilistic statistics on the inference predictions, the proposed neuromorphic framework fusing the three data sources achieved a highest lower-quartile output of 95.34% in the box-plot analysis, while the confusion-matrix classification accuracy for the two activities was 98.98%. The results show that neuromorphic computing is highly capable of multi-sensor data-fusion-based human activity recognition. Furthermore, the proposed method could be enhanced by incorporating additional hardware signal processing in the system to enable the flexible integration of human activity data.
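    To illustrate the recall step, here is a minimal discrete Hopfield update loop that settles a bipolar probe pattern and labels it by its overlap with stored activity templates. This is the textbook formulation under assumed shapes, not the paper's implementation.

```python
import numpy as np

def hopfield_recall(W, probe, steps=200, seed=0):
    """Asynchronously update a bipolar (+1/-1) probe until it settles in an attractor."""
    rng = np.random.default_rng(seed)
    s = probe.copy()
    for _ in range(steps):
        i = rng.integers(len(s))              # pick one neuron at random
        s[i] = 1 if W[i] @ s >= 0 else -1     # sign of its weighted input
    return s

def classify_activity(W, probe, templates):
    """Label the settled state by its overlap with each stored activity template."""
    s = hopfield_recall(W, probe)
    return max(templates, key=lambda name: int(templates[name] @ s))
```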