17 research outputs found

    Lightweight Anomaly Detection Scheme Using Incremental Principal Component Analysis and Support Vector Machine

    Wireless Sensor Networks (WSNs) have been the focus of significant research and development attention due to their applications in collecting data from fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are usually unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviours or anomalies in the network is therefore important for its reliable and secure functioning. Due to resource constraints, however, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims at designing and developing a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled and multivariate data. Various OCSVM formulations were investigated, and the Centred-Ellipsoid kernel, the most effective among the studied formulations, was adopted. To decrease computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme. Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared to the few existing schemes evaluated. The proposed anomaly detection scheme achieved accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
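    A minimal sketch of the scheme's two stages, assuming scikit-learn stand-ins: IncrementalPCA in place of CCIPCA, and an RBF OneClassSVM in place of the hyper-ellipsoid Centred-Ellipsoid formulation. Both substitutions, along with the synthetic data and all parameter values, are illustrative, not the paper's implementation:

```python
# Stage 1: streaming dimension reduction (IncrementalPCA as a stand-in for CCIPCA).
# Stage 2: one-class anomaly detection (RBF OneClassSVM as a stand-in for the
# hyper-ellipsoid OCSVM). Data and parameters are made up for illustration.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 12))                    # normal sensor readings, d = 12
X_test = np.vstack([rng.normal(size=(50, 12)),
                    rng.normal(loc=6.0, size=(5, 12))])  # 5 injected anomalies

ipca = IncrementalPCA(n_components=3, batch_size=100)    # incremental, O(nd)-style fit
Z_train = ipca.fit_transform(X_train)

ocsvm = OneClassSVM(kernel="rbf", nu=0.02).fit(Z_train)
pred = ocsvm.predict(ipca.transform(X_test))             # +1 normal, -1 anomaly
print("flagged anomalies:", np.where(pred == -1)[0])
```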

    Spectral and spatial methods for the classification of urban remote sensing data

    In this work, we addressed the problem of supervised classification of satellite images of urban areas. The data processed are optical images at very high spatial resolution: panchromatic data at very high spatial resolution (IKONOS, QUICKBIRD, PLEIADES simulations) and hyperspectral images (DAIS, ROSIS). Two strategies were proposed. The first strategy consists of a spatial and spectral feature extraction phase followed by a classification phase. The features are extracted by morphological filtering: geodesic openings and closings and self-complementary area filtering. Classification is performed with non-linear Support Vector Machines (SVM). We propose the definition of a spatio-spectral kernel that jointly uses the spatial and spectral information extracted in the first phase. The second strategy consists of a pre- or post-classification data fusion phase. In post-classification fusion, several classifiers are applied, possibly to several data products from the same scene (panchromatic image, multispectral image). For each pixel, the membership in each class is estimated using the classifiers. An adaptive fusion scheme is proposed that exploits both the local reliability of each classifier and the global a priori information available on the performance of each algorithm for the different classes. The individual results are fused using fuzzy operators. The methods were validated on real images, with significant improvements over methods published in the literature.
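    A hedged sketch of the spatio-spectral kernel idea: a weighted sum of an RBF kernel on spectral features and an RBF kernel on morphological (spatial) features, passed to an SVM as a precomputed Gram matrix. The weight mu, the gamma value, the feature dimensions and the random data are all illustrative assumptions, not the thesis's actual kernel:

```python
# Composite kernel K = mu * K_spectral + (1 - mu) * K_spatial, used through
# scikit-learn's precomputed-kernel SVM interface. All values are illustrative.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(spec_a, spat_a, spec_b, spat_b, mu=0.5, gamma=0.1):
    return mu * rbf_kernel(spec_a, spec_b, gamma=gamma) + \
           (1 - mu) * rbf_kernel(spat_a, spat_b, gamma=gamma)

rng = np.random.default_rng(1)
spec_tr, spat_tr = rng.normal(size=(200, 100)), rng.normal(size=(200, 8))
y_tr = rng.integers(0, 3, 200)                       # 3 illustrative land-cover classes

K_tr = composite_kernel(spec_tr, spat_tr, spec_tr, spat_tr)
clf = SVC(kernel="precomputed").fit(K_tr, y_tr)

spec_te, spat_te = rng.normal(size=(20, 100)), rng.normal(size=(20, 8))
K_te = composite_kernel(spec_te, spat_te, spec_tr, spat_tr)  # test-vs-train Gram matrix
print(clf.predict(K_te))
```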

    Image Based Biomarkers from Magnetic Resonance Modalities: Blending Multiple Modalities, Dimensions and Scales.

    The successful analysis and processing of medical imaging data is a multidisciplinary task that requires the application and combination of knowledge from diverse fields, such as medical engineering, medicine, computer science, and pattern classification. Imaging biomarkers are biological features detectable by imaging modalities, and their use offers the prospect of more efficient clinical studies and improvements in both diagnosis and therapy assessment. The use of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) and its application to diagnosis and therapy have been extensively validated; nevertheless, the issue of appropriate or optimal processing of the data to extract relevant biomarkers that highlight the differences between heterogeneous tissues remains open. Together with DCE-MRI, the data extracted from Diffusion MRI (DWI-MR and DTI-MR) represent a promising and complementary tool.
    This project initially proposes the exploration of diverse techniques and methodologies for tissue characterization, through the analysis and classification of voxel-level time-intensity curves from DCE-MRI data, mainly via dissimilarity-based representations and models. We explore metrics and representations to correlate the multidimensional data acquired through diverse imaging modalities, a work which starts with an appropriate elastic registration methodology between DCE-MRI and DWI-MR of the breast and its corresponding validation. It has been shown that combining multi-modal MRI images improves the discrimination of diseased tissue. However, the fusion of dissimilar imaging data for classification and segmentation purposes is not a trivial task: there are inherent differences in information domains, dimensionality, and scales. This work therefore also proposes a multi-view consensus clustering methodology for integrating multi-modal MR images into a unified segmentation of tumoural lesions for heterogeneity assessment. Using a variety of metrics and distance functions, this multi-view imaging approach computes multiple vectorial dissimilarity spaces for each of the MRI modalities and draws on the concepts behind cluster ensembles to combine a set of base unsupervised segmentations into a unified partition of the voxel-based data. The methodology is especially designed for combining DCE-MRI and DTI-MR, for which a manifold learning step is implemented to account for the geometric constraints of the high-dimensional diffusion information.
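    A minimal sketch of the multi-view consensus idea under stated assumptions: each modality ("view") is clustered separately, the base partitions are accumulated into a co-association matrix (the cluster-ensemble step), and the consensus partition is obtained by spectral clustering of that matrix. Random features stand in for the per-modality dissimilarity spaces, and the manifold learning step for DTI-MR is omitted:

```python
# Consensus clustering across "views": one base KMeans partition per modality,
# co-association accumulation, then a spectral cut of the consensus affinity.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(2)
views = [rng.normal(size=(300, 5)),    # e.g. DCE-derived features per voxel (illustrative)
         rng.normal(size=(300, 9))]    # e.g. DTI-derived features per voxel (illustrative)

n = views[0].shape[0]
coassoc = np.zeros((n, n))
for X in views:                        # one base unsupervised segmentation per view
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(views)                  # fraction of views that co-cluster each voxel pair

consensus = SpectralClustering(n_clusters=4, affinity="precomputed",
                               random_state=0).fit_predict(coassoc)
print(np.bincount(consensus))          # voxel counts per consensus cluster
```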

    CLASSIFIERS BASED ON A NEW APPROACH TO ESTIMATE THE FISHER SUBSPACE AND THEIR APPLICATIONS

    In this thesis we propose a novel classifier, and its extensions, based on a new approach to estimating the Fisher subspace. The proposed classifiers were developed to deal with high-dimensional and highly unbalanced datasets of low cardinality. The efficacy of the proposed techniques is demonstrated by the results achieved on real and synthetic datasets and by comparison with state-of-the-art predictors.
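    The thesis's Fisher-subspace estimator is novel and not reproduced here; as a point of reference only, a sketch of standard Fisher discriminant analysis with shrinkage (a common remedy when dimensionality exceeds cardinality) on a small, unbalanced synthetic set:

```python
# Reference sketch only: shrinkage LDA (standard Fisher discriminant), not the
# thesis's estimator. Sizes mimic the target setting: high dimension (200),
# low cardinality (48), unbalanced classes (40 vs 8).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, size=(40, 200)),   # majority class
               rng.normal(1.5, 1.0, size=(8, 200))])   # minority class
y = np.array([0] * 40 + [1] * 8)

# "lsqr" + automatic shrinkage regularises the covariance estimate when n < d.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print("training accuracy:", lda.score(X, y))
```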

    Featured Anomaly Detection Methods and Applications

    Anomaly detection is a fundamental research topic that has been widely investigated. From critical industrial systems, e.g., network intrusion detection systems, to people's daily activities, e.g., mobile fraud detection, anomaly detection has become the first vital resort for protecting and securing public and personal property. Although anomaly detection methods have been under consistent development over the years, the explosive growth of data volume and the continued dramatic variation of data patterns pose great challenges to anomaly detection systems and fuel the demand for more intelligent anomaly detection methods with distinct characteristics to cope with various needs. To this end, this thesis starts by presenting a thorough review of existing anomaly detection strategies and methods, elaborating their advantages and disadvantages. Afterwards, four distinctive anomaly detection methods, especially for time series, are proposed, each aimed at resolving specific needs of anomaly detection under different scenarios, e.g., enhanced accuracy, interpretable results, and self-evolving models. Experiments are presented and analysed to offer a better understanding of the performance of the methods and their distinct features. More specifically, the key contributions of this thesis are as follows (see the sketch after this list):
    1) Support Vector Data Description (SVDD) is investigated as a primary method for accurate anomaly detection. The applicability of SVDD to noisy time-series datasets is carefully examined, and it is demonstrated that relaxing the decision boundary of SVDD consistently results in better accuracy in network time-series anomaly detection. A theoretical analysis of the parameter used in the model is also presented to justify the relaxation of the decision boundary.
    2) To support a clear explanation of detected time-series anomalies, i.e., anomaly interpretation, the periodic pattern of the time-series data is taken as contextual information and integrated into SVDD. The resulting formulation maintains multiple discriminants, which help distinguish the root causes of the anomalies.
    3) To further analyse a dataset for anomaly detection and interpretation, Convex Hull Data Description (CHDD) is developed to realise one-class classification together with data clustering. CHDD approximates the convex hull of a given dataset with its extreme points, which constitute a dictionary of data representatives. Based on this dictionary, CHDD can represent and cluster all the normal data instances, so that anomaly detection comes with a degree of interpretation.
    4) Beyond better anomaly detection accuracy and interpretability, better solutions for anomaly detection over streaming data with evolving patterns are also researched. Within the framework of Reinforcement Learning (RL), a time-series anomaly detector is designed that is continually trained to cope with evolving patterns. Because the detector is trained with labelled time series, it avoids the cumbersome work of threshold setting and the uncertain definitions of anomalies in time-series anomaly detection tasks.
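    A hedged illustration of point 1): one-class detection over sliding windows of a noisy time series, using scikit-learn's OneClassSVM (equivalent to SVDD for the RBF kernel), with nu as a stand-in for the thesis's boundary-relaxation parameter; the data and values are made up:

```python
# SVDD-style detection on sliding windows of a noisy series. For the RBF
# kernel, OneClassSVM and SVDD coincide; nu bounds the fraction of training
# windows left outside the boundary, i.e. it loosens or tightens the boundary.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)
series[1500:1505] += 3.0                            # injected anomaly burst

win = 20                                            # window length (assumption)
X = np.lib.stride_tricks.sliding_window_view(series, win)

for nu in (0.01, 0.05, 0.2):                        # sweep the relaxation knob
    pred = OneClassSVM(kernel="rbf", nu=nu).fit(X).predict(X)
    print(f"nu={nu}: {np.sum(pred == -1)} windows flagged")
```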

    Effective Fault Diagnosis in Chemical Plants By Integrating Multiple Methodologies

    Ph.D. (Doctor of Philosophy)

    Face pose estimation with automatic 3D model creation for a driver inattention monitoring application

    Recent studies have identified inattention (including distraction and drowsiness) as the main cause of accidents, being responsible for at least 25% of them. Driving distraction has been studied less, as it is more diverse and exhibits a higher risk factor than fatigue; in addition, it is present in over half of the crashes involving inattention. The increased presence of In-Vehicle Information Systems (IVIS) adds to the potential distraction risk and modifies driving behaviour, so research on this issue is of vital importance. Many researchers have been working on different approaches to deal with distraction during driving. Among them, Computer Vision is one of the most common, because it allows for cost-effective and non-invasive driver monitoring and sensing. Using Computer Vision techniques it is possible to evaluate facial movements that characterise the state of attention of a driver. This thesis presents methods to estimate the face pose and gaze direction of a person in real time, using a stereo camera, as a basis for assessing driver distraction. The methods are completely automatic and user-independent. A set of features on the face is identified at initialisation and used to create a sparse 3D model of the face. These features are tracked from frame to frame, and the model is augmented to cover parts of the face that may have been occluded before. The algorithm is designed to work in a naturalistic driving simulator, which presents challenging low-light conditions. We evaluate several techniques for detecting features on the face that can be matched between cameras and tracked successfully. Well-known methods such as SURF do not return good results, owing to the lack of salient points on the face and the low illumination of the images. We introduce a novel multisize technique, based on the Harris corner detector and patch correlation. This technique benefits from the better performance of small patches under rotations and illumination changes and from the more robust correlation of bigger patches under motion blur. The head rotates in a range of ±90° in the yaw angle, and the appearance of the features changes noticeably. To deal with these changes, we implement a new re-registering technique that captures new textures of the features as the face rotates. These new textures are incorporated into the model, which mixes the views of both cameras. The captures are taken at regular angle intervals for rotations in yaw, so that each texture is only used in a range of ±7.5° around the capture angle. Rotations in pitch and roll are handled using affine patch warping. The 3D model created at initialisation can only contain features on the frontal part of the face, and some of these may be occluded during rotations. The accuracy and robustness of the face tracking depend on the number of visible points, so new points are added to the 3D model when new parts of the face become visible from both cameras. Bundle adjustment is used to reduce the accumulated drift of the 3D reconstruction. We estimate the pose from the positions of the features in the images and the 3D model using POSIT or Levenberg-Marquardt (LM). A RANSAC process detects incorrectly tracked points, which are excluded from pose estimation. POSIT is faster, while LM obtains more accurate results. Using the model extension and the re-registering technique, we can accurately estimate the pose over the full head rotation range, with error levels that improve on the state of the art.
    A coarse eye direction is combined with the face pose estimate to obtain the gaze and the driver's fixation area, a parameter which gives much information about the driver's distraction pattern. The resulting gaze estimation algorithm proposed in this thesis has been tested in a set of driving experiments directed by a team of psychologists in a naturalistic driving simulator. This simulator mimics conditions present in real driving, including weather changes, manoeuvring and distractions due to IVIS. Professional drivers participated in the tests. The driver fixation statistics obtained with the proposed system show how the use of IVIS influences the distraction pattern of the drivers, increasing reaction times and affecting the fixation of attention on the road and the surroundings.
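    A minimal sketch of the pose-from-features step described above, assuming OpenCV: 2D tracks of a sparse 3D model are fed to cv2.solvePnPRansac, which rejects badly tracked points and recovers head rotation and translation. The thesis uses POSIT or Levenberg-Marquardt inside a RANSAC loop; the intrinsics, model points and injected noise here are illustrative:

```python
# Robust pose estimation from a sparse 3D face model and noisy 2D tracks.
import numpy as np
import cv2

rng = np.random.default_rng(5)
object_pts = rng.uniform(-5, 5, size=(30, 3)).astype(np.float32)           # sparse 3D model (cm)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics

rvec_true = np.array([0.1, 0.4, 0.0], dtype=np.float32)   # simulated yaw-dominant rotation
tvec_true = np.array([0.0, 0.0, 60.0], dtype=np.float32)
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)
image_pts = image_pts.reshape(-1, 2).astype(np.float32)
image_pts[:3] += 40.0                                     # corrupt 3 tracks (outliers)

# RANSAC discards the incorrectly tracked points before refining the pose.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None,
                                             reprojectionError=3.0)
print("estimated rotation (Rodrigues):", rvec.ravel(), "| inliers:", len(inliers))
```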


    Multi-sensor data fusion in mobile devices for the identification of Activities of Daily Living

    Following recent advances in technology and the growing use of mobile devices such as smartphones, several solutions may be developed to improve the quality of life of users in the context of Ambient Assisted Living (AAL). Mobile devices have different sensors available, e.g., accelerometer, gyroscope, magnetometer, microphone and Global Positioning System (GPS) receiver, which allow the acquisition of physical and physiological parameters for the recognition of different Activities of Daily Living (ADL) and the environments in which they are performed. The definition of ADL includes a well-known set of basic self-care tasks, based on the types of skills that people usually learn in early childhood, including feeding, bathing, dressing, grooming, walking, running, jumping, climbing stairs, sleeping, watching TV, working, listening to music, cooking, eating and others. In the context of AAL, some individuals (henceforth called users) need particular assistance, either because they have some sort of impairment, because they are old, or simply because they need or want to monitor their lifestyle. The research and development of systems that provide such assistance is increasing in many areas of application. In particular, in the future, the recognition of ADL will be an important element in the development of a personal digital life coach, providing assistance to different types of users. To support the recognition of ADL, the surrounding environments should also be recognized, to increase the reliability of these systems. The main focus of this Thesis is the research of methods for the fusion and classification of the data acquired by the sensors available in off-the-shelf mobile devices in order to recognize ADL in almost real time, taking into account the large diversity of capabilities and characteristics of the mobile devices available on the market. To achieve this objective, this Thesis started with a review of the existing methods and technologies to define the architecture and modules of the method for the identification of ADL. With this review, and based on the knowledge acquired about the sensors available in off-the-shelf mobile devices, a set of tasks that may be reliably identified was defined as a basis for the remaining research and development carried out in this Thesis. The review also identified the main stages for the development of a new method for the identification of ADL using the sensors available in off-the-shelf mobile devices: data acquisition, data processing, data cleaning, data imputation, feature extraction, data fusion and artificial intelligence. One of the challenges is related to the different types of data acquired from the different sensors, but other challenges were found as well, including the presence of environmental noise, the positioning of the mobile device during the daily activities, and the limited capabilities of the mobile devices. Based on the acquired data, processing was performed, implementing data cleaning and feature extraction methods, in order to define a new framework for the recognition of ADL. The data imputation methods were not applied because, at this stage of the research, their implementation would not influence the results of the identification of ADL and environments, as the features are extracted from a set of data acquired during a defined time interval and there are no missing values during this stage.
    The joint selection of the set of usable sensors and the identifiable set of tasks then allows the development of a framework that, considering multi-sensor data fusion technologies and context awareness, in coordination with other information available from the user context, such as the user's agenda and the time of day, makes it possible to establish a profile of the tasks the user performs on a regular day. The classification method and the algorithm for the fusion of the features for the recognition of ADL and environments need to be deployed on a machine with some computational power, while the mobile device that uses the created framework can perform the identification of ADL with much less computational power. Based on the results reported in the literature, the method chosen for the recognition of ADL is composed of three variants of Artificial Neural Networks (ANN): simple Multilayer Perceptron (MLP) networks, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural Networks (DNN). Data acquisition can be performed with standard methods. After acquisition, the data must be handled at the data processing stage, which includes data cleaning and feature extraction methods. The data cleaning method used for motion and magnetic sensors is a low-pass filter, in order to reduce the noise acquired; for the acoustic data, the Fast Fourier Transform (FFT) was applied to extract the different frequencies. Once the data is clean, several features are extracted based on the types of sensors used: the mean, standard deviation, variance, maximum value, minimum value and median of the raw data acquired from the motion and magnetic sensors; the mean, standard deviation, variance and median of the maximum peaks calculated from the raw data acquired from the motion and magnetic sensors; the five greatest distances between the maximum peaks calculated from the raw data acquired from the motion and magnetic sensors; the mean, standard deviation, variance, median and 26 Mel-Frequency Cepstral Coefficients (MFCC) of the frequencies obtained with the FFT on the raw data acquired from the microphone; and the distance travelled, calculated from the data acquired from the GPS receiver. After the extraction of the features, these are grouped into different datasets for the application of the ANN methods, in order to discover the method and dataset that report the best results. The classification stage was developed incrementally, starting with the identification of the most common ADL (i.e., walking, running, going upstairs, going downstairs and standing) with motion and magnetic sensors. Next, the environments were identified with acoustic data, i.e., bedroom, bar, classroom, gym, kitchen, living room, hall, street and library. After the environments are recognized, and based on the different sets of sensors commonly available in mobile devices, the data acquired from the motion and magnetic sensors were combined with the recognized environment in order to differentiate some activities without motion, i.e., sleeping and watching TV. The number of recognized activities at this stage was increased with the use of the distance travelled, extracted from the GPS receiver data, which also allows the driving activity to be recognized.
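    An illustrative sketch of the window-level statistical features listed above for one motion-sensor axis; the window length, sampling rate and data are assumptions:

```python
# Feature extractor for one accelerometer axis: mean, standard deviation,
# variance, max, min and median over one acquisition window, as listed above.
import numpy as np

def motion_features(window: np.ndarray) -> np.ndarray:
    """Statistical features of one sensor window."""
    return np.array([window.mean(), window.std(), window.var(),
                     window.max(), window.min(), np.median(window)])

rng = np.random.default_rng(6)
acc_z = 9.81 + 0.8 * rng.normal(size=500)   # ~5 s of z-axis data at 100 Hz (assumed)
print(motion_features(acc_z))
```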
    After the implementation of the three classification methods with different numbers of iterations, datasets and other configurations on a machine with high processing capabilities, the reported results proved that the best method for the recognition of the most common ADL and of the activities without motion is the DNN method, while the best method for the recognition of environments is the FNN method with Backpropagation. Depending on the number of sensors used, this implementation reports a mean accuracy between 85.89% and 89.51% for the recognition of the most common ADL, equal to 86.50% for the recognition of environments, and equal to 100% for the recognition of activities without motion, for an overall accuracy between 85.89% and 92.00%. The last stage of this research work was the implementation of the structured framework on mobile devices, verifying that the FNN method requires high processing power for the recognition of environments and that the results reported with the mobile application are lower than those reported with the machine with high processing capabilities. Thus, the DNN method was also implemented for the recognition of environments on the mobile devices. Finally, the results reported with the mobile devices show an accuracy between 86.39% and 89.15% for the recognition of the most common ADL, equal to 45.68% for the recognition of environments, and equal to 100% for the recognition of activities without motion, for an overall accuracy between 58.02% and 89.15%. Compared with the literature, the results returned by the implemented framework show only a residual improvement. However, the results reported in this research work cover the identification of more ADL than described in other studies. The improvement in the recognition of ADL based on the mean of the accuracies is 2.93%, but the maximum number of ADL and environments previously recognized was 13, while the number of ADL and environments recognized with the framework resulting from this research is 16. In conclusion, the framework developed achieves a mean improvement of 2.93% in recognition accuracy over a larger number of ADL and environments than previously reported. In the future, the achievements of this PhD research may be considered a starting point for the development of a personal digital life coach, but the number of ADL and environments recognized by the framework should be increased, the experiments should be performed with different types of devices (i.e., smartphones and smartwatches), and data imputation and other machine learning methods should be explored in order to increase the reliability of the framework for the recognition of ADL and environments.

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the features required for the classification or identification of faults. Fault diagnosis of the sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behaviour before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.