
    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach for authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
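    As an illustration of the kind of Transformer-based verification pipeline described above, the sketch below maps a window of background-sensor readings (3-axis accelerometer + 3-axis gyroscope) to an embedding and compares two embeddings by cosine similarity. This is a minimal assumption-laden sketch, not the thesis architecture; the window length, model sizes, and embedding dimension are placeholders.

```python
# Minimal sketch (not the thesis architecture): a Transformer encoder over IMU windows
# for user verification. Window length, model sizes and embedding size are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorTransformer(nn.Module):
    def __init__(self, n_channels=6, d_model=64, n_heads=4, n_layers=2, seq_len=100):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)                         # per-timestep projection
        self.pos = nn.Parameter(torch.randn(1, seq_len, d_model) * 0.02)   # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 64)                                 # embedding head

    def forward(self, x):                          # x: (batch, seq_len, 6)
        h = self.encoder(self.proj(x) + self.pos)
        return F.normalize(self.head(h.mean(dim=1)), dim=-1)               # L2-normalised embedding

model = SensorTransformer()
enrol = model(torch.randn(1, 100, 6))              # enrolment window
probe = model(torch.randn(1, 100, 6))              # verification window
score = (enrol * probe).sum(-1)                    # cosine similarity; threshold to accept/reject
```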

    Enhancing Credit Card Fraud Detection: An Ensemble Machine Learning Approach

    In the era of digital advancements, the escalation of credit card fraud necessitates the development of robust and efficient fraud detection systems. This paper delves into the application of machine learning models, specifically focusing on ensemble methods, to enhance credit card fraud detection. Through an extensive review of existing literature, we identified limitations in current fraud detection technologies, including issues like data imbalance, concept drift, false positives/negatives, limited generalisability, and challenges in real-time processing. To address some of these shortcomings, we propose a novel ensemble model that integrates Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), Bagging, and Boosting classifiers. This ensemble model tackles the dataset imbalance problem associated with most credit card datasets by applying under-sampling and the Synthetic Minority Over-sampling Technique (SMOTE) to some machine learning algorithms. The evaluation of the model utilises a dataset comprising transaction records from European credit card holders, providing a realistic scenario for assessment. The methodology of the proposed model encompasses data pre-processing, feature engineering, model selection, and evaluation, with Google Colab's computational capabilities facilitating efficient model training and testing. Comparative analysis between the proposed ensemble model, traditional machine learning methods, and individual classifiers reveals the superior performance of the ensemble in mitigating challenges associated with credit card fraud detection. Across accuracy, precision, recall, and F1-score metrics, the ensemble outperforms existing models. This paper underscores the efficacy of ensemble methods as a valuable tool in the battle against fraudulent transactions. The findings presented lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as credit card fraud techniques continue to evolve.
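    A hedged sketch of the kind of ensemble the paper describes follows: SMOTE rebalances only the training split, and a soft-voting ensemble combines SVM, KNN, Random Forest, Bagging, and Boosting classifiers. The synthetic data and hyperparameters are illustrative, not the paper's exact configuration.

```python
# Illustrative SMOTE + voting-ensemble sketch; hyperparameters and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (RandomForestClassifier, BaggingClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for a highly imbalanced credit card dataset (~0.5% "fraud").
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.995], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Rebalance the training data only, never the held-out test data.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("bag", BaggingClassifier(n_estimators=50)),
                ("boost", GradientBoostingClassifier())],
    voting="soft")
ensemble.fit(X_res, y_res)
print(classification_report(y_te, ensemble.predict(X_te), digits=4))
```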

    Federated Learning for Predictive Healthcare Analytics: From theory to real world applications

    In the contemporary landscape, machine learning has a pervasive impact across virtually all industries. However, the success of these systems hinges on the accessibility of training data. In today's world, every device generates data, which can serve as the building blocks for future technologies. Conventional machine learning methods rely on centralized data for training, but the availability of sufficient and valid data is often hindered by privacy concerns. Data privacy is the main concern while developing a healthcare system. One technique that allows decentralized learning is Federated Learning. Researchers have been actively applying this approach in various domains and have reported positive results. This paper underscores the significance of employing Federated Learning in the healthcare sector, emphasizing the wealth of data present in hospitals and electronic health records that could be used to train medical systems.
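    The following is a minimal federated-averaging (FedAvg) sketch of the decentralized training idea referenced above: each "hospital" trains a local copy of the model on its own records and only the model weights are averaged on the server, so raw patient data never leaves the site. The toy model, data shapes, and client count are assumptions, not tied to any system in the paper.

```python
# FedAvg sketch: local training per client, then a plain weight average on the server.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)            # client starts from the global weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data).squeeze(-1), targets)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):                               # server: average the clients' weights
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
clients = [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float()) for _ in range(3)]

for _ in range(5):                                 # a few federated rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```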

    Sentimental analysis of audio based customer reviews without textual conversion

    The current trends or procedures followed in customer relationship management (CRM) systems are based on reviews, mails, and other textual data gathered as feedback from customers. Sentiment analysis algorithms are deployed in order to obtain polarity results, which can be used to improve customer services. But with evolving technologies, reviews and feedback are increasingly dominated by audio data. As per the literature, the audio contents are translated to text and sentiments are analyzed using natural language processing techniques. However, these approaches can be time consuming. The proposed work focuses on analyzing the sentiments on the audio data itself without any textual conversion. The basic sentiment analysis polarities are mostly termed positive, negative, and neutral, but the focus here is to use basic emotions as the basis for deciding polarity. The proposed model uses a deep neural network and features such as Mel frequency cepstral coefficients (MFCC), Chroma, and Mel Spectrogram on audio-based reviews.
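    A minimal sketch of the feature pipeline described above follows, assuming librosa for MFCC, Chroma, and Mel Spectrogram extraction and a small Keras classifier over five basic emotions. The emotion set, layer sizes, and file paths are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: audio feature extraction (MFCC + Chroma + Mel) and a small emotion classifier.
import numpy as np
import librosa
import tensorflow as tf

def extract_features(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    mfcc   = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)
    mel    = np.mean(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128), axis=1)
    return np.concatenate([mfcc, chroma, mel])      # 40 + 12 + 128 = 180 features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(180,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. angry/happy/sad/fear/neutral (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical usage (review_audio_paths and emotion_labels are not from the paper):
# features = np.stack([extract_features(p) for p in review_audio_paths])
# model.fit(features, emotion_labels, epochs=30, validation_split=0.2)
```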

    Application of Computer Vision and Mobile Systems in Education: A Systematic Review

    The computer vision industry has experienced a significant surge in growth, resulting in numerous promising breakthroughs in computer intelligence. The present review paper outlines the advantages and potential future implications of utilizing this technology in education. A total of 84 research publications have been thoroughly scrutinized and analyzed. The study revealed that computer vision technology integrated with a mobile application is exceptionally useful in monitoring students’ perceptions and mitigating academic dishonesty. Additionally, it facilitates the digitization of handwritten scripts for plagiarism detection and automates attendance tracking to optimize valuable classroom time. Furthermore, several potential applications of computer vision technology for educational institutions have been proposed to enhance students’ learning processes in various faculties, such as engineering, medical science, and others. Moreover, the technology can also aid in creating a safer campus environment by automatically detecting abnormal activities such as ragging, bullying, and harassment.

    An explainable deep-learning architecture for pediatric sleep apnea identification from overnight airflow and oximetry signals

    Deep-learning algorithms have been proposed to analyze overnight airflow (AF) and oximetry (SpO2) signals to simplify the diagnosis of pediatric obstructive sleep apnea (OSA), but current algorithms are hardly interpretable. Explainable artificial intelligence (XAI) algorithms can clarify the model-derived predictions on these signals, enhancing their diagnostic trustworthiness. Here, we assess an explainable architecture that combines convolutional and recurrent neural networks (CNN + RNN) to detect pediatric OSA and its severity. AF and SpO2 were obtained from the Childhood Adenotonsillectomy Trial (CHAT) public database (n = 1,638) and a proprietary database (n = 974). These signals were arranged in 30-min segments and processed by the CNN + RNN architecture to derive the number of apneic events per segment. The apnea-hypopnea index (AHI) was computed from the CNN + RNN-derived estimates and grouped into four OSA severity levels. The Gradient-weighted Class Activation Mapping (Grad-CAM) XAI algorithm was used to identify and interpret novel OSA-related patterns of interest. The AHI regression reached very high agreement (intraclass correlation coefficient > 0.9), while OSA severity classification achieved 4-class accuracies of 74.51% and 62.31% and 4-class Cohen’s kappa values of 0.6231 and 0.4495 in the CHAT and proprietary datasets, respectively. All diagnostic accuracies on increasing AHI cutoffs (1, 5, and 10 events/h) surpassed 84%. The Grad-CAM heatmaps revealed that the model focuses on sudden AF cessations and SpO2 drops to detect apneas and hypopneas with desaturations, and often discards patterns of hypopneas linked to arousals. Therefore, an interpretable CNN + RNN model to analyze AF and SpO2 can be helpful as a diagnostic alternative in symptomatic children at risk of OSA.
    Funding: Ministerio de Ciencia e Innovación / AEI / 10.13039/501100011033 / FEDER (grants PID2020-115468RB-I00 and PDC2021-120775-I00); CIBER (Consorcio Centro de Investigación Biomédica en Red, CB19/01/00012), Instituto de Salud Carlos III; National Institutes of Health (HL083075, HL083129, UL1-RR-024134, UL1 RR024989); National Heart, Lung, and Blood Institute (R24 HL114473, 75N92019R002); Ministerio de Ciencia e Innovación - Agencia Estatal de Investigación, “Ramón y Cajal” grant (RYC2019-028566-I).
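    The sketch below illustrates, under stated assumptions, a CNN + RNN regressor in the spirit of the architecture described above: two-channel 30-min segments (airflow and SpO2) pass through 1-D convolutions, then an LSTM, and the output is the estimated number of apneic events per segment. The sampling rate, filter counts, and layer sizes are assumptions, not the paper's values.

```python
# Assumed CNN + RNN sketch for per-segment apneic event counting from AF and SpO2.
import tensorflow as tf

SEG_SECONDS, FS = 30 * 60, 8             # 30-min segment, assumed 8 Hz resampling
inputs = tf.keras.Input(shape=(SEG_SECONDS * FS, 2))    # channels: airflow, SpO2

x = inputs
for filters in (16, 32, 64):              # convolutional feature extractor
    x = tf.keras.layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling1D(pool_size=4)(x)

x = tf.keras.layers.LSTM(64)(x)                          # recurrent summary of the segment
events = tf.keras.layers.Dense(1, activation="relu")(x)  # non-negative event count

model = tf.keras.Model(inputs, events)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# The per-recording AHI would then be total predicted events divided by hours of sleep,
# and a Grad-CAM-style heatmap over the last Conv1D layer can highlight which parts of
# the airflow/SpO2 segment drove each prediction.
```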

    Artificial intelligence for predictive biomarker discovery in immuno-oncology: a systematic review

    Background: The widespread use of immune checkpoint inhibitors (ICIs) has revolutionised treatment of multiple cancer types. However, selecting patients who may benefit from ICI remains challenging. Artificial intelligence (AI) approaches allow exploitation of high-dimension oncological data in research and development of precision immuno-oncology. Materials and methods: We conducted a systematic literature review of peer-reviewed original articles studying ICI efficacy prediction in cancer patients across five data modalities: genomics (including genomics, transcriptomics, and epigenomics), radiomics, digital pathology (pathomics), and real-world and multimodality data. Results: A total of 90 studies were included in this systematic review, with 80% published in 2021-2022. Among them, 37 studies included genomic, 20 radiomic, 8 pathomic, 20 real-world, and 5 multimodal data. Standard machine learning (ML) methods were used in 72% of studies, deep learning (DL) methods in 22%, and both in 6%. The most frequently studied cancer type was non-small-cell lung cancer (36%), followed by melanoma (16%), while 25% included pan-cancer studies. No prospective study design incorporated AI-based methodologies from the outset; rather, all implemented AI as a post hoc analysis. Novel biomarkers for ICI in radiomics and pathomics were identified using AI approaches, and molecular biomarkers have expanded past genomics into transcriptomics and epigenomics. Finally, complex algorithms and new types of AI-based markers, such as meta-biomarkers, are emerging by integrating multimodal/multi-omics data. Conclusion: AI-based methods have expanded the horizon for biomarker discovery, demonstrating the power of integrating multimodal data from existing datasets to discover new meta-biomarkers. While most of the included studies showed promise for AI-based prediction of benefit from immunotherapy, none provided high-level evidence for immediate practice change. A priori planned prospective trial designs are needed to cover all lifecycle steps of these software biomarkers, from development and validation to integration into clinical practice.

    A study of feature extraction for Arabic calligraphy characters recognition

    Optical character recognition (OCR) is one of the most widely used pattern recognition systems. However, research on ancient Arabic writing recognition has suffered from a lack of interest for decades, despite the availability of thousands of historical documents. One of the reasons for this lack of interest is the absence of a standard dataset, which is fundamental for building and evaluating an OCR system. In 2022, we published a database of ancient Arabic words as the only public dataset of characters written in Al-Mojawhar Moroccan calligraphy. Therefore, such a database needs to be studied and evaluated. In this paper, we explored the proposed database and investigated the recognition of Al-Mojawhar Arabic characters. We studied feature extraction using the most popular descriptors used in Arabic OCR. The studied descriptors were associated with different machine learning classifiers to build recognition models and verify their performance. In order to compare learned and handcrafted features on the proposed dataset, we proposed a deep convolutional neural network for character recognition. Given the complexity of the character shapes, the results obtained were very promising, especially with the convolutional neural network model, which gave the highest accuracy score.
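    For illustration, here is a minimal CNN sketch of the kind compared against handcrafted descriptors above, assuming grayscale character images resized to 64x64; the number of character classes (40 here) and the layer configuration are placeholders, not the paper's model.

```python
# Assumed small CNN for isolated character classification on grayscale 64x64 images.
import tensorflow as tf

NUM_CLASSES = 40                               # hypothetical number of character classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```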

    Robustness, Heterogeneity and Structure Capturing for Graph Representation Learning and its Application

    Graph neural networks (GNNs) are potent methods for graph representation learning (GRL), which extract knowledge from complicated (graph) structured data in various real-world scenarios. However, GRL still faces many challenges. Firstly, GNN-based node classification may deteriorate substantially by overlooking the possibility of noisy data in graph structures, as models wrongly treat the relations among nodes in the input graphs as ground truth. Secondly, nodes and edges have different types in the real world, and it is essential to capture this heterogeneity in graph representation learning. Next, relations among nodes are not restricted to pairwise relations, and it is necessary to capture such complex relations accordingly. Finally, the absence of structural encodings, such as positional information, deteriorates the performance of GNNs. This thesis proposes novel methods to address the aforementioned problems:
    1. Bayesian Graph Attention Network (BGAT): Developed for situations with scarce data, this method addresses the influence of spurious edges. Incorporating Bayesian principles into the graph attention mechanism enhances robustness, leading to competitive performance against benchmarks (Chapter 3).
    2. Neighbour Contrastive Heterogeneous Graph Attention Network (NC-HGAT): By enhancing a cutting-edge self-supervised heterogeneous graph neural network model (HGAT) with neighbour contrastive learning, this method addresses heterogeneity and uncertainty simultaneously. Extra attention to edge relations in heterogeneous graphs also aids in subsequent classification tasks (Chapter 4).
    3. A novel ensemble learning framework is introduced for predicting stock price movements. It adeptly captures both group-level and pairwise relations, leading to notable advancements over the existing state of the art. The integration of hypergraph and graph models, coupled with the utilisation of auxiliary data via GNNs before a recurrent neural network (RNN), provides a deeper understanding of long-term dependencies between similar entities in multivariate time series analysis (Chapter 5).
    4. A novel framework for graph structure learning is introduced, segmenting graphs into distinct patches. By harnessing the capabilities of transformers and integrating other position encoding techniques, this approach robustly captures intricate structural information within a graph. This results in a more comprehensive understanding of its underlying patterns (Chapter 6).
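    As a point of reference for the contributions above, the sketch below shows a plain two-layer graph attention network for node classification with PyTorch Geometric, the kind of GNN baseline that BGAT and NC-HGAT build on. It is an illustrative sketch only, not any of the thesis models; the feature size, class count, and toy graph are assumptions.

```python
# Baseline GAT node classifier sketch (not a thesis model); sizes are placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv
from torch_geometric.data import Data

class GAT(torch.nn.Module):
    def __init__(self, in_dim, hidden=8, heads=8, n_classes=7):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=heads, dropout=0.6)
        self.conv2 = GATConv(hidden * heads, n_classes, heads=1, dropout=0.6)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Toy graph: 4 nodes with 16 features each; undirected edges listed in both directions.
data = Data(x=torch.randn(4, 16),
            edge_index=torch.tensor([[0, 1, 1, 2, 2, 3],
                                     [1, 0, 2, 1, 3, 2]]))
logits = GAT(in_dim=16)(data.x, data.edge_index)   # (4, 7) class scores per node
```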