185 research outputs found

    HF-SCA: Hands-Free Strong Customer Authentication Based on a Memory-Guided Attention Mechanisms

    Strong customer authentication (SCA) is a requirement of the European Union Revised Payment Services Directive (PSD2), which ensures that electronic payments are performed with multifactor authentication. While it increases the security of electronic payments, SCA has had a serious impact on shopping-cart abandonment: an Italian bank computed that 22% of online purchases in the first semester of 2021 were not completed because of problems with SCA. Fortunately, PSD2 allows the use of transaction risk analysis tools to exempt a payment from the SCA process. In this paper, we propose a novel unsupervised combination of existing machine learning techniques that determines whether a purchase is typical for a specific customer, so that SCA can be exempted for typical purchases. We modified a well-known architecture (U-Net) by replacing convolutional blocks with squeeze-and-excitation blocks; we then added a memory network in the latent space and introduced an attention mechanism on the decoding side of the network. The proposed solution detects non-typical purchases by creating temporal correlations between transactions. The network achieved a 97.7% AUC score on a well-known publicly available dataset. Using this approach, we found that 98% of purchases could be executed while securely exempting SCA, shortening the customer's journey and improving the user experience. As an additional validation, we developed an Alexa skill for Amazon smart glasses that allows a user to shop and pay online using only vocal interaction, leaving the hands free for other activities, such as driving a car.
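The squeeze-and-excitation blocks mentioned in this abstract recalibrate channel responses with a small gating network: global-average-pool each channel, pass the result through a two-layer bottleneck, and rescale the channels by the resulting sigmoid gates. A minimal NumPy sketch of that recalibration step (the weights `w1`, `w2` and the reduction ratio `r` are illustrative assumptions, not the paper's trained parameters):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Channel recalibration as in squeeze-and-excitation blocks.
    x: feature map (channels, height, width); w1, w2: weights of the
    two small dense layers, shapes (c//r, c) and (c, c//r)."""
    z = x.mean(axis=(1, 2))               # squeeze: global average pool -> (c,)
    s = np.maximum(w1 @ z, 0.0)           # excitation: ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # sigmoid gates in (0, 1)
    return x * s[:, None, None]           # rescale each channel

# Toy usage with random weights (hypothetical shapes for illustration)
rng = np.random.default_rng(0)
c, r = 8, 2
x = rng.normal(size=(c, 4, 4))
w1 = rng.normal(size=(c // r, c))
w2 = rng.normal(size=(c, c // r))
y = squeeze_excite(x, w1, w2)
```

Because the gates lie strictly between 0 and 1, each channel is attenuated rather than amplified, which is what lets the block emphasise informative channels relative to the rest.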

    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains very challenging. One of the main reasons is the amount of variability encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines. The variability of human anatomy makes it virtually impossible to build large labelled and annotated datasets for every disease for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects. However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance on lung CT scans. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey covers both conventional pattern recognition methods and deep learning-based methods, and is one of the first literature surveys in this specific research area.
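The normative-learning idea described above, training only on normal samples and flagging inputs the model reconstructs poorly, can be sketched with a linear stand-in for the paper's generative model. Here a PCA codec plays the role of the trained model, and the 95th-percentile training-error threshold is an illustrative assumption:

```python
import numpy as np

# Normative learning sketch: fit a linear "generative" model (PCA) on
# normal samples only, then flag inputs with large reconstruction error.
rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 16))           # stand-in for healthy training data
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:4]                                # keep 4 principal components

def recon_error(x):
    z = (x - mean) @ basis.T                                  # encode
    return np.linalg.norm((x - mean) - z @ basis, axis=-1)    # decode + residual

# Threshold on the error distribution of normal data (95th percentile assumed)
threshold = np.percentile(recon_error(normal), 95)

anomaly = normal[0] + 10.0                    # sample far off the normal manifold
is_anomalous = recon_error(anomaly[None])[0] > threshold
```

The key property is that no anomalous examples are needed at training time: anything the normative model cannot reconstruct well is reported, which is exactly why this setup suits domains where abnormal cases are rare and hard to label.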

    Deep learning-based adaptive compression and anomaly detection for smart B5G use cases operation

    The evolution towards next-generation Beyond-5G (B5G) networks will require not only innovation in transport technologies but also smarter, more efficient operation of the use cases expected to be the heaviest consumers of network resources in the coming decades. Among B5G use cases, the Digital Twin (DT) has been identified as a key high-bandwidth use case. Creating and operating a DT requires the continuous collection of an enormous, widely distributed volume of sensor telemetry data, which can overwhelm the transport layer; reducing the transported telemetry data is therefore an essential objective of smart use case operation. Moreover, deep telemetry data analysis, i.e., anomaly detection, can be executed hierarchically to reduce the processing needed to perform such analysis centrally. In this paper, we propose a smart management system consisting of a hierarchical architecture for telemetry sensor data analysis using deep autoencoders (AEs). The system contains AE-based methods for the adaptive compression of telemetry time series using pools of AEs (called AAC), as well as for anomaly detection in single (SS-AD) and multiple (MS-AGD) sensor streams. Numerical results on experimental telemetry data show compression ratios of up to 64% with reconstruction errors below 1%, clearly improving upon benchmark state-of-the-art methods. In addition, fast and accurate anomaly detection is demonstrated for both single- and multiple-sensor scenarios. Finally, a reduction of 50% or more in transport network capacity resources is obtained through smart use case operation in distributed DT scenarios. This research was funded by the European Commission Horizon Europe SNS JU DESIRE6G project (G.A. 101096466), by the AEI through the IBON project (PID2020-114135RB-I00), and by the ICREA institution.
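The adaptive-compression idea behind AAC, keeping a pool of codecs with different compression ratios and picking, per telemetry window, the smallest code that meets an error target, can be sketched as follows. Linear (PCA) codecs stand in for the paper's trained autoencoders, and the pool sizes, window construction, and 1% error target are illustrative assumptions:

```python
import numpy as np

# Pool-based adaptive compression sketch: per window, choose the smallest
# latent code whose relative reconstruction error stays under the target.
rng = np.random.default_rng(2)
t = np.linspace(0, 8 * np.pi, 64)
phase = rng.uniform(0, 2 * np.pi, size=(200, 1))
window = np.sin(t + phase) + 0.005 * rng.normal(size=(200, 64))  # telemetry windows
mean = window.mean(axis=0)
_, _, vt = np.linalg.svd(window - mean, full_matrices=False)

def encode_decode(x, k):
    z = (x - mean) @ vt[:k].T        # k-dimensional latent code
    return mean + z @ vt[:k]         # reconstruction

def pick_codec(x, pool=(2, 4, 8, 16), target=0.01):
    for k in pool:                   # try the most aggressive codec first
        err = np.linalg.norm(x - encode_decode(x, k)) / np.linalg.norm(x)
        if err < target:
            return k, err
    return pool[-1], err             # fall back to the largest code

k, err = pick_codec(window[0])
ratio = 1 - k / window.shape[1]      # fraction of the window not transmitted
```

Since the telemetry here is a noisy sinusoid, a two-dimensional code already keeps the relative error under 1%, so the adaptive selector transmits only the smallest code; less regular windows would automatically fall through to a larger codec from the pool.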
