    Deep Learning for Vein Biometric Recognition on a Smartphone

    The ongoing COVID-19 pandemic has underscored, even more, the need for hygienic, contactless biometric recognition systems. Vein-based devices are strong non-contact options, although they have not yet been fully integrated into daily life. In this work, as a contribution to the research and development of these devices, a contactless wrist vein recognition system with a real-life application is presented. A Transfer Learning (TL) method for Vascular Biometric Recognition (VBR), based on different Deep Convolutional Neural Network architectures, has been designed and tested, for the first time in a research setting, on a smartphone. TL is a Deep Learning (DL) technique with two main variants: networks as feature extractors, i.e., using a Convolutional Neural Network (CNN) pre-trained on a different large-scale dataset to obtain features that are then classified with a traditional Machine Learning algorithm, and fine-tuning, i.e., training a CNN whose weights have been initialized from a CNN pre-trained on a different large-scale dataset. In this study, the feature-extractor method has been employed. Several network architectures have been tested on different wrist vein datasets: UC3M-CV1, UC3M-CV2, and PUT. The DL model has been integrated on the Xiaomi Pocophone F1 and the Xiaomi Mi 8 smartphones, obtaining high biometric performance, up to 98% accuracy and less than 0.4% EER with a 50–50% train-test split on UC3M-CV2, and fast identification/verification times of less than 300 milliseconds. These results indicate that high DL performance and integration are achievable in VBR without direct user-device contact, for real-life applications today.
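    A rough sketch of the feature-extractor variant of transfer learning described above: a CNN pre-trained on a different large-scale dataset produces embeddings that a traditional ML classifier then separates. The MobileNetV2 backbone, input size, and SVM classifier are illustrative assumptions, not the paper's exact configuration.

    ```python
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC  # used in the usage sketch at the bottom

    # Pre-trained CNN (ImageNet weights) with the classification head removed;
    # average pooling turns each wrist-vein image into a fixed feature vector.
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(224, 224, 3))
    backbone.trainable = False  # feature extraction, not fine-tuning

    def extract_features(images: np.ndarray) -> np.ndarray:
        """Map a batch of images (n, 224, 224, 3) to CNN embeddings."""
        x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
        return backbone.predict(x, verbose=0)

    # Hypothetical 50-50 train/test split of a wrist-vein dataset:
    # clf = SVC(kernel="linear").fit(extract_features(X_train), y_train)
    # accuracy = clf.score(extract_features(X_test), y_test)
    ```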

    Cloud Computing Adoption for E-Commerce in Developing Countries: Contributing Factors and Its Implication for Indonesia

    This study examines the literature on cloud computing adoption for e-commerce in developing countries. The goal is to investigate the contributing factors affecting cloud computing adoption for e-commerce in developing countries, and in particular the implications for Indonesia. Ten themes have been identified: business size and type, customer service improvement, security, economic value, infrastructure, business process improvement, cloud computing framework, regulatory framework, user acceptance, and stakeholders' support. Among these ten themes, infrastructure, security, stakeholders' support, regulatory framework, user acceptance, and business size/type are particularly relevant to Indonesia. The paper also presents efforts and projects currently in place at the governmental level that facilitate cloud computing adoption and e-commerce in Indonesia.

    Towards Accurate Run-Time Hardware-Assisted Stealthy Malware Detection: A Lightweight, yet Effective Time Series CNN-Based Approach

    According to recent security analysis reports, malicious software (a.k.a. malware) is rising at an alarming rate in number, complexity, and harmful intent, compromising the security of modern computer systems. Recently, malware detection based on low-level hardware features (e.g., Hardware Performance Counter (HPC) information) has emerged as an effective alternative that addresses the complexity and performance overheads of traditional software-based detection methods. Hardware-assisted Malware Detection (HMD) techniques rely on standard Machine Learning (ML) classifiers to detect signatures of malicious applications by monitoring built-in HPC registers at run-time. Prior HMD methods, though effective, have limited their study to detecting malicious applications that are spawned as separate threads during application execution; detecting stealthy malware patterns at run-time therefore remains a critical challenge. Stealthy malware refers to harmful cyber attacks in which malicious code is hidden within benign applications and remains undetected by traditional malware detection approaches. In this paper, we first present a comprehensive review of recent advances in hardware-assisted malware detection studies that have used standard ML techniques to detect malware signatures. Next, to address the challenge of stealthy malware detection at the processor's hardware level, we propose StealthMiner, a novel specialized time-series machine learning approach that accurately detects traces of stealthy malware at run-time using branch instructions, the most prominent HPC feature. StealthMiner is based on a lightweight time-series Fully Convolutional Neural Network (FCN) model that automatically identifies potentially contaminated samples in HPC-based time-series data and uses them to accurately recognize the trace of stealthy malware. Our analysis demonstrates that state-of-the-art ML-based malware detection methods are not effective at detecting stealthy malware samples, since the captured HPC data represents not only the malware but also the benign applications' microarchitectural behavior. The experimental results demonstrate that, with the aid of our novel intelligent approach, stealthy malware can be detected at run-time with 94% detection performance on average using only one HPC feature, outperforming the detection performance of state-of-the-art HMD and general time-series classification methods by up to 42% and 36%, respectively.
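    A minimal sketch of a lightweight time-series Fully Convolutional Network of the kind described above, classifying a window of a single HPC feature (branch-instruction counts). The window length, layer widths, and kernel sizes are assumptions for illustration, not StealthMiner's published configuration.

    ```python
    import tensorflow as tf

    SEQ_LEN = 200  # hypothetical number of HPC samples per execution window

    def build_fcn() -> tf.keras.Model:
        """1-D FCN: stacked Conv1D blocks plus global pooling, sigmoid output."""
        inputs = tf.keras.Input(shape=(SEQ_LEN, 1))  # one HPC counter per step
        x = inputs
        for filters, kernel in [(32, 8), (64, 5), (32, 3)]:
            x = tf.keras.layers.Conv1D(filters, kernel, padding="same")(x)
            x = tf.keras.layers.BatchNormalization()(x)
            x = tf.keras.layers.ReLU()(x)
        # Global pooling keeps the model small and tolerant of window length.
        x = tf.keras.layers.GlobalAveragePooling1D()(x)
        outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        return model

    # build_fcn().fit(hpc_windows, labels) would train on labeled
    # benign vs. stealthy-malware execution traces.
    ```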

    Power System Stability Analysis using Neural Network

    This work focuses on the design of modern power system controllers for automatic voltage regulators (AVR) and on the application of machine learning (ML) algorithms to correctly classify the stability of the IEEE 14-bus system. The LQG controller shows the best time-domain characteristics of the controllers compared while the sensor and amplifier gains are varied dynamically. After that, the IEEE 14-bus system is modeled, and contingency scenarios are simulated in the Dymola Modelica environment. Applying the Monte Carlo principle with a modified Poisson probability distribution, as reviewed from the literature, reduces the total number of contingencies from 1000k to 20k. The damping ratios of the contingencies are then extracted, pre-processed, and fed to ML algorithms such as logistic regression, support vector machines, decision trees, random forests, Naive Bayes, and k-nearest neighbors. Neural networks (NN) with one, two, three, five, seven, and ten hidden layers, at 25%, 50%, 75%, and 100% data size, are considered to observe and compare prediction time, accuracy, precision, and recall. At the lowest data size, 25%, the two-hidden-layer and single-hidden-layer networks reach accuracies of 95.70% and 97.38%, respectively. Increasing the number of hidden layers beyond two does not increase the overall score and takes a much longer prediction time, so deeper configurations can be discarded for similar analyses. Moreover, with five, seven, and ten hidden layers, the F1 score decreases. However, in practical scenarios, where the data set contains more features and a variety of classes, a larger data size is required to train the NN properly. This research provides more insight into damping-ratio-based system stability prediction with traditional ML algorithms and neural networks. (Comment: Master's thesis dissertation.)
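    The hidden-layer comparison described above could be reproduced in outline as below: train one- and two-hidden-layer networks on a 25% split of damping-ratio features and report accuracy and prediction time. The feature dimension, layer widths, and synthetic stand-in data are assumptions; real inputs would come from the Dymola contingency simulations.

    ```python
    import time
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Stand-in data: 20k contingencies x 10 damping-ratio features,
    # labeled stable (1) / unstable (0).
    X = rng.normal(size=(20_000, 10))
    y = (X.mean(axis=1) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25,
                                              random_state=0)

    for layers in [(32,), (32, 32)]:  # one vs. two hidden layers
        clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500,
                            random_state=0).fit(X_tr, y_tr)
        t0 = time.perf_counter()
        acc = clf.score(X_te, y_te)
        print(f"{len(layers)} hidden layer(s): accuracy={acc:.3f}, "
              f"prediction time={time.perf_counter() - t0:.3f}s")
    ```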

    Internet Predictions

    More than a dozen leading experts give their opinions on where the Internet is headed and where it will be in the next decade in terms of technology, policy, and applications. Their contributions cover topics ranging from the Internet of Things to climate change to the digital storage of the future. A summary of the articles is available in the Web extras section.

    Pattern-Recognition-Based Defense Against Targeted Attacks

    The speed at which everything and everyone is being connected considerably outstrips the rate at which effective security mechanisms are introduced to protect them. This has created an opportunity for resourceful threat actors who have specialized in conducting low-volume persistent attacks through sophisticated techniques tailored to specific valuable targets. Consequently, traditional approaches are rendered ineffective against targeted attacks, creating an acute need for innovative defense mechanisms. This thesis aims at supporting the security practitioner in bridging this gap by introducing a holistic strategy against targeted attacks that addresses key challenges encountered during the phases of detection, analysis, and response. The structure of this thesis is therefore aligned to these three phases, with each of its central chapters taking on a particular problem and proposing a solution built on a strong foundation of pattern recognition and machine learning. In particular, we propose a detection approach that, in the absence of additional authentication mechanisms, allows spear-phishing emails to be identified without relying on their content. Next, we introduce an analysis approach for malware triage based on the structural characterization of malicious code. Finally, we introduce MANTIS, an open-source platform for authoring, sharing, and collecting threat intelligence, whose data model is based on an innovative unified representation of threat intelligence standards built on attributed graphs. We evaluate our approaches in a series of experiments that demonstrate their potential value in real-world scenarios. As a whole, these ideas open new avenues for research on defense mechanisms and represent an attempt to counteract the imbalance between resourceful actors and society at large.
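    As one way to picture the unified attributed-graph data model MANTIS is described as using, the sketch below encodes a tiny threat-intelligence report as a graph with typed nodes and relation-labeled edges. The node kinds, attribute names, and relations are assumptions, not the actual MANTIS schema.

    ```python
    import networkx as nx

    g = nx.DiGraph()
    # Nodes carry typed attributes: an indicator, the malware it flags,
    # and the campaign using that malware.
    g.add_node("indicator:1", kind="Indicator", pattern="md5=<hash>")
    g.add_node("malware:1", kind="Malware", family="example-family")
    g.add_node("campaign:1", kind="Campaign", name="hypothetical-campaign")
    # Edges encode the relations that standards such as STIX express
    # in XML/JSON documents.
    g.add_edge("indicator:1", "malware:1", rel="indicates")
    g.add_edge("campaign:1", "malware:1", rel="uses")

    # Standard-agnostic graph queries then replace per-format parsing,
    # e.g. every malware node flagged by some indicator:
    flagged = [v for u, v, d in g.edges(data=True) if d["rel"] == "indicates"]
    print(flagged)  # ['malware:1']
    ```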

    A Capability Approach for Designing Business Intelligence and Analytics Architectures

    Business Intelligence and Analytics (BIA) is subject to an ongoing transformation, on both the technology and the business side. Given the lack of ready-to-use blueprints for the plethora of novel solutions and the ever-increasing variety of available concepts and tools, there is a need for conceptual support for architecture design decisions. After conducting a series of interviews to explore the relevance and direction of an architectural decision support concept, we propose a capability schema that involves actions, expected outcomes, and environmental limitations to identify fitting architecture designs. The applicability of the approach was evaluated in two cases. The results show that the derived framework can support the systematic development of fundamental architecture requirements. The work contributes to research by illustrating how to capture the elusive capability concept and by showing its relation to BIA architectures. For further generalization, we created an open online repository to collect BIA capabilities and architectural designs.
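    A speculative sketch of such a capability schema: a capability couples an action with its expected outcome and the environmental limitations under which it must hold, and candidate architecture designs are matched against those limitations. The field names and the matching rule are assumptions for illustration, not the paper's formal schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Capability:
        action: str            # what the organization must be able to do
        expected_outcome: str  # the result the action should produce
        limitations: set[str] = field(default_factory=set)  # environment

    @dataclass
    class ArchitectureDesign:
        name: str
        supports: set[str]     # environmental constraints the design satisfies

    def fitting_designs(cap: Capability, designs: list[ArchitectureDesign]):
        """Keep designs whose supported constraints cover the capability's."""
        return [d for d in designs if cap.limitations <= d.supports]

    cap = Capability("serve near-real-time sales dashboards",
                     "faster operational decisions", {"streaming", "cloud"})
    designs = [
        ArchitectureDesign("streaming BIA stack", {"streaming", "batch", "cloud"}),
        ArchitectureDesign("classic on-prem DWH", {"batch", "on-prem"}),
    ]
    print([d.name for d in fitting_designs(cap, designs)])  # streaming stack
    ```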

    BASALISC: Programmable Hardware Accelerator for BGV Fully Homomorphic Encryption

    Fully Homomorphic Encryption (FHE) allows for secure computation on encrypted data. Unfortunately, huge memory size, computational cost and bandwidth requirements limit its practicality. We present BASALISC, an architecture family of hardware accelerators that aims to substantially accelerate FHE computations in the cloud. BASALISC is the first to implement the BGV scheme with fully-packed bootstrapping – the noise removal capability necessary for arbitrary-depth computation. It supports a customized version of bootstrapping that can be instantiated with hardware multipliers optimized for area and power. BASALISC is a three-abstraction-layer RISC architecture, designed for a 1 GHz ASIC implementation and underway toward a 150 mm² die tape-out in a 12 nm GF process. BASALISC's four-layer memory hierarchy includes a two-dimensional conflict-free inner memory layer that enables 32 Tb/s radix-256 NTT computations without pipeline stalls. Its conflict-resolution permutation hardware is generalized and re-used to compute BGV automorphisms without throughput penalty. BASALISC also has a custom multiply-accumulate unit to accelerate BGV key switching. The BASALISC toolchain comprises a custom compiler and a joint performance and correctness simulator. To evaluate BASALISC, we study its physical realizability, emulate and formally verify its core functional units, and study its performance on a set of benchmarks. Simulation results show a speedup of more than 5,000× over HElib – a popular software FHE library.
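    The workhorse behind such an accelerator's polynomial arithmetic is the number-theoretic transform. The sketch below is a software radix-2 NTT with toy parameters, meant only to show the butterfly structure that BASALISC realizes as a radix-256, conflict-free hardware datapath; the modulus and transform size are not BGV production parameters.

    ```python
    def ntt(a: list[int], q: int, root: int) -> list[int]:
        """In-place iterative Cooley-Tukey NTT of a modulo prime q.

        len(a) must be a power of two and root a primitive len(a)-th
        root of unity mod q.
        """
        n = len(a)
        # Bit-reversal permutation of the input.
        j = 0
        for i in range(1, n):
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                a[i], a[j] = a[j], a[i]
        # Butterfly stages; hardware versions fuse many butterflies per cycle.
        length = 2
        while length <= n:
            w_len = pow(root, n // length, q)
            for start in range(0, n, length):
                w = 1
                for k in range(start, start + length // 2):
                    u, v = a[k], a[k + length // 2] * w % q
                    a[k] = (u + v) % q
                    a[k + length // 2] = (u - v) % q
                    w = w * w_len % q
            length <<= 1
        return a

    # Toy parameters: q = 17 is prime, and 9 = 3**2 has order 8 mod 17,
    # i.e. it is a primitive 8th root of unity for a size-8 transform.
    print(ntt([1, 2, 3, 4, 0, 0, 0, 0], 17, 9))
    ```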

    An architecture to predict anomalies in industrial processes

    Dissertation presented as partial requirement for obtaining the Master's degree in Data Science and Advanced Analytics, specialization in Data Science. The Internet of Things (IoT) and machine learning (ML) algorithms are enabling a revolutionary change in digitization in numerous areas, benefiting Industry 4.0 in particular. Predictive maintenance using machine learning models is being used to protect assets in industry. In this work, an architecture for predicting anomalies in industrial processes is proposed, through which SMEs can be guided in implementing an IIoT architecture for predictive maintenance (PdM). This research was conducted to understand which machine learning architectures and models are generally used by industry for PdM. An overview of the concepts of the Industrial Internet of Things (IIoT), machine learning (ML), and predictive maintenance (PdM) is provided, and through a systematic literature review it was possible to understand their applications and which technologies enable their use. The review revealed that PdM applications are increasingly common and that there are many studies on the development of new ML techniques. The survey conducted confirmed the usefulness of the artifact and showed the need for an architecture to guide the implementation of PdM. This research can be a contribution for SMEs, allowing them to become more efficient and to reduce both production and maintenance costs in order to keep up with multinational companies.
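    A minimal sketch of the kind of anomaly-prediction step such an architecture would host: unsupervised scoring of windowed IIoT sensor statistics. The synthetic telemetry and the Isolation Forest model are illustrative assumptions; the dissertation surveys a range of ML techniques for PdM.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    # Stand-in telemetry: rows = time windows, columns = aggregated sensor
    # statistics (e.g. mean vibration, mean temperature, peak current).
    normal_windows = rng.normal(0.0, 1.0, size=(500, 3))
    drifting_windows = rng.normal(4.0, 1.0, size=(5, 3))  # failing machine

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_windows)

    scores = model.decision_function(drifting_windows)  # lower = more anomalous
    flags = model.predict(drifting_windows)             # -1 marks anomalies
    print("scores:", scores.round(2), "flags:", flags)
    ```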