
    Process of Fingerprint Authentication using Cancelable Biohashed Template

    Template protection using cancelable biometrics prevents data loss and the hacking of stored templates by providing considerable privacy and security. Hashing and salting techniques are used to build resilient systems. The salted-password method protects passwords against several types of attack, namely brute-force, dictionary, and rainbow-table attacks. Salting adds random data to the input of a hash function to ensure a unique output; hashing salts are speed bumps on an attacker's road to breaching user data. This research proposes a contemporary two-factor authenticator called biohashing. The biohashing procedure is implemented as a repeated inner product between a key generated by a pseudo-random number generator and the fingerprint features, which form a network of minutiae. Cancelable template authentication at a fingerprint-based sales counter accelerates the payment process. The fingerhash is the code produced after applying biohashing to a fingerprint: a binary string obtained by taking the sign of each inner product against a preset threshold. Experiments are carried out on the benchmark FVC 2002 DB1 dataset. Authentication accuracy is found to be nearly 97%, and a comparison with state-of-the-art approaches shows promising results.
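The biohashing step described above (repeated inner products with a key-seeded pseudo-random basis, then thresholding the sign of each projection) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the feature vector, bit length, and threshold are hypothetical.

```python
import random

def biohash(features, key_seed, n_bits=16, threshold=0.0):
    """Sketch of biohashing: project a real-valued feature vector onto
    pseudo-random vectors derived from a user-specific key, then keep
    only the sign of each inner product against a preset threshold."""
    rng = random.Random(key_seed)  # the two-factor key seeds the PRNG
    bits = []
    for _ in range(n_bits):
        # one pseudo-random projection vector per output bit
        r = [rng.gauss(0.0, 1.0) for _ in features]
        inner = sum(f * v for f, v in zip(features, r))
        bits.append("1" if inner > threshold else "0")
    return "".join(bits)

# hypothetical minutiae-derived feature vector
fingerhash = biohash([0.8, -0.3, 1.2, 0.05, -0.9], key_seed=42)
```

Because the same key and features always reproduce the same bit string, while a different key yields an unrelated one, the template is cancelable: a compromised fingerhash can be revoked simply by issuing a new key.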

    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach to authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach.
The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis to explore the potential of Transformers for behavioural biometrics and introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.

    Radio frequency fingerprint identification for Internet of Things: A survey

    Radio frequency fingerprint (RFF) identification is a promising technique for identifying Internet of Things (IoT) devices. This paper presents a comprehensive survey on RFF identification, covering various aspects ranging from related definitions to the details of each stage in the identification process, namely signal preprocessing, RFF feature extraction, further processing, and RFF identification. Specifically, three main preprocessing steps are summarized: carrier frequency offset estimation, noise elimination, and channel cancellation. In addition, three kinds of RFFs are categorized, comprising I/Q signal-based, parameter-based, and transformation-based features. Meanwhile, feature fusion and feature dimension reduction are elaborated as the two main further-processing methods. Furthermore, a novel framework is established from the perspective of closed-set and open-set problems, and the related state-of-the-art methodologies are investigated, including approaches based on traditional machine learning, deep learning, and generative models. Additionally, we highlight the challenges faced by RFF identification and point out future research trends in this field.
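Of the preprocessing steps listed, carrier frequency offset (CFO) estimation is simple enough to illustrate: for a preamble that repeats every `rep` samples, the phase of the lag-`rep` autocorrelation is proportional to the offset. The sketch below is a generic correlation-based estimator, not a method from the survey; the signal, repetition length, and sample rate are invented for illustration.

```python
import cmath
import math

def estimate_cfo(samples, rep, fs):
    """Correlation-based CFO estimate: with a preamble that repeats
    every `rep` samples, each sample differs from its repetition only
    by the phase 2*pi*cfo*rep/fs accumulated due to the frequency
    offset, so the phase of the lag-`rep` autocorrelation recovers it."""
    acc = sum(samples[i + rep] * samples[i].conjugate()
              for i in range(len(samples) - rep))
    return cmath.phase(acc) * fs / (2 * math.pi * rep)

# synthetic complex tone standing in for a received preamble:
# 100 Hz offset at a 10 kHz sample rate (hypothetical numbers)
rx = [cmath.exp(2j * math.pi * 100 * n / 10000) for n in range(64)]
cfo = estimate_cfo(rx, rep=16, fs=10000)  # close to 100 Hz
```

Removing the estimated offset before feature extraction matters because a CFO is receiver-dependent and would otherwise contaminate the transmitter's fingerprint.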

    An improved GBSO-TAENN-based EEG signal classification model for epileptic seizure detection.

    The detection and classification of epileptic seizures from EEG signals have gained significant attention in recent decades. Among other signals, EEG signals are extensively used by medical experts for diagnostic purposes, so most existing research has developed automated mechanisms for designing EEG-based epileptic seizure detection systems. Machine learning techniques are widely used for their reduced time consumption, high accuracy, and optimal performance, yet they are still limited by high complexity in algorithm design, increased error values, and reduced detection efficacy. Thus, the proposed work intends to develop an automated epileptic seizure detection system with an improved performance rate. Here, the Finite Linear Haar wavelet-based Filtering (FLHF) technique is used to filter the input signals, and the relevant set of features is extracted from the normalized output with the help of Fractal Dimension (FD) analysis. Then, the Grasshopper Bio-Inspired Swarm Optimization (GBSO) technique is employed to select the optimal features by computing the best fitness value, and the Temporal Activation Expansive Neural Network (TAENN) mechanism is used to classify the EEG signals as normal or seizure-affected. Numerous intelligent algorithms spanning preprocessing, optimization, and classification have been used in the literature to identify epileptic seizures from EEG signals. The primary issues facing the majority of optimization approaches are reduced convergence rates and higher computational complexity, while the problems with machine learning approaches include significant method complexity, intricate mathematical calculations, and decreased training speed. Therefore, the goal of the proposed work is to put into practice efficient algorithms for the recognition and categorization of epileptic seizures based on EEG signals.
The combined effect of the proposed FLHF, FD, GBSO, and TAENN models may dramatically improve disease detection accuracy while decreasing system complexity and time consumption compared to prior techniques. Using the proposed methodology, the overall average epileptic seizure detection performance increases to 99.6%, with an F-measure of 99% and a G-mean of 98.9%.
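The abstract does not specify how the fractal dimension is computed; one widely used estimator for EEG is the Higuchi method, sketched below under that assumption (the test signal and `kmax` are illustrative, not from the paper).

```python
import math

def higuchi_fd(signal, kmax=8):
    """Higuchi fractal dimension: average normalised curve lengths at
    scales k = 1..kmax, then fit the slope of log L(k) vs log(1/k)."""
    n = len(signal)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = signal[m::k]  # subsampled curve starting at offset m
            if len(pts) < 2:
                continue
            raw = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            # Higuchi normalisation for the interval count at scale k
            lengths.append(raw * (n - 1) / ((len(pts) - 1) * k * k))
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    mk = sum(logk) / len(logk)
    ml = sum(logl) / len(logl)
    num = sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
    den = sum((a - mk) ** 2 for a in logk)
    return num / den  # least-squares slope = fractal dimension

# sanity check: a straight line has fractal dimension 1
fd_line = higuchi_fd([float(i) for i in range(100)])
```

In a pipeline like the one described, each filtered EEG window would yield one or more FD values that serve as features for the downstream feature selection and classifier.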

    Advanced framework for epilepsy detection through image-based EEG signal analysis

    Background: Recurrent and unpredictable seizures characterize epilepsy, a neurological disorder affecting millions worldwide. Epilepsy diagnosis is crucial for timely treatment and better outcomes. Electroencephalography (EEG) time-series data analysis is essential for epilepsy diagnosis and surveillance. The complex signal-processing methods used in traditional EEG analysis are computationally demanding and difficult to generalize across patients. Researchers are using machine learning to improve epilepsy detection, particularly visual feature extraction from EEG time-series data.
    Objective: This study examines the application of a Gramian Angular Summation Field (GASF) approach for the analysis of EEG signals. Additionally, it explores the utilization of image features, specifically the Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) techniques, for the purpose of epilepsy detection in EEG data.
    Methods: The proposed methodology encompasses the transformation of EEG signals into images based on GASF, followed by the extraction of features utilizing SIFT and ORB techniques and, ultimately, the selection of relevant features. A state-of-the-art machine learning classifier is employed to classify GASF images into two categories: normal EEG patterns and focal EEG patterns. Bern-Barcelona EEG recordings were used to test the proposed method.
    Results: This method classifies EEG signals with 96% accuracy using SIFT features and 94% using ORB features. The Random Forest (RF) classifier surpasses state-of-the-art approaches in precision, recall, F1-score, specificity, and Area Under Curve (AUC). The Receiver Operating Characteristic (ROC) curve shows that Random Forest outperforms Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN) classifiers.
    Significance: The suggested method has many advantages over the time-series EEG analysis and machine learning classifiers used in earlier epilepsy detection studies. A novel image-based preprocessing pipeline using GASF for robust image synthesis and SIFT and ORB for feature extraction is presented here. The study found that the suggested method can accurately discriminate between normal and focal EEG signals, improving patient outcomes through early and accurate epilepsy diagnosis.
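The GASF transformation at the heart of this pipeline is compact enough to sketch: rescale the series into [-1, 1], map each value to an angle via arccos, and form the matrix of summed-angle cosines. The toy segment below is hypothetical, not Bern-Barcelona data.

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: rescale a 1-D series into
    [-1, 1], map each value to an angle phi = arccos(x), and build
    the image G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    x = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    # clamp guards against floating-point drift outside [-1, 1]
    phi = [math.acos(max(-1.0, min(1.0, v))) for v in x]
    return [[math.cos(pi_ + pj_) for pj_ in phi] for pi_ in phi]

# hypothetical EEG segment; each window becomes one square image
img = gasf([0.1, 0.5, -0.2, 0.9, 0.3])
```

Each EEG window becomes a 2-D image this way, after which image descriptors such as SIFT or ORB can be extracted and fed to a classifier.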

    EVALUATION OF SPECTRAL INDICES FOR DETECTION OF BURNED AREAS IN AN ENVIRONMENTAL PROTECTION AREA USING SPOT-5 IMAGES

    The Cerrado biome is of great importance to ecological biodiversity, but its deforestation has accelerated in recent decades. The Rio Preto-BA Environmental Preservation Area (APA), known for intense grain agriculture, continually suffers from forest fires and loss of native vegetation. Satellite remote sensing is proposed as an alternative for accurately locating and quantifying fire-affected surfaces and their impacts on the landscape. The recent free availability of SPOT-5 images has made it possible to detect burned areas with considerable spatial and spectral resolution. The objective of the present study was to determine the ideal spectral index for detecting burned area within the APA Rio Preto, using two SPOT-5 scenes and the application of a Support Vector Machine (SVM) model. Five spectral indices were computed over the detected burned areas: BAI, BAIM, NBR, NDVI, and EVI2. The ability of each index to discriminate burned areas was estimated by comparing the indices using a statistical separability index, SVM regression, and accuracy analysis. BAIM was identified as the index with the greatest potential for discriminating burnt areas, with maximum separability and classification accuracy above 90%, while the NDVI and EVI2 indices performed poorly. It is hoped that these results can be used to evaluate and prioritize monitoring areas, contribute to the implementation of a fire management plan in the APA, and support subsequent studies on fire dynamics in forest systems integrated with advanced computational technologies.
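The indices compared in the study have standard band-ratio formulations. The sketch below shows common definitions of NDVI, NBR, EVI2, and BAI; the reflectance values are hypothetical, and the exact coefficients used in the paper may differ.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    """Normalized Burn Ratio."""
    return (nir - swir) / (nir + swir)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def bai(red, nir):
    """Burned Area Index: inverse squared distance to a reference
    burned-pixel point in RED/NIR reflectance space; BAIM is the
    analogous index defined over the NIR/SWIR bands."""
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2)

# a recently burned pixel (low NIR) scores higher on BAI and lower
# on NDVI than healthy vegetation (hypothetical reflectances)
burned_bai = bai(red=0.09, nir=0.08)
veg_bai = bai(red=0.08, nir=0.35)
```

Distance-based indices like BAI/BAIM are designed around a reference "burned" point in spectral space, which is one reason they can separate burn scars better than greenness indices such as NDVI or EVI2.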

    Improving Multi-label Classification Performance on Imbalanced Datasets Through SMOTE Technique and Data Augmentation Using IndoBERT Model

    Sentiment and emotion analysis is a common classification task aimed at enhancing the benefit and comfort of consumers of a product. However, the data obtained often lack balance between the classes or aspects to be analyzed, a situation commonly known as an imbalanced dataset. Imbalanced datasets are a frequent challenge in machine learning tasks, particularly with text data. Our research tackles imbalanced datasets using two techniques, namely SMOTE and data augmentation. For the SMOTE technique, the text must first be given a numerical representation using TF-IDF. The classification model employed is the IndoBERT model. Both oversampling techniques address data imbalance by generating synthetic and new data, and the newly created dataset enhances the classification model's performance. With the augmentation technique, the classification model's performance improves by up to 20%, with accuracy reaching 78%, precision at 85%, recall at 82%, and an F1-score of 83%. The SMOTE technique achieves the best results of the two, raising the model's accuracy to 82%, with precision at 87%, recall at 85%, and an F1-score of 86%.
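SMOTE's core idea, interpolating between a minority sample and one of its nearest minority neighbours, fits in a few lines. The sketch below is a generic illustration operating on small dense vectors, not on the TF-IDF matrices or IndoBERT pipeline of the study; all the data are invented.

```python
import math
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: for each synthetic sample, pick a random
    minority point, choose one of its k nearest minority neighbours,
    and interpolate at a random position on the segment between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation position in [0, 1)
        synthetic.append([xi + gap * (ni - xi)
                          for xi, ni in zip(x, nb)])
    return synthetic

# hypothetical 2-D minority-class feature vectors
minority = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
new_samples = smote(minority, n_new=5)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupy, unlike naive duplication.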

    Literature review on the smart city resources analysis with big data methodologies

    This article provides a systematic literature review on applying different algorithms to municipal data processing, aiming to understand how the data were collected, stored, pre-processed, and analyzed, to compare various methods, and to select feasible solutions for further research. Several algorithms and data types are considered, finding that clustering, classification, correlation, anomaly detection, and prediction algorithms are frequently used. As expected, the data are of several types, ranging from sensor data to images. This remains a considerable challenge, although several algorithms work very well, such as Long Short-Term Memory (LSTM) for time-series prediction and classification.

    Complexity & wormholes in holography

    Holography has proven to be a highly successful approach in studying quantum gravity, where a non-gravitational quantum field theory is dual to a quantum gravity theory in one higher dimension. This doctoral thesis delves into two key aspects within the context of holography: complexity and wormholes. In Part I of the thesis, the focus is on holographic complexity. Beginning with a brief review of quantum complexity and its significance in holography, the subsequent two chapters proceed to explore this topic in detail. We study several proposals to quantify the costs of holographic path integrals. We then show how such costs can be optimized and match them to bulk complexity proposals already existing in the literature. In Part II of the thesis, we shift our attention to the study of spacetime wormholes in AdS/CFT. These are bulk spacetime geometries having two or more disconnected boundaries. In recent years, such wormholes have received a lot of attention as they lead to interesting implications and raise important puzzles. We study the construction of several simple examples of such wormholes in general dimensions in the presence of a bulk scalar field and explore their implications in the boundary theory.

    On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse

    The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people's lives. Therefore, end users would like to be protected against potential harm. One popular way to achieve this is to give end users access to algorithmic recourse, which offers those negatively affected by algorithmic decisions the opportunity to reverse unfavorable decisions, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end-user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model's latent space, allowing us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes applied to the recourses prescribed to end users are likely to invalidate the suggested recourse once it is noisily implemented in practice. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data-deletion requests. Third, the lack of a commonly used code base for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments.
In summary, our work contributes to a more reliable interaction between end users and machine-learned models by covering fundamental aspects of the recourse process, and it suggests new solutions for generating realistic and robust counterfactual explanations for algorithmic recourse.
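For intuition about what a recourse algorithm computes, consider the simplest case of a linear scorer, where the minimum-norm recourse has a closed form (an orthogonal projection onto the accepting half-space). This toy sketch is ours, not the thesis's latent-space method, and the weights and feature names are hypothetical.

```python
def linear_recourse(x, w, b, margin=1e-6):
    """Closed-form recourse for a linear scorer f(x) = w.x + b: the
    minimum-norm change that moves a rejected point just across the
    decision boundary (projection along w plus a small margin)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    if score >= 0:
        return list(x)  # already on the favourable side
    norm_sq = sum(wi * wi for wi in w)
    step = (-score + margin) / norm_sq
    return [xi + step * wi for xi, wi in zip(x, w)]

# hypothetical loan scorer over (income, repayment history)
w, b = [1.0, 2.0], -3.0
counterfactual = linear_recourse([0.0, 0.0], w, b)
```

The thesis's realistic-recourse methods perform the analogous search in a generative model's latent space instead of the raw input space, so that the resulting counterfactual stays in regions with data support.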