
    A Novel Fingerprint Recognition and Verification System Using Swish Activation Based Gated Recurrent Unit and Optimal Feature Selection Mechanism

    The use of fingerprints in biometric systems is a rapidly expanding and pervasive field. Progress in fingerprint identification as an applied computer technology is directly linked to the latest developments in computer science; in particular, artificial intelligence, and deep-learning-based imaging technology, has enabled a new class of fingerprint identification algorithms. This paper proposes a novel fingerprint recognition and verification system using a Swish-activation-based gated recurrent unit (SWAGRU) with an efficient feature selection mechanism. The system comprises four phases: preprocessing, feature extraction, feature selection, and fingerprint recognition. First, fingerprint samples are collected from the publicly available FVC2004 database. Gaussian filtering is then applied to suppress noise. Next, feature extraction is performed with a Self-Attention-Based Visual Geometry Group-16 (SAVGG16) network, from which the optimal features are selected using Cuckoo Search Optimization (CSO). Finally, fingerprint recognition and verification are carried out by the SWAGRU. Experimental results show that the system outperforms existing methods in recognition performance.
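    The abstract gives no implementation details; as a rough illustration of the core idea only, the following is a minimal PyTorch sketch of a GRU cell in which the candidate-state tanh is replaced by the Swish activation. The class name SwishGRUCell, the choice of which nonlinearity is swapped, and the layer sizes are assumptions for illustration, not the authors' specification.

    import torch
    import torch.nn as nn

    def swish(x):
        # Swish activation: x * sigmoid(x)
        return x * torch.sigmoid(x)

    class SwishGRUCell(nn.Module):
        """Hypothetical GRU cell with Swish in place of the candidate-state tanh."""
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.x2h = nn.Linear(input_size, 3 * hidden_size)
            self.h2h = nn.Linear(hidden_size, 3 * hidden_size)

        def forward(self, x, h):
            gx = self.x2h(x).chunk(3, dim=-1)
            gh = self.h2h(h).chunk(3, dim=-1)
            r = torch.sigmoid(gx[0] + gh[0])      # reset gate
            z = torch.sigmoid(gx[1] + gh[1])      # update gate
            n = swish(gx[2] + r * gh[2])          # candidate state, Swish instead of tanh
            return (1 - z) * n + z * h

    # Toy usage: a batch of 8 feature vectors (e.g., features from a CNN backbone).
    cell = SwishGRUCell(input_size=64, hidden_size=32)
    x = torch.randn(8, 64)
    h = torch.zeros(8, 32)
    print(cell(x, h).shape)   # torch.Size([8, 32])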

    Enhanced Fuzzy Feature Match Algorithm for Mehndi Fingerprints

    The performance of a biometric system is degraded by distortions that occur during fingerprint image acquisition. This paper focuses on the nonlinear distortions caused by Mehndi (henna) drawn on the palm or fingers. The proposed approach detects and rectifies such distortions using a feedback paradigm. If an image is of good quality, there is no need to renovate its features, so the quality of the whole image is first checked by generating an exponential similarity distribution, and the quality of each local region is checked using a ridge continuity map and a ridge clarity map. These checks determine whether feedback is needed. The desired features, such as ridge structure, minutiae points, and orientation, are then renovated using the feedback paradigm, where feedback is taken from the top K matched template fingerprints registered in the database. Fuzzy logic handles uncertainties and imperfections in the images. For matching, we propose the Enhanced Fuzzy Feature Match (EFFM), which estimates a triangular feature set consisting of the distances between minutiae, the orientation angle of each minutia, the angles between the directions of the minutiae points, the angles between the interior bisectors of the triangle and the minutiae directions, and the minutiae type. The additional minutiae-type parameter helps to improve the accuracy of the matching algorithm. Experiments were conducted on 300 Mehndi fingerprints acquired with a SecuGen fingerprint scanner, and the results support the efficiency and reliability of EFFM for matching distorted fingerprints.
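    As a hedged illustration of the kind of triangular feature set described above (not the authors' exact formulation), the sketch below computes side lengths, orientation angles, pairwise direction differences, and types for a triple of minutiae. The tuple layout (x, y, theta, type) and the function name are assumptions for illustration only.

    import math
    from itertools import combinations

    def triangle_features(m1, m2, m3):
        """Illustrative triangular feature set for three minutiae.

        Each minutia is a tuple (x, y, theta, mtype), where theta is the ridge
        orientation in radians and mtype is e.g. 'ending' or 'bifurcation'.
        """
        minutiae = (m1, m2, m3)
        # Pairwise Euclidean distances between the minutiae (triangle sides).
        sides = [math.dist(a[:2], b[:2]) for a, b in combinations(minutiae, 2)]
        # Orientation angle of each minutia.
        orientations = [m[2] for m in minutiae]
        # Pairwise differences between minutiae directions, wrapped to [0, pi).
        angle_diffs = [abs(a[2] - b[2]) % math.pi for a, b in combinations(minutiae, 2)]
        # Minutiae types: the extra parameter the paper adds to the feature set.
        types = [m[3] for m in minutiae]
        return sides, orientations, angle_diffs, types

    # Example with three hypothetical minutiae.
    print(triangle_features((10, 12, 0.3, 'ending'),
                            (40, 55, 1.1, 'bifurcation'),
                            (70, 20, 2.0, 'ending')))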

    Clustering Arabic Tweets for Sentiment Analysis

    The focus of this study is to evaluate the impact of linguistic preprocessing and similarity functions on the clustering of Arabic Twitter tweets. The experiments apply an optimized version of the standard K-Means algorithm to assign tweets to positive and negative categories. The results show that root-based stemming has a significant advantage over light stemming in all settings. The Averaged Kullback-Leibler Divergence similarity function clearly outperforms the Cosine, Pearson Correlation, Jaccard Coefficient, and Euclidean functions. The combination of Averaged Kullback-Leibler Divergence and root-based stemming achieved the highest purity of 0.764, while the second-best purity was 0.719. These results are notable because they run contrary to findings for normal-sized documents, where, in many information retrieval applications, light stemming performs better than root-based stemming and the Cosine function is commonly used.
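    The abstract does not define the similarity function precisely; a common formulation of the averaged Kullback-Leibler divergence between two term-probability distributions is sketched below, with simple smoothing so the logarithms stay finite. The smoothing constant and the function name are illustrative assumptions, not taken from the paper.

    import numpy as np

    def avg_kl_divergence(p, q, eps=1e-10):
        """Averaged Kullback-Leibler divergence between two term distributions.

        p and q are term-frequency vectors over the same vocabulary; they are
        normalised to probabilities and smoothed to avoid division by zero.
        Lower values mean the two tweets are more similar.
        """
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p /= p.sum()
        q /= q.sum()
        kl_pq = np.sum(p * np.log(p / q))
        kl_qp = np.sum(q * np.log(q / p))
        return 0.5 * (kl_pq + kl_qp)

    # Example: two toy term-frequency vectors over a five-word vocabulary.
    print(avg_kl_divergence([3, 0, 1, 2, 0], [2, 1, 0, 2, 1]))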


    Distributed incremental fingerprint identification with reduced database penetration rate using a hierarchical classification based on feature fusion and selection

    Fingerprint recognition has been a hot research topic over the last few decades, with many applications and ever-growing populations to identify. The need for flexible, fast identification systems is therefore evident. In this context, fingerprint classification is commonly used to improve the speed of identification. This paper proposes a complete identification system with a hierarchical classification framework that fuses the information of multiple feature extractors. Feature selection is applied to improve the classification accuracy. Finally, the distributed identification is carried out with an incremental search, exploring the classes according to the probability order given by the classifier. A single parameter tunes the trade-off between identification time and accuracy. The proposal is evaluated over two NIST databases and a large synthetic database, yielding penetration rates close to the optimal values that can be reached with classification, and leading to low identification times with little or no accuracy loss.
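    As a hedged sketch of the incremental-search idea only (not the authors' implementation), the snippet below explores fingerprint classes in decreasing order of classifier probability and stops as soon as a match score exceeds a threshold, reporting the resulting database penetration rate. The function names, the threshold, and the data layout are assumptions.

    def incremental_identify(query, class_probs, gallery_by_class, match_score, threshold=0.8):
        """Explore classes in decreasing probability order and stop early on a match.

        class_probs: dict mapping class label -> classifier probability for the query.
        gallery_by_class: dict mapping class label -> list of (subject_id, template).
        match_score: function (query, template) -> similarity in [0, 1].
        Returns (subject_id, penetration_rate), or (None, 1.0) if nothing matches.
        """
        total = sum(len(v) for v in gallery_by_class.values())
        searched = 0
        for label in sorted(class_probs, key=class_probs.get, reverse=True):
            for subject_id, template in gallery_by_class.get(label, []):
                searched += 1
                if match_score(query, template) >= threshold:
                    # Penetration rate: fraction of the database visited before the hit.
                    return subject_id, searched / total
        return None, 1.0

    # Toy example with a two-class gallery and a dummy similarity function.
    gallery = {"whorl": [("s1", "tpl1"), ("s2", "tpl2")], "loop": [("s3", "tpl3")]}
    probs = {"loop": 0.7, "whorl": 0.3}
    match = lambda q, t: 1.0 if t == "tpl3" else 0.0
    print(incremental_identify("query", probs, gallery, match))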

    Selected Computing Research Papers Volume 1 June 2012

    An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku), 1
    A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown), 7
    An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns), 13
    An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine), 19
    A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains), 29
    An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh), 39
    An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock), 45
    An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh), 51
    A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig), 5

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. Studying cardiac function requires a variety of signals to be collected; in practice, heart monitoring methods such as electrocardiography (ECG) and photoplethysmography (PPG) are commonly used. Although several methods exist for monitoring fetal and maternal health, research is underway to enhance their mobility, accuracy, automation, and noise resistance so that they can be used extensively, even at home. Artificial Intelligence (AI) can help design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.

    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) and to extract the desired signal from a noisy recording with negative SNR (i.e., where the noise power is greater than the signal power). ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process; the noise typically arises from power-line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight distortion of the acquired ECG waveform can impair the understanding of the patient's heart condition and affect treatment. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, including the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. The goal of this step is therefore to develop a robust algorithm that can estimate the noise with a BSS method, even when the SNR is negative, and remove it with an adaptive filter.

    The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods extracted the fetal ECG (FECG) from the maternal abdominal ECG (MECG), but they must be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantages of the proposed DL method over state-of-the-art approaches such as adaptive filters or blind source separation are that it generalizes well to unseen subjects, does not need calibration, is not sensitive to the heart rate variability of mother and fetus, and can handle low-SNR conditions.

    Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG for blood pressure estimation is more convenient, since it requires a single sensor on the finger, whose acquisition is more resilient to movement-related errors.

    The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated at the beat level. We therefore use active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy: a model is first trained to detect ECG anomalies in adults and is then adapted to detect anomalies in FECG, selecting only the most influential samples from the training set so that training requires the least effort.

    Because of physician shortages and rural geography, pregnant women's access to prenatal care might be improved through remote monitoring, especially where access to care is limited; increased compliance with prenatal treatment and linked care among providers are two possible benefits. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. The last objective is therefore to design a compression algorithm that compresses signals such as ECG with a higher ratio than the state of the art and decompresses them quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes while a high compression ratio is maintained.

    In summary, the components of an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of the tasks above: PPG and ECG recorded from the mother are denoised using the deconvolution strategy; compression is employed to transmit the signals; the trained CycleGAN model extracts the FECG from the MECG; the model trained with active transfer learning detects anomalies in both MECG and FECG; and, simultaneously, maternal blood pressure is estimated from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as the partogram.
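    The abstract gives no implementation details for the compression step; as a hedged illustration of the time-domain B-spline idea, the sketch below fits a smoothing spline to an ECG-like trace with SciPy and stores only the knots and coefficients, reconstructing the waveform on demand. The smoothing heuristic, the synthetic signal, and the function names are assumptions for illustration, not the thesis's algorithm.

    import numpy as np
    from scipy.interpolate import splrep, splev

    def bspline_compress(t, signal, s=None):
        """Fit a smoothing cubic B-spline and keep only knots + coefficients.

        A larger smoothing value s yields fewer knots (higher compression) at
        the cost of fidelity. If s is None, a rough noise-based default is used.
        """
        if s is None:
            # Rule of thumb: s ~ n * sigma^2, with sigma estimated from first differences.
            s = len(signal) * np.var(np.diff(signal)) / 2
        return splrep(t, signal, s=s)   # (knots, coefficients, degree)

    def bspline_decompress(tck, t):
        """Evaluate the stored spline back onto a time grid."""
        return splev(t, tck)

    # Example with a synthetic ECG-like trace (two sinusoids plus noise).
    fs = 250
    t = np.arange(0, 4, 1.0 / fs)
    x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t) + 0.05 * np.random.randn(t.size)

    tck = bspline_compress(t, x)
    x_hat = bspline_decompress(tck, t)
    n_stored = len(tck[0]) + len(tck[1])   # values actually kept: knots + coefficients
    print("stored values:", n_stored, "of", x.size)
    print("reconstruction RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))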

    Development of quantitative structure property relationships to support non-target LC-HRMS screening

    Over the last decade, a large number of emerging contaminants have been detected and identified in surface and waste waters, raising concern for the aquatic environment because of their pseudo-persistence. As described in Chapters 1 and 2, liquid chromatography coupled to high resolution mass spectrometry (LC-HRMS) is an efficient tool for their screening. Screening the samples in parallel by hydrophilic interaction liquid chromatography (HILIC) and reversed-phase liquid chromatography (RPLC) helps achieve full identification of suspect and unknown compounds with a wide range of physicochemical properties. However, to confirm the identity of the most relevant suspect or unknown compounds, their chemical properties, such as retention time behavior, MSn fragmentation, and ionization mode, must be investigated. Chapter 3 of this thesis discusses the development of a comprehensive workflow to study the retention time behavior of large groups of compounds belonging to emerging contaminants. A dataset of more than 2500 compounds was used to build retention time prediction models for both RP- and HILIC-LC-HRMS, with retention times acquired in both electrospray ionization modes (+/-ESI). These in silico approaches were then applied to the identification of 10 new transformation products of tramadol, furosemide, and niflumic acid formed under ozonation treatment. Chapter 4 discusses the development of a first retention time index (RTI) system for LC-HRMS, which enables the comparison of results obtained with different LC-HRMS methods; practical applications of this RTI system to suspect and non-target screening in collaborative trials are also presented. Chapter 5 describes the development of in silico toxicity models to estimate the acute toxicity of emerging pollutants in the aquatic environment; these models link suspect/non-target screening results to a tentative environmental risk by predicting the toxicity of newly, tentatively identified compounds for which no experimental toxicity data are yet available. Chapter 6 introduces an automated and systematic way to perform suspect and non-target screening on LC-HRMS data, reducing data-analysis time and enabling the routine application of non-target screening for regulatory or monitoring purposes.
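    The abstract does not specify the modelling details; as a hedged sketch of the general quantitative structure-property relationship (QSPR) idea for retention time prediction, the example below trains a gradient-boosted regressor on precomputed molecular descriptors and reports a cross-validated error. The descriptor names, file layout, and model choice are illustrative assumptions, not the workflow actually used in the thesis.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical table: one row per compound, with precomputed molecular
    # descriptors and the measured retention time on a given RPLC method.
    df = pd.read_csv("compounds_with_descriptors.csv")    # illustrative file name
    descriptor_cols = ["logP", "mol_weight", "tpsa", "h_bond_donors", "h_bond_acceptors"]
    X = df[descriptor_cols].to_numpy()
    y = df["retention_time_min"].to_numpy()

    model = GradientBoostingRegressor(random_state=0)
    # 5-fold cross-validated mean absolute error, in minutes.
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print("CV MAE (min): %.2f +/- %.2f" % (mae.mean(), mae.std()))

    # Fit on all data and predict the retention time of a new suspect compound.
    model.fit(X, y)
    new_compound = np.array([[2.3, 310.4, 75.2, 1, 4]])   # hypothetical descriptor values
    print("predicted retention time (min):", model.predict(new_compound)[0])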

    Dominant points detection for shape analysis

    The growing interest in multimedia in recent years and the large amount of information exchanged across networks have pushed various fields of research towards methods for automatic identification. One of the main objectives is to associate information content with images, using techniques that identify their composing objects. Among image descriptors, contours are very important because most of the information can be extracted from them, and contour analysis also offers lower computational complexity. Contour analysis can be restricted to the study of a few salient points with high curvature, from which it is possible to reconstruct the original contour. This thesis focuses on the polygonal approximation of closed digital curves. After an overview of the most common shape descriptors, distinguishing between simple descriptors, external methods, which focus on the analysis of the boundary points of objects, and internal methods, which also use the pixels inside the object, it describes the major methods for the extraction of dominant points studied so far and the metrics typically used to evaluate the goodness of the resulting polygonal approximation. Three novel approaches to the problem are then discussed in detail: a fast iterative method (DPIL), more suitable for real-time processing, and two metaheuristic methods (GAPA, ACOPA) based on genetic algorithms and Ant Colony Optimization (ACO), which are computationally more complex but more precise. These techniques are compared with the other main methods in the literature, to assess their performance in terms of computational complexity and polygonal approximation error, and with each other, to evaluate their robustness to affine transformations and noise. Two new techniques of shape matching, i.e. the identification of objects belonging to the same class in an image database, are then described. The first is based on shape alignment and the second on a correspondence computed by ACO; both highlight the excellent results, in terms of computational time and recognition accuracy, obtained through the use of dominant points. In the first matching algorithm the results are compared with a selection of dominant points generated by a human operator, while in the second the dominant points are used instead of the constant sampling of the outline typically used in this kind of approach.
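    The thesis's own methods (DPIL, GAPA, ACOPA) are not detailed in the abstract; as a generic illustration of polygonal approximation of a digital curve, the sketch below implements the classical Ramer-Douglas-Peucker split scheme, which keeps the point of maximum deviation as a dominant point and recurses until the approximation error is below a tolerance. This is a standard textbook algorithm, not the thesis's method.

    import math

    def point_line_distance(p, a, b):
        """Perpendicular distance of point p from the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        norm = math.hypot(dx, dy)
        if norm == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * (px - ax) - dx * (py - ay)) / norm

    def rdp(points, tol):
        """Ramer-Douglas-Peucker polygonal approximation of an open curve.

        Keeps the endpoints, recursively retains the farthest point from the
        current chord while its distance exceeds tol, and discards the rest.
        """
        if len(points) < 3:
            return list(points)
        dmax, idx = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, idx = d, i
        if dmax <= tol:
            return [points[0], points[-1]]
        left = rdp(points[: idx + 1], tol)
        right = rdp(points[idx:], tol)
        return left[:-1] + right      # avoid duplicating the split point

    # Example: approximate a quarter circle of radius 100 with a few dominant points.
    curve = [(math.cos(a) * 100, math.sin(a) * 100)
             for a in (i * math.pi / 200 for i in range(101))]
    print(len(rdp(curve, tol=2.0)), "dominant points kept out of", len(curve))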