17 research outputs found

    Ethnicity and Biometric Uniqueness: Iris Pattern Individuality in a West African Database

    We conducted more than 1.3 million comparisons of iris patterns encoded from images collected at two Nigerian universities, which constitute the newly available African Human Iris (AFHIRIS) database. The purpose was to discover whether ethnic differences in iris structure and appearance, such as textural feature size, made a material difference for iris discrimination when contrasted with an all-Chinese image database or an American database in which only 1.53% of subjects were of African-American heritage. We measured a reduction in entropy for the AFHIRIS database due to the coarser iris features created by the thick anterior layer of melanocytes, and we found stochastic parameters that accurately model the relevant empirical distributions. Quantile-Quantile analysis revealed that a very small change in operational decision thresholds for the African database would compensate for the reduced entropy and generate the same performance in terms of resistance to False Matches. We conclude that, despite demographic differences, individuality can be robustly discerned by comparison of iris patterns in this West African population.
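    The paper itself reports statistical modelling of comparison scores rather than code, but the underlying matching step in Daugman-style iris recognition is a fractional Hamming distance between binary iris codes, compared against an operational decision threshold. The sketch below illustrates that idea only; the code length, the masks, and the 0.32 threshold are illustrative assumptions, not values from the AFHIRIS study.

    ```python
    import numpy as np

    def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes,
        counting only bits that are valid (unoccluded) in both masks."""
        valid = mask_a & mask_b
        n_valid = np.count_nonzero(valid)
        if n_valid == 0:
            return 1.0  # no comparable bits; treat as a non-match
        disagreements = np.count_nonzero((code_a ^ code_b) & valid)
        return disagreements / n_valid

    # Hypothetical example: 2048-bit codes, as in classic iris-recognition work.
    rng = np.random.default_rng(0)
    code1 = rng.integers(0, 2, 2048, dtype=np.uint8)
    code2 = rng.integers(0, 2, 2048, dtype=np.uint8)
    mask = np.ones(2048, dtype=np.uint8)

    hd = fractional_hamming_distance(code1, code2, mask, mask)
    THRESHOLD = 0.32  # illustrative decision threshold, not the paper's operating point
    print("match" if hd < THRESHOLD else "non-match", hd)
    ```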

    Hand geometry recognition: an approach for closed and separated fingers

    Hand geometry is a biometric trait that has attracted attention from several researchers because it is less intrusive and can be captured without contact with the acquisition device. Its applications range from forensic examination to basic authentication. However, restrictions on hand placement remain one of its challenges: users are typically instructed to keep their fingers either separated or closed during capture. This paper therefore presents an approach to hand geometry recognition based on finger measurements that accommodates both closed and separated fingers. The system first crops out the finger section of the hand and then resizes the cropped fingers. Twenty distances were extracted from each finger in both the separated- and closed-finger images. Manhattan distance and Euclidean distance were compared as measures for computing these features, and a support vector machine (SVM) was used for classification. The Euclidean distance gave the better result, with a false acceptance rate (FAR) of 0.6 and a false rejection rate (FRR) of 1.2.
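    As a rough illustration of the kind of pipeline described (pairwise finger distances computed with either the Euclidean or the Manhattan metric, then classified with an SVM), the following sketch uses synthetic landmark points; the number of landmarks, subjects, and all values are hypothetical, and the paper's own cropping, resizing, and 20-distance extraction steps are not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def finger_distance_features(landmarks, metric="euclidean"):
        """Turn a finger's 2-D landmark points into a vector of pairwise
        distances, using either the Euclidean or the Manhattan metric."""
        feats = []
        for i in range(len(landmarks)):
            for j in range(i + 1, len(landmarks)):
                d = landmarks[i] - landmarks[j]
                if metric == "euclidean":
                    feats.append(np.sqrt(np.sum(d ** 2)))
                else:  # manhattan
                    feats.append(np.sum(np.abs(d)))
        return np.array(feats)

    # Hypothetical data: 5 landmark points per finger -> 10 pairwise distances.
    rng = np.random.default_rng(1)
    X = np.array([finger_distance_features(rng.normal(size=(5, 2))) for _ in range(40)])
    y = rng.integers(0, 4, 40)  # 4 hypothetical subjects

    clf = SVC(kernel="linear").fit(X, y)
    print(clf.predict(X[:5]))
    ```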

    Survey dataset on open and distance learning students’ intention to use social media and emerging technologies for online facilitation

    Open and Distance Learning (ODL) students rely mainly on Information and Communication Technology (ICT) tools for online facilitation and other activities that support learning. For ODL students of Ladoke Akintola University of Technology (LAUTECH), Oyo State, Nigeria, the Moodle Learning Management System (LMS) has been the major medium for online facilitation over the past five years. This data article therefore presents a survey dataset administered to LAUTECH ODL students to assess their readiness to accept and use alternative social media platforms and emerging technologies for online facilitation. The article also includes the questionnaire instrument administered via Google Forms, the 900 responses received in spreadsheet format, the charts generated from the responses, the Statistical Package for the Social Sciences (SPSS) file, and the descriptive and reliability statistics for all the variables. The authors believe that the dataset will guide policymakers on the choice of social media and emerging technologies to be adopted as facilitation tools for ODL students. It will also reveal the challenges that could militate against the willingness to use these supplementary modes of learning from the students' perspective.

    Exploring the Use of Biometric Smart Cards for Voters’ Accreditation: A Case Study of Nigeria Electoral Process

    Voting remains an integral component of every democratic electoral process. It is an avenue for citizens to exercise their rights by electing those who will occupy vacant political offices. Enhancing voters' trust and confidence in electoral processes is a significant factor that encourages the active participation of citizens in elections; eligible voters tend to decline to participate when they feel their votes may not eventually count. Furthermore, the electoral processes that lead to the emergence of candidates must be adjudged free, fair, and credible to a high degree for the result to be widely acceptable. Unacceptable election results can lead to protests and outright cancellation of the election, resulting in a loss of the time and resources invested in it. To ensure that only registered voters cast their votes on election days, effective measures must be put in place to accredit voters. This article therefore explores the use of biometric smart cards for voters' verification and identification. With the Nigerian electoral process in view, the existing Nigerian voting procedure was reviewed, lapses were identified, and solutions based on the use of biometric smart cards were proffered. If adopted, biometric smart cards for voters' accreditation will enhance the country's electoral process by ensuring that only registered voters cast their votes. The approach presented could also reduce the number of electoral processes and personnel required on election days, thus reducing voting time and cost.

    Dataset to support the adoption of social media and emerging technologies for students’ continuous engagement

    Recent advancements in ICT have made it possible for teaching and learning to be conducted outside the four walls of a university. Furthermore, the COVID-19 pandemic, which crippled educational activities in all nations of the world, revealed the urgent need for academic institutions to embrace and integrate alternative modes of teaching and learning via social media platforms and emerging technologies into existing teaching tools. This article contains data collected from 850 face-to-face university students during the COVID-19 pandemic lockdown. An online Google form was used to elicit information from the students about their awareness of and intention to use these alternative modes of teaching and learning. The questions were structured using the Unified Theory of Acceptance and Use of Technology (UTAUT) model. The data article includes the questionnaire used to retrieve the data, the responses obtained in spreadsheet format, the charts generated from the responses received, the Statistical Package for the Social Sciences (SPSS) file, the descriptive statistics, and the reliability analysis computed for all the UTAUT variables. The dataset will enhance understanding of how face-to-face students use social media platforms and how these platforms could be used to engage students outside their classroom activities. It also reveals how familiar face-to-face university students are with these emerging teaching and learning technologies, as well as the challenges that could inhibit their adoption.

    Enhanced Dataset of Digitized Screen-film Mammograms of African Descent

    This dataset presents the enhanced version of the Digitized Screen-film Mammograms of African Descent. It contains mammographic images of 78 African cancer patients.

    A hybrid deep learning technique for spoofing website URL detection in real-time applications

    Website Uniform Resource Locator (URL) spoofing remains one of the ways of perpetrating phishing attacks in the twenty-first century. Hackers continue to employ URL spoofing to deceive naïve and unsuspecting consumers into releasing important personal details on malicious websites. Blacklists and rule-based filters that were once effective at reducing the risks and sophistication of phishing are no longer adequate, as over 1.5 million new phishing websites are created monthly. Research aimed at unveiling new techniques for detecting phishing websites has therefore sparked considerable interest in both academia and industry, with machine and deep learning techniques at the forefront. Among the deep learning techniques that have been employed, the Convolutional Neural Network (CNN) remains one of the most widely used, with high performance in feature learning. However, CNN has difficulty retaining contextual relationships in URL text, which makes it challenging to efficiently detect sophisticated malicious URLs in real-time applications. By contrast, the Long Short-Term Memory (LSTM) deep learning model has been successfully employed in complex real-time problems because of its ability to store inputs over long periods. This study experiments with a hybrid of the CNN and LSTM deep learning models for spoofing website URL detection, in order to exploit the combined strengths of the two approaches for more sophisticated spoofing URL detection. Two publicly available datasets (the UCL spoofing Website and PhishTank datasets) were used to evaluate the performance of the proposed hybrid model against other models in the literature. The hybrid CNN-LSTM model achieved accuracies of 98.9% and 96.8% when evaluated on the UCL and PhishTank datasets, respectively. The standalone CNN and LSTM achieved accuracies of 90.4% and 94.6% on the UCL dataset, and 89.3% and 92.6% on the PhishTank dataset, respectively. The results show that the hybrid CNN-LSTM model substantially outperformed the standalone CNN and LSTM models. The hybrid deep learning technique is therefore recommended for detecting spoofing website URLs, thereby reducing losses attributed to such attacks.
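    A minimal sketch of a hybrid CNN-LSTM URL classifier of the kind described is shown below, using character-level encoding followed by convolution, pooling, an LSTM layer, and a sigmoid output. The layer sizes, vocabulary, URL length cap, and example URLs are assumptions for illustration, not the architecture or data reported in the study.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    MAX_LEN, VOCAB = 200, 128  # assumed URL length cap and character vocabulary size

    def encode_url(url, max_len=MAX_LEN):
        """Character-level encoding: map each character to its code point (capped),
        then pad or truncate to a fixed length."""
        ids = [min(ord(c), VOCAB - 1) for c in url[:max_len]]
        return np.array(ids + [0] * (max_len - len(ids)))

    # CNN layers learn local n-gram-like URL features, the LSTM models longer-range
    # ordering in the character sequence, and a sigmoid head scores phishing risk.
    model = keras.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB, 32),
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Tiny illustrative batch with made-up labels: 1 = phishing/spoofed, 0 = benign.
    urls = ["http://paypa1-secure-login.example.com/verify", "https://www.wikipedia.org"]
    X = np.stack([encode_url(u) for u in urls])
    y = np.array([1, 0])
    model.fit(X, y, epochs=1, verbose=0)
    print(model.predict(X, verbose=0))
    ```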

    Optimizing Android Malware Detection Via Ensemble Learning

    The Android operating system has become very popular, with the highest market share among all mobile operating systems, due to its open-source nature and user friendliness. This has brought about an uncontrolled rise in malicious applications targeting the Android platform. Emerging strains of Android malware employ highly sophisticated detection- and analysis-avoidance techniques, such that traditional signature-based detection methods have become less able to detect new and unknown malware. Alternative approaches, such as machine learning techniques, have taken the lead for timely zero-day anomaly detection. This study aimed to develop an optimized Android malware detection model using an ensemble learning technique. Random Forest, Support Vector Machine, and k-Nearest Neighbours were used to develop three distinct base models, and their predictions were combined using a majority-vote function to produce an ensemble model. A reverse engineering procedure was employed to extract static features from a large repository of malware samples and benign applications, and the WEKA 3.8.2 data mining suite was used to perform all the learning experiments. The results showed that Random Forest had a true positive rate of 97.9% and a false positive rate of 1.9%, and correctly classified 98% of instances, making it a strong base model. The ensemble model had a true positive rate of 98.1%, a false positive rate of 1.8%, and correctly classified 98.16% of instances. The findings show that, although the base learners produced good detection results, the ensemble learner produced a better-optimized detection model than the base learners.
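    A minimal sketch of the majority-vote ensemble described (Random Forest, SVM, and k-Nearest Neighbours combined by hard voting) is given below using scikit-learn rather than WEKA; the synthetic feature matrix is a stand-in for the static features the study reverse-engineered from Android applications.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Stand-in data for illustration; the study used static features (e.g. permissions
    # and API calls) extracted from real malware and benign apps, not synthetic vectors.
    X, y = make_classification(n_samples=600, n_features=50, n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    base_models = [
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ]

    # Hard voting = majority vote over the base models' predicted labels.
    ensemble = VotingClassifier(estimators=base_models, voting="hard")

    for name, model in base_models + [("ensemble", ensemble)]:
        model.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, model.predict(X_te)))
    ```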


    Implementation of a Framework for Healthy and Diabetic Retinopathy Retinal Image Recognition

    The feature extraction stage remains a major component of every biometric recognition system. In most instances, the eventual accuracy of a recognition system depends on the features extracted from the biometric trait and on the feature extraction technique adopted. The widely adopted approach trains a retina recognition system on features extracted from healthy retinal images. However, the literature has shown that certain eye diseases, such as diabetic retinopathy (DR), hypertensive retinopathy, glaucoma, and cataract, can alter the recognition accuracy of a retina recognition system. This implies that a robust retina recognition system should be designed to accommodate both healthy and diseased retinal images. A framework with two different approaches to retinal image recognition is presented in this study. The first approach employs structural features for healthy retinal image recognition, while the second employs vascular and lesion-based features for DR retinal image recognition. Each input retinal image is first examined for the presence of DR symptoms before the appropriate feature extraction technique is applied. Recognition rates of 100% and 97.23% were achieved for the healthy and DR retinal images, respectively, along with a false acceptance rate of 0.0444 and a false rejection rate of 0.0133.
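    The two-branch routing described above can be summarised in the following sketch; the DR-symptom check and both feature extractors are hypothetical placeholders, since the paper's actual image-processing steps are not specified in the abstract.

    ```python
    import numpy as np

    # Placeholder components: the actual DR-symptom detector and feature extractors
    # in the paper are image-processing pipelines; here they are stubbed out to show
    # only the routing logic of the two-branch framework.

    def has_dr_symptoms(retina_image):
        """Hypothetical check for diabetic-retinopathy signs (e.g. lesions, exudates)."""
        return retina_image.mean() > 0.5  # stand-in rule, not the paper's detector

    def structural_features(retina_image):
        """Stand-in for the structural features used for healthy retinas."""
        return np.histogram(retina_image, bins=16, range=(0, 1))[0]

    def vascular_lesion_features(retina_image):
        """Stand-in for the vascular and lesion-based features used for DR retinas."""
        return np.histogram(np.gradient(retina_image)[0], bins=16)[0]

    def extract_features(retina_image):
        # Route each input image to the branch that matches its condition.
        if has_dr_symptoms(retina_image):
            return "dr", vascular_lesion_features(retina_image)
        return "healthy", structural_features(retina_image)

    branch, feats = extract_features(np.random.default_rng(2).random((64, 64)))
    print(branch, feats.shape)
    ```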