
    Heuristic Evaluation and Usability Testing of G-MoMo Applications

    Financial technology (FinTech) has swiftly revolutionized mobile money as one of the ways of accessing financial services in developing countries. Numerous mobile money applications have been developed to access mobile money services, but they are hindered by severe authentication security challenges, which prompted the researchers to design a secure multi-factor authentication (MFA) algorithm for mobile money applications. Three prototypes of native mobile money applications (the G-MoMo applications) were developed to confirm that the algorithm provides high security and is feasible. This study therefore aimed to evaluate the usability of the G-MoMo applications using heuristic evaluation and usability testing, to identify potential usability issues and provide recommendations for improvement. The heuristic evaluation was carried out by five experts who applied the 10 principles proposed by Jakob Nielsen, together with a five-point severity rating scale, to identify usability problems. The usability testing was conducted with forty participants, selected using a purposive sampling method, who validated the usability of the G-MoMo applications by performing tasks and filling out a post-test questionnaire. The collected data were analyzed in RStudio. Sixty-three usability issues were identified during the heuristic evaluation: 33 minor and 30 major. The most violated heuristics were “help and documentation” and “user control and freedom”, while the least violated were “aesthetic and minimalist design” and “visibility of system status”. The usability testing findings revealed that the G-MoMo applications performed well in learnability, effectiveness, efficiency, memorability, and errors, and provided user satisfaction, ease of use, aesthetics, usefulness, integration, and understandability.
Therefore, it was highly recommended that the developers of the G-MoMo applications fix the identified usability problems to make the applications more reliable and increase overall user satisfaction.
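As an illustration of how such severity ratings might be aggregated, here is a minimal Python sketch; the heuristic names come from the abstract, but the individual ratings, the 0–4 scale, and the minor/major cut-off are hypothetical assumptions, not the study's data.

```python
from statistics import mean

# Hypothetical ratings (0-4 severity scale) assigned by the five
# evaluators to issues under three of Nielsen's heuristics.
ratings = {
    "help and documentation": [4, 3, 4, 3, 4],
    "user control and freedom": [3, 4, 3, 3, 3],
    "aesthetic and minimalist design": [1, 2, 1, 1, 2],
}

def classify(issue_ratings):
    # Average the evaluators' scores; treat a mean of 3+ as major (assumed cut-off).
    return "major" if mean(issue_ratings) >= 3 else "minor"

for heuristic, scores in ratings.items():
    print(f"{heuristic}: {classify(scores)} (mean severity {mean(scores):.1f})")
```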

    Usability and Trust in Information Systems

    The need for people to protect themselves and their assets is as old as humankind. People's physical safety and their possessions have always been at risk from deliberate attack or accidental damage. The advance of information technology means that many individuals, as well as corporations, have an additional range of physical (equipment) and electronic (data) assets that are at risk. Furthermore, the increased number and types of interactions in cyberspace have enabled new forms of attack on people and their possessions. Consider the grooming of minors in chat-rooms, or Nigerian email cons: minors were targeted by paedophiles before the creation of chat-rooms, and Nigerian criminals sent the same letters by physical mail or fax before there was email. But the technology has decreased the cost of many types of attack, or the degree of risk for the attackers. At the same time, cyberspace is still new to many people, which means they do not understand risks, or recognise the signs of an attack, as readily as they might in the physical world. The IT industry has developed a plethora of security mechanisms that could be used to mitigate risks or make attacks significantly more difficult. Currently, many people are either not aware of these mechanisms, or are unable or unwilling to use them. Security experts have taken to portraying people as "the weakest link" in their efforts to deploy effective security [e.g. Schneier, 2000]. However, recent research has revealed that at least some of the problem may be that security mechanisms are hard to use, or are simply ineffective. This review summarises current research on the usability of security mechanisms, and discusses options for increasing their usability and effectiveness.

    Fingerprint recognition based on shark smell optimization and genetic algorithm

    Fingerprint recognition is a dominant form of biometrics due to its distinctiveness. The study aims to extract and select the best features of fingerprint images, and to evaluate the strength of the Shark Smell Optimization (SSO) and Genetic Algorithm (GA) in the search space with a chosen set of metrics. The proposed model consists of seven phases, namely: enrollment, image preprocessing using a weighted median filter, feature extraction using SSO, weight generation using the Chebyshev polynomial of the first kind (CPFK), feature selection using GA, creation of a user database, and feature matching using Euclidean distance (ED). The effectiveness and performance of the proposed model's algorithms were evaluated on 150 real fingerprint images collected from university students with a ZKTeco scanner in Sulaimani city, Iraq. The system's performance was measured by three renowned error-rate metrics, namely, False Acceptance Rate (FAR), False Rejection Rate (FRR), and Correct Verification Rate (CVR). The experimental outcome showed that the proposed fingerprint recognition model was exceedingly accurate, with low FAR and FRR and a high CVR of 0.00, 0.00666, and 99.334%, respectively. This finding would be useful for improving fingerprint-based biometric authentication. It could also be applied to other research topics such as fraud detection, e-payment, and the authentication of other real-life applications.
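The three error-rate metrics can be computed from genuine and impostor comparison scores. Below is a minimal Python sketch assuming the common threshold-based definitions of FAR and FRR, and taking CVR = 100% minus the two error percentages, which is consistent with the figures reported above; the feature vectors and threshold are hypothetical.

```python
import math

def euclidean(a, b):
    # Distance between two feature vectors (the matching step of the model).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def error_rates(genuine_pairs, impostor_pairs, threshold):
    """FAR: fraction of impostor pairs wrongly accepted.
    FRR: fraction of genuine pairs wrongly rejected.
    CVR: 100% minus both error rates expressed as percentages."""
    far = sum(euclidean(a, b) <= threshold for a, b in impostor_pairs) / len(impostor_pairs)
    frr = sum(euclidean(a, b) > threshold for a, b in genuine_pairs) / len(genuine_pairs)
    cvr = 100.0 * (1.0 - far - frr)
    return far, frr, cvr

# Hypothetical feature vectors: one genuine pair, one impostor pair.
far, frr, cvr = error_rates([([0, 0], [0, 1])], [([0, 0], [5, 5])], threshold=2.0)
print(far, frr, cvr)  # low FAR/FRR yields a CVR close to 100%
```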

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide the readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches in their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Facilitating sensor interoperability and incorporating quality in fingerprint matching systems

    This thesis addresses the issues of sensor interoperability and quality in the context of fingerprints and makes a three-fold contribution. The first contribution is a method to facilitate fingerprint sensor interoperability that involves the comparison of fingerprint images originating from multiple sensors. The proposed technique models the relationship between images acquired by two different sensors using a Thin Plate Spline (TPS) function. Such a calibration model is observed to enhance the inter-sensor matching performance on the MSU dataset containing images from optical and capacitive sensors. Experiments indicate that the proposed calibration scheme improves the inter-sensor Genuine Accept Rate (GAR) by 35% to 40% at a False Accept Rate (FAR) of 0.01%. The second contribution is a technique to incorporate local image quality information in the fingerprint matching process. Experiments on the FVC 2002 and 2004 databases suggest the potential of this scheme to improve the matching performance of a generic fingerprint recognition system. The final contribution of this thesis is a method for classifying fingerprint images into three categories: good, dry, and smudged. Such a categorization would assist in invoking different image processing or matching schemes based on the nature of the input fingerprint image. A classification rate of 97.45% is obtained on a subset of the FVC 2004 DB1 database.
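A thin-plate-spline mapping of the kind used for this sensor calibration can be fitted from corresponding control points. Below is a minimal, generic NumPy sketch, not the thesis's calibration code; the control points in the usage are hypothetical.

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r == 0, 0.0, r**2 * np.log(r))

def fit_tps(src, dst):
    """Fit a 2D -> 2D thin-plate-spline mapping that sends each control
    point observed by one sensor (src) to its correspondence in the
    other sensor's image (dst). Solves the standard bordered system."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])   # affine terms [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coeffs = np.linalg.solve(A, b)
    return coeffs[:n], coeffs[n:]           # RBF weights, affine part

def apply_tps(src, weights, affine, pts):
    # Warp arbitrary points with the fitted spline.
    K = tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    return K @ weights + np.hstack([np.ones((len(pts), 1)), pts]) @ affine
```

A TPS interpolates its control points exactly, so mapping `src` through the fitted spline reproduces `dst`; SciPy's `scipy.interpolate.RBFInterpolator` with `kernel='thin_plate_spline'` provides an equivalent ready-made fit.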

    Selected Computing Research Papers Volume 1 June 2012

    An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku)
    A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown)
    An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns)
    An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine)
    A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains)
    An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh)
    An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock)
    An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh)
    A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig)

    Lightweight Three-Factor Authentication and Key Agreement Protocol for Internet-Integrated Wireless Sensor Networks

    Wireless sensor networks (WSNs) will be integrated into the future Internet as one of the components of the Internet of Things, and will become globally addressable by any entity connected to the Internet. Despite the great potential of this integration, it also brings new threats, such as the exposure of sensor nodes to attacks originating from the Internet. In this context, lightweight authentication and key agreement protocols must be in place to enable end-to-end secure communication. Recently, Amin et al. proposed a three-factor mutual authentication protocol for WSNs. However, we identified several flaws in their protocol. We found that their protocol suffers from a smart card loss attack, in which the user identity and password can be guessed using offline brute-force techniques. Moreover, the protocol suffers from a known session-specific temporary information attack, which leads to the disclosure of session keys in other sessions. Furthermore, the protocol is vulnerable to a tracking attack and fails to fulfill user untraceability. To address these deficiencies, we present a lightweight and secure user authentication protocol based on the Rabin cryptosystem, which has the characteristic of computational asymmetry. We conduct a formal verification of our proposed protocol using ProVerif in order to demonstrate that our scheme fulfills the required security properties. We also present a comprehensive heuristic security analysis to show that our protocol is secure against all the possible attacks and provides the desired security features. The results we obtained show that our new protocol is a secure and lightweight solution for authentication and key agreement for Internet-integrated WSNs.
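The computational asymmetry of the Rabin cryptosystem can be illustrated with a toy example: encryption is a single modular squaring, cheap enough for a constrained sensor node, while decryption requires square roots modulo the secret primes, which only the key holder can compute. The primes below are tiny for readability only; real deployments use large primes.

```python
# Toy Rabin cryptosystem with small primes p and q, both congruent to
# 3 (mod 4), which makes the modular square roots easy to compute.
p, q = 7, 11
n = p * q  # public key; p and q stay secret

def ext_gcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g.
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def encrypt(m):
    # One modular squaring: the cheap operation suited to a sensor node.
    return (m * m) % n

def decrypt(c):
    """Return the four square roots of c modulo n via the CRT; the
    protocol layer must disambiguate which root is the message."""
    mp = pow(c, (p + 1) // 4, p)  # sqrt mod p (valid because p % 4 == 3)
    mq = pow(c, (q + 1) // 4, q)  # sqrt mod q
    _, yp, yq = ext_gcd(p, q)     # CRT coefficients: yp*p + yq*q == 1
    r1 = (yp * p * mq + yq * q * mp) % n
    r2 = (yp * p * mq - yq * q * mp) % n
    return {r1, n - r1, r2, n - r2}
```

For example, `encrypt(20)` gives 15, and `decrypt(15)` returns the four roots {13, 20, 57, 64}, one of which is the original message.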

    Novel parallel approaches to efficiently solve spatial problems on heterogeneous CPU-GPU systems

    In recent years, approaches that seek to extract valuable information from large datasets have become particularly relevant in today's society. In this category, we can highlight those problems that comprise data analysis distributed across two-dimensional scenarios, called spatial problems. These usually involve processing (i) a series of features distributed across a given plane or (ii) a matrix of values where each cell corresponds to a point on the plane. Therefore, we can see the open-ended and complex nature of spatial problems, but it also leaves room for imagination to be applied in the search for new solutions. One of the main complications we encounter when dealing with spatial problems is that they are very computationally intensive, typically taking a long time to produce the desired result. This drawback is also an opportunity to use heterogeneous systems to address spatial problems more efficiently. Heterogeneous systems give the developer greater freedom to speed up suitable algorithms by increasing the parallel programming options available, making it possible for different parts of a program to run on the dedicated hardware that suits them best. Several of the spatial problems that have not been optimised for heterogeneous systems cover very diverse areas that seem vastly different at first sight. However, they are closely related due to common data processing requirements, making them suitable for using dedicated hardware. In particular, this thesis provides new parallel approaches to tackle the following three crucial spatial problems: latent fingerprint identification, total viewshed computation, and path planning based on maximising visibility in large regions.
Latent fingerprint identification is one of the essential identification procedures in criminal investigations. Addressing this task is difficult as (i) it requires analysing large databases in a short time, and (ii) it is commonly addressed by combining different methods with complex data dependencies, making it challenging to exploit parallelism on heterogeneous CPU-GPU systems. Moreover, most efforts in this context focus on improving the accuracy of the approaches and neglect reducing the processing time—the most accurate algorithm was designed to process the fingerprints using a single thread. We developed a new methodology to address the latent fingerprint identification problem called “Asynchronous processing for Latent Fingerprint Identification” (ALFI) that speeds up processing while maintaining high accuracy. ALFI exploits all the resources of CPU-GPU systems using asynchronous processing and fine-coarse parallelism to analyse massive fingerprint databases. We assessed the performance of ALFI on Linux and Windows operating systems using the well-known NIST/FVC databases. Experimental results revealed that ALFI is on average 22x faster than the state-of-the-art identification algorithm, reaching a speed-up of 44.7x for the best-studied case. In terrain analysis, Digital Elevation Models (DEMs) are relevant datasets used as input to those algorithms that typically sweep the terrain to analyse its main topological features such as visibility, elevation, and slope. The most challenging computation related to this topic is the total viewshed problem. It involves computing the viewshed—the visible area of the terrain—for each of the points in the DEM. The algorithms intended to solve this problem require many memory accesses to 2D arrays, which, despite being regular, lead to poor data locality in memory. 
We proposed a methodology called “skewed Digital Elevation Model” (sDEM) that substantially improves the locality of memory accesses and exploits the inherent parallelism of rotational sweep-based algorithms. In particular, sDEM applies a data relocation technique before accessing the memory and computing the viewshed, thus significantly reducing the execution time. Different implementations are provided for single-core, multi-core, single-GPU, and multi-GPU platforms. We carried out two experiments to compare sDEM with (i) the most used geographic information systems (GIS) software and (ii) the state-of-the-art algorithm for solving the total viewshed problem. In the first experiment, sDEM is on average 8.8x faster than current GIS software, despite considering only a few points because of the limitations of the GIS software. In the second experiment, sDEM is 827.3x faster than the state-of-the-art algorithm in the best case. The use of Unmanned Aerial Vehicles (UAVs) with multiple onboard sensors has grown enormously in tasks involving terrain coverage, such as environmental and civil monitoring, disaster management, and forest fire fighting. Many of these tasks require a quick and early response, which makes maximising the land covered from the flight path an essential goal, especially when the area to be monitored is irregular, large, and includes many blind spots. In this regard, state-of-the-art total viewshed algorithms can help analyse large areas and find new paths providing all-round visibility. We designed a new heuristic called “Visibility-based Path Planning” (VPP) to solve the path planning problem in large areas based on a thorough visibility analysis. VPP generates flyable paths that provide high visual coverage to monitor forest regions using the onboard camera of a single UAV. For this purpose, the hidden areas of the target territory are identified and considered when generating the path.
Simulation results showed that VPP covers up to 98.7% of the Montes de Málaga Natural Park and 94.5% of the Sierra de las Nieves National Park, both located in the province of Málaga (Spain). In addition, a real flight test confirmed the high visibility achieved using VPP. Our methodology and analysis can be easily applied to enhance monitoring in other large outdoor areas.
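The per-point line-of-sight test that total viewshed algorithms repeat for every DEM cell can be sketched in one dimension. This is a simplified illustration of the sweep step, not sDEM's implementation; the skewed data layout and GPU parallelism are omitted, and the terrain values are hypothetical.

```python
def visible_along_row(heights, observer_idx, observer_h=1.0):
    """1-D line-of-sight: a cell is visible when the slope from the
    observer to it exceeds every slope seen earlier along the ray."""
    obs = heights[observer_idx] + observer_h   # observer stands above the terrain
    visible = [True] * len(heights)            # the observer's own cell is visible
    for step in (1, -1):                       # sweep right, then left
        max_slope = float("-inf")
        i = observer_idx + step
        while 0 <= i < len(heights):
            slope = (heights[i] - obs) / abs(i - observer_idx)
            visible[i] = slope > max_slope
            max_slope = max(max_slope, slope)
            i += step
    return visible

# A peak at index 2 hides the cells behind it from an observer at index 0.
print(visible_along_row([0, 0, 5, 0, 0], 0))  # [True, True, True, False, False]
```

A full viewshed repeats this test along many azimuth directions per observer, and the total viewshed repeats that for every cell, which is why memory locality dominates the running time.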
