90 research outputs found

    Measuring memetic algorithm performance on image fingerprints dataset

    Get PDF
    Personal identification has become one of the most important concerns in our society, with applications in access control, crime and forensic identification, banking, and computer systems. The fingerprint is the most widely used biometric feature because of its uniqueness, universality, and stability. Fingerprints are widely used as a security feature for forensic recognition, building access, automatic teller machine (ATM) authentication, and payment. Fingerprint recognition can be grouped into two forms: verification and identification. Verification compares fingerprint data one to one; identification matches an input fingerprint against data stored in a database. In this paper, we measure the performance of a memetic algorithm on an image fingerprints dataset. Before running the algorithm, we divide the fingerprints into four groups according to their characteristics, prepare 15 data specimens, perform four partial tests, and finally measure the total computation time.
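    As a rough illustration of how a memetic algorithm operates (a genetic algorithm whose offspring are refined by local search), the toy sketch below evolves a bit string toward a fixed target pattern. The target, fitness function, and parameters are illustrative stand-ins, not the fingerprint-matching setup used in the paper.

```python
import random

def memetic_match(target, pop_size=20, generations=30, seed=0):
    """Toy memetic algorithm: a genetic algorithm whose offspring are
    refined by a greedy bit-flip local search (the 'meme')."""
    rng = random.Random(seed)
    n = len(target)

    def fit(ind):
        # Fitness: number of positions matching the target pattern.
        return sum(a == b for a, b in zip(ind, target))

    def local_search(ind):
        # One pass of first-improvement bit flips.
        ind = ind[:]
        best = fit(ind)
        for i in range(n):
            ind[i] ^= 1
            f = fit(ind)
            if f > best:
                best = f          # keep the improving flip
            else:
                ind[i] ^= 1       # revert
        return ind

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fit, reverse=True)
        next_pop = pop[:2]                         # elitism
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)       # truncation selection
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n)] ^= 1       # point mutation
            next_pop.append(local_search(child))   # memetic refinement
        pop = next_pop
    best = max(pop, key=fit)
    return best, fit(best)

target = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # stand-in for a stored template
best, score = memetic_match(target)
print(score, "/", len(target))
```

    The local-search step is what distinguishes a memetic algorithm from a plain genetic algorithm; on real fingerprint data the fitness would be a minutiae- or image-similarity score rather than a bit match.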

    A Neural Network Approach to Identify Hyperspectral Image Content

    Get PDF
    Hyperspectral imaging is a technique that produces very high-dimensional data with hundreds of channels. Because Hyperspectral Images (HSIs) deliver complete imaging knowledge, applying a classification algorithm is an important tool for practical use. HSIs typically contain a large number of correlated and redundant features, which reduces classification accuracy; moreover, this redundancy adds computational burden without contributing any useful information. In this study, an unsupervised Band Selection Algorithm (BSA) is combined with Linear Projection (LP) based on inter-band similarity metrics. Afterwards, a Monogenetic Binary Feature (MBF) is used to perform texture analysis of the HSI, where three components represent the monogenetic signal: phase, amplitude, and orientation. In the post-processing classification stage, a feature-mapping function can provide important information that helps adapt a Kernel-based Neural Network (KNN) to optimize generalization ability. The KNN can also be extended to multiclass applications by using multiple output nodes instead of a single output node.
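    The similarity-driven, unsupervised band selection described above can be sketched minimally: greedily pick the bands least correlated with those already chosen. This is an assumed stand-in, not the paper's LP-based algorithm; the synthetic cube and the greedy criterion are invented for illustration.

```python
import numpy as np

def select_bands(cube, k=3):
    """Greedy unsupervised band selection: repeatedly pick the band
    least similar (lowest absolute correlation) to the bands chosen
    so far. Illustrative sketch only."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    corr = np.abs(np.corrcoef(X, rowvar=False))    # b x b band-similarity matrix
    selected = [int(np.argmax(X.var(axis=0)))]     # seed with the highest-variance band
    while len(selected) < k:
        # Score each band by its worst-case similarity to the selected set.
        scores = corr[:, selected].max(axis=1)
        scores[selected] = np.inf                  # never re-pick a selected band
        selected.append(int(np.argmin(scores)))
    return selected

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8, 1))
cube = np.concatenate(
    [base, base * 2 + 0.01 * rng.normal(size=(8, 8, 1)),
     rng.normal(size=(8, 8, 2))], axis=2)          # bands 0 and 1 are redundant
bands = select_bands(cube, k=3)
print(bands)                                       # the redundant band 0 is skipped
```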

    Optimized kernel minimum noise fraction transformation for hyperspectral image classification

    Get PDF
    This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method. KMNF maps the original data into a higher-dimensional feature space and provides a small number of quality features for classification and other post-processing. Noise estimation is an important component of KMNF. It is often based on the strong relationship between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits a more accurate noise estimation method to improve KMNF. We propose two new, more accurate noise estimation methods. Moreover, we also propose a framework to improve noise estimation in which both spectral and spatial de-correlation are exploited. Experimental results, obtained on a variety of hyperspectral images, indicate that the proposed OKMNF is superior to other related dimensionality reduction methods in most cases. Compared to the conventional KMNF, the proposed OKMNF achieves significant improvements in overall classification accuracy.
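    A minimal sketch of the adjacent-pixel noise estimation that (K)MNF relies on, assuming the classic shift-difference model: neighbouring pixels share the same signal, so their difference is approximately twice the noise. The synthetic cube is invented for illustration; this is not the paper's improved estimator.

```python
import numpy as np

def noise_covariance(cube):
    """Shift-difference noise estimate: subtract each pixel from its
    horizontal neighbour (cancelling shared signal) and halve the
    covariance of the differences, since var(n1 - n2) = 2 var(n)."""
    diff = cube[:, 1:, :] - cube[:, :-1, :]        # horizontal neighbour differences
    D = diff.reshape(-1, cube.shape[2]).astype(float)
    return (D.T @ D) / (2.0 * len(D))

rng = np.random.default_rng(1)
signal = np.repeat(rng.normal(size=(16, 1, 4)), 16, axis=1)  # constant along rows
noise = 0.1 * rng.normal(size=(16, 16, 4))                   # true noise variance 0.01
Sigma_n = noise_covariance(signal + noise)
print(np.round(np.diag(Sigma_n), 3))                         # diagonal close to 0.01
```

    When mixed pixels break the shared-signal assumption, the differences contain signal as well as noise, which is exactly the instability the paper's spectral and spatial de-correlation framework targets.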

    Optimal Use of Multi-spectral Satellite Data with Convolutional Neural Networks

    Full text link
    The analysis of satellite imagery will prove a crucial tool in the pursuit of sustainable development. While Convolutional Neural Networks (CNNs) have made large gains in natural image analysis, their application to multi-spectral satellite images (wherein input images have a large number of channels) remains relatively unexplored. In this paper, we compare different methods of leveraging multi-band information with CNNs, demonstrating the performance of all compared methods on the task of semantic segmentation of agricultural vegetation (vineyards). We show that the standard industry practice of using bands selected by a domain expert leads to significantly worse test accuracy than the other methods compared. Specifically, we compare: using bands specified by an expert; using all available bands; learning attention maps over the input bands; and leveraging Bayesian optimisation to dictate band choice. We show that simply using all available band information already increases test-time performance, and show that Bayesian optimisation, first applied to band selection in this work, can be used to further boost accuracy. Comment: AI for Social Good workshop - Harvard CRC
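    To make the band-choice-by-optimisation idea concrete, the sketch below scores random band subsets with a simple nearest-centroid proxy and keeps the best. Random search here is a cheap stand-in for the Bayesian optimisation loop in the paper, and the data, objective, and parameters are all illustrative assumptions.

```python
import numpy as np

def band_subset_score(X, y, bands):
    """Proxy objective: nearest-centroid training accuracy using only
    the chosen bands."""
    Xs = X[:, bands]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

rng = np.random.default_rng(0)
n, b = 200, 8
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, b))
X[:, 2] += 3 * y                  # only bands 2 and 5 carry class signal
X[:, 5] -= 3 * y

# Random search over band subsets -- a cheap stand-in for the Bayesian
# optimisation loop used in the paper to dictate band choice.
best_bands, best_score = None, -1.0
for _ in range(100):
    k = int(rng.integers(1, b + 1))
    cand = sorted(rng.choice(b, size=k, replace=False).tolist())
    s = band_subset_score(X, y, cand)
    if s > best_score:
        best_bands, best_score = cand, s
print(best_bands, round(best_score, 2))
```

    Bayesian optimisation replaces the blind sampling with a surrogate model that proposes the next subset to evaluate, which matters when each evaluation is a full CNN training run.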

    Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics

    Get PDF
    In [1], we explored the theoretical aspects of feature selection and evolutionary algorithms. In this chapter, we focus on optimization algorithms for enhancing the data analytic process; that is, we explore applications of nature-inspired algorithms in data science. Feature selection optimization is a hybrid approach that leverages feature selection techniques and evolutionary algorithms to optimize the selected features. Prior works solve this problem iteratively to converge to an optimal feature subset. Feature selection optimization is a domain-independent approach. Data scientists mainly attempt to find advanced ways to analyze data with high computational efficiency and low time complexity, leading to efficient data analytics. As the volume of data generated, measured, and sensed from various sources increases, the work of analyzing, manipulating, and visualizing it grows exponentially. For large-scale datasets, the curse of dimensionality (CoD) is one of the NP-hard problems in data science. Hence, several efforts have focused on leveraging evolutionary algorithms (EAs) to address the complex issues in large-scale data analytics problems. Dimension reduction, together with EAs, lends itself to solving CoD and other complex problems efficiently in terms of time complexity. In this chapter, we first provide a brief overview of previous studies that solve CoD using feature extraction optimization. We then discuss practical examples of research studies that successfully tackle application domains such as image processing, sentiment analysis, network traffic/anomaly analysis, credit score analysis, and other benchmark functions/datasets.
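    One concrete face of the curse of dimensionality mentioned above is distance concentration: as dimension grows, the gap between the nearest and farthest neighbour shrinks relative to the distances themselves, which undermines distance-based analytics. A small numerical demonstration on synthetic uniform data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
contrasts = []
for d in (2, 20, 200, 2000):
    X = rng.uniform(size=(500, d))                 # 500 random points in [0, 1]^d
    dists = np.linalg.norm(X - X[0], axis=1)[1:]   # distances from the first point
    contrast = (dists.max() - dists.min()) / dists.min()
    contrasts.append(contrast)
    print(f"d={d:5d}  relative contrast={contrast:.3f}")
```

    The relative contrast collapses as d grows, which is why dimension reduction (by feature selection or extraction) is a prerequisite for efficient large-scale analytics.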

    A Review of Principal Component Analysis Algorithm for Dimensionality Reduction

    Get PDF
    Big databases are increasingly widespread and therefore hard to understand. In exploratory biomedical science, big data in health research is especially exciting because data-driven analyses can move faster than hypothesis-based research. Principal Component Analysis (PCA) is a method for reducing the dimensionality of such datasets, improving interpretability without losing much information. It achieves this by creating new covariates that are uncorrelated with each other. Finding those new variables, the principal components, reduces to an eigenvalue/eigenvector problem. PCA can be described as an adaptive data analysis technique because its variants have been developed to suit different data types and structures. This review starts by introducing the basic ideas of PCA, describes some concepts related to PCA, discusses what it can do, and reviews fifteen PCA articles published in the last three years.
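    A minimal sketch of the eigenvalue/eigenvector route to the principal components described above: centre the data, eigendecompose the covariance matrix, and project onto the top-k eigenvectors. The synthetic data are illustrative.

```python
import numpy as np

def pca(X, k):
    """Minimal PCA via eigendecomposition of the sample covariance.
    Returns the projected scores and all eigenvalues (descending)."""
    Xc = X - X.mean(axis=0)                    # centre each variable
    cov = Xc.T @ Xc / (len(X) - 1)             # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # sort descending by variance
    components = eigvecs[:, order[:k]]         # top-k principal directions
    return Xc @ components, eigvals[order]

rng = np.random.default_rng(0)
# 3-D data that is essentially 2-D: the third direction is almost pure noise.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 3)) \
    + 0.01 * rng.normal(size=(100, 3))
scores, eigvals = pca(X, k=2)
explained = eigvals[:2].sum() / eigvals.sum()
print(scores.shape, round(explained, 4))       # two components explain nearly all variance
```

    The eigenvalues measure the variance captured by each component, which is how the "without losing much information" claim is quantified in practice.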

    Detecting Clouds in Multispectral Satellite Images Using Quantum-Kernel Support Vector Machines

    Get PDF
    Support vector machines (SVMs) are well-established classifiers effectively deployed in an array of classification tasks. In this work, we consider extending classical SVMs with quantum kernels and applying them to satellite data analysis. The design and implementation of SVMs with quantum kernels (hybrid SVMs) are presented. Here, the pixels are mapped to the Hilbert space using a family of parameterized quantum feature maps (related to quantum kernels). The parameters are optimized to maximize the kernel target alignment. The quantum kernels have been selected such that they enable analysis of numerous relevant properties while remaining simulable on classical computers for a real-life large-scale dataset. Specifically, we approach the problem of cloud detection in multispectral satellite imagery, which is one of the pivotal steps in both on-the-ground and on-board satellite image analysis processing chains. The experiments performed on the benchmark Landsat-8 multispectral dataset revealed that the simulated hybrid SVM classifies satellite images with accuracy comparable to the classical SVM with the RBF kernel for large datasets. Interestingly, for large datasets, high accuracy was also observed for simple quantum kernels lacking quantum entanglement. Comment: 12 pages, 10 figures
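    The kernel target alignment that the quantum feature-map parameters are optimised to maximise has a simple closed form, A(K, y) = <K, yy^T>_F / (||K||_F ||yy^T||_F). A small sketch with two hand-built, illustrative kernels:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Kernel-target alignment between a kernel matrix K and labels
    y in {-1, +1}: the normalised Frobenius inner product of K with
    the ideal label kernel yy^T. Ranges up to 1 for a perfect match."""
    Y = np.outer(y, y)                                   # ideal label kernel
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)   # kernel perfectly aligned with labels
K_flat = np.ones((4, 4))                 # kernel blind to the labels
print(kernel_target_alignment(K_ideal, y))   # 1.0
print(kernel_target_alignment(K_flat, y))    # 0.0
```

    In the hybrid SVM, K is produced by the parameterized quantum feature map, and the map's parameters are tuned to push this alignment toward 1 before training the classical SVM on the resulting kernel.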