53 research outputs found

    Sample supervised search centric approaches in geographic object-based image analysis

    Sample supervised search-centric image segmentation denotes a general method in which quality segments are generated from a provided selection of reference segments. The main purpose of such a method is to correctly segment a multitude of identical elements in an image based on these reference segments. An efficient search algorithm traverses the parameter space of a given segmentation algorithm, while a supervised quality measure guides the search for the best segmentation results, or rather the best-performing parameter set. This method, pursued academically in remote sensing and elsewhere, shows promise in assisting the generation of earth observation information products. It may find applications specifically within user-driven geographic object-based image analysis approaches, mainly with respect to very high resolution optical data. Rapid mapping activities as well as general land-cover mapping or targeted element identification may benefit from such a method. In this work it is suggested that sample supervised search-centric geographic segment generation forms the basis of a set of methods, or rather a methodological avenue. The original formulation of the method, although promising, is limited in the quality of the segments it can produce: it remains bound by the inherent capability of the given segmentation algorithm. From an optimisation viewpoint, various structures may be encoded to form the fitness or search landscape traversed by a given search algorithm, and these structures may interact with the given segmentation algorithm. Various method variants considering expanded fitness landscapes are possible, with additional processes, or constituents, such as data mapping, classification and post-segmentation heuristics embedded into the method. Three distinct and novel method variants are proposed and evaluated based on this concept of expanded fitness landscapes.
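    The core loop of the method, a segmentation algorithm whose parameter space is traversed under a supervised quality measure, can be sketched in a few lines. The threshold "segmenter", the 1-D image, and the Jaccard quality measure below are illustrative stand-ins, not the actual algorithms evaluated in the work:

```python
def jaccard(a, b):
    """Supervised quality measure: overlap between candidate and
    reference segment, expressed as pixel-index sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def segment(image, threshold):
    """Toy one-parameter segmentation: pixels above the threshold
    form the segment (stand-in for a real segmentation algorithm)."""
    return [i for i, v in enumerate(image) if v > threshold]

def sample_supervised_search(image, reference, candidates):
    """Traverse the parameter space, guided by agreement with the
    user-supplied reference segment (the 'sample')."""
    best_t, best_q = None, -1.0
    for t in candidates:
        q = jaccard(segment(image, t), reference)
        if q > best_q:
            best_t, best_q = t, q
    return best_t, best_q

image = [0.1, 0.2, 0.9, 0.8, 0.7, 0.3, 0.1]
reference = [2, 3, 4]  # pixel indices of the reference segment
best_t, best_q = sample_supervised_search(image, reference,
                                          [0.15, 0.35, 0.55, 0.75])
```

    In practice the candidate set would be traversed by a proper search algorithm (e.g. an evolutionary one) over a multi-dimensional parameter space, and the quality measure would compare full segment geometries rather than pixel sets.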

    An improved algorithm for identifying shallow and deep-seated landslides in dense tropical forest from airborne laser scanning data

    © 2018. Landslides are natural disasters that cause environmental and infrastructure damage worldwide. They are difficult to recognize, particularly in the densely vegetated regions of tropical forests. Consequently, an accurate inventory map is required to analyse landslide susceptibility, hazard, and risk. Several studies have attempted to differentiate between types of landslide (i.e. shallow and deep-seated); however, none of them utilized feature selection techniques. Thus, in this study, three feature selection techniques were used: correlation-based feature selection (CFS), random forest (RF), and ant colony optimization (ACO). A fuzzy-based segmentation parameter optimizer (FbSP optimizer) was used to optimize the segmentation parameters, and a random forest classifier was used to evaluate the performance of each feature selection algorithm. The overall accuracies of the RF classifier revealed that the CFS algorithm ranked highest in differentiating landslide types. Moreover, the transferability results showed that the method is straightforward, accurate, and highly suitable for differentiating between shallow and deep-seated landslides. In summary, the study recommends the outlined approaches as a significant improvement for distinguishing between shallow and deep-seated landslides in tropical areas such as Malaysia.
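    As a rough illustration of the correlation-driven selection step, the stdlib-only sketch below ranks feature columns by their absolute Pearson correlation with the class label. Real CFS scores feature *subsets* and also penalises inter-feature correlation, which is omitted here; the data are invented for the example:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def cfs_rank(features, labels):
    """Rank feature columns by |correlation with the class label|,
    a simplified stand-in for merit-based CFS."""
    scores = [(abs(pearson(col, labels)), i) for i, col in enumerate(features)]
    return [i for _, i in sorted(scores, reverse=True)]

# toy data: feature 0 tracks the label, feature 1 is near-constant noise
labels   = [0, 0, 1, 1, 0, 1]
features = [[0.1, 0.0, 0.9, 1.0, 0.2, 0.8],   # informative
            [0.5, 0.4, 0.5, 0.6, 0.5, 0.4]]   # uninformative
ranking = cfs_rank(features, labels)
```

    The top-ranked indices would then be fed to the downstream classifier (RF in the paper) for evaluation.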

    An improved data classification framework based on fractional particle swarm optimization

    Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique in which particles move collectively over iterations to search for optimal solutions. However, conventional PSO is prone to poor convergence and even stagnation in complex, high-dimensional search problems with multiple local optima. Therefore, this research proposes an improved Mutually-Optimized Fractional PSO (MOFPSO) algorithm based on fractional derivatives and small step lengths, which promotes convergence to global optima by maintaining a fine balance between exploration and exploitation. The proposed algorithm is tested and verified on ten benchmark functions against six established algorithms in terms of Mean of Error and Standard Deviation. MOFPSO demonstrated the lowest Mean of Error values on all benchmark functions across all 30 runs (Ackley = 0.2, Rosenbrock = 0.2, Bohachevsky = 9.36E-06, Easom = -0.95, Griewank = 0.01, Rastrigin = 2.5E-03, Schaffer = 1.31E-06, Schwefel 1.2 = 3.2E-05, Sphere = 8.36E-03, Step = 0). Furthermore, MOFPSO is hybridized with Back-Propagation (BP), Elman Recurrent Neural Network (ERNN) and Levenberg-Marquardt (LM) Artificial Neural Networks (ANNs) to form an enhanced data classification framework. This framework is evaluated for classification accuracy, computational time and Mean Squared Error on five benchmark datasets against seven existing techniques. The simulation results show that the proposed MOFPSO-ERNN classification algorithm achieves good classification accuracy (Breast Cancer = 99.01%, EEG = 99.99%, PIMA Indian Diabetes = 99.37%, Iris = 99.6%, Thyroid = 99.88%) compared to existing hybrid classification techniques. Hence, the proposed technique can be employed to improve overall classification accuracy and reduce computational time in data classification applications.
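    A minimal sketch of the fractional idea, assuming a short Grünwald-Letnikov-style memory of past velocities in place of the usual inertia weight (the actual MOFPSO update rule and its mutual-optimization mechanism are not reproduced here; all coefficients below are illustrative):

```python
import random

def sphere(x):
    """Benchmark function: global optimum f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def fractional_pso(f, dim=2, n=20, iters=200, alpha=0.6, seed=1):
    """PSO whose inertia term is replaced by a fractional-order memory
    of the last three velocities (first Grunwald-Letnikov terms)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    hist = [[[0.0] * dim for _ in range(3)] for _ in range(n)]  # newest first
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    # first Grunwald-Letnikov coefficients of fractional order alpha
    c = [alpha, 0.5 * alpha * (1 - alpha),
         alpha * (1 - alpha) * (2 - alpha) / 6]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                frac = sum(ck * h[d] for ck, h in zip(c, hist[i]))
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (frac
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            hist[i] = [vel[i][:]] + hist[i][:2]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

_, v_init = fractional_pso(sphere, iters=0)    # best of the initial swarm
best, val = fractional_pso(sphere, iters=200)  # after the search
```

    Because the same seed is used, comparing `val` with `v_init` shows the improvement the search contributes beyond the initial random swarm.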

    Efficient thermal face recognition method using optimized curvelet features for biometric authentication

    Biometric technology is becoming increasingly prevalent in several vital applications that substitute traditional password and token authentication mechanisms. Recognition accuracy and computational cost are two important aspects to be considered while designing biometric authentication systems. Thermal imaging has been proven to capture a unique thermal signature per person and has therefore been used in thermal face recognition. However, the literature has not thoroughly analysed the impact of feature selection on the accuracy and computational cost of face recognition, an important aspect for resource-limited applications such as IoT devices, nor thoroughly evaluated the performance metrics of the proposed methods, which are needed for the optimal configuration of biometric authentication systems. This paper proposes a thermal face-based biometric authentication system comprising five phases: a) capturing the user's face with a thermal camera; b) segmenting the face region and excluding the background with an optimized superpixel-based segmentation technique to extract the region of interest (ROI); c) feature extraction using the wavelet and curvelet transforms; d) feature selection employing the bio-inspired optimization algorithms grey wolf optimizer (GWO), particle swarm optimization (PSO) and genetic algorithm (GA); and e) classification (user identification) using the random forest (RF), k-nearest neighbour (KNN), and naive Bayes (NB) classifiers. On the public Terravic Facial IR dataset, the proposed system was evaluated using accuracy, precision, recall, F-measure, and receiver operating characteristic (ROC) area. The results showed that curvelet features selected by the GWO and classified with random forest can authenticate users from thermal images with performance up to 99.5%, exceeding the wavelet-feature results by 10% while using 5% fewer features. In addition, statistical analysis showed the significance of the proposed model. Compared to related works, the system proves to be a better thermal face authentication model with a minimal feature set, making it computationally friendly.
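    The feature-selection phase (d) is a wrapper search over binary feature masks, typically scored by a weighted sum of classification error and the fraction of features retained. The sketch below uses a greedy bit-flip search and a hypothetical error model as stand-ins for the GWO/PSO/GA wrappers and the RF/KNN/NB classifiers of the paper:

```python
def wrapper_fitness(mask, error_fn, w=0.99):
    """Common wrapper objective: trade classification error against the
    fraction of features kept (lower is better)."""
    n_sel = sum(mask)
    if n_sel == 0:
        return float('inf')
    return w * error_fn(mask) + (1 - w) * n_sel / len(mask)

def greedy_select(n_features, error_fn):
    """Greedy bit-flip search over feature masks: a simple stand-in
    for the metaheuristic wrappers (GWO, PSO, GA)."""
    mask = [1] * n_features
    best = wrapper_fitness(mask, error_fn)
    improved = True
    while improved:
        improved = False
        for j in range(n_features):
            trial = mask[:]
            trial[j] ^= 1  # flip one feature in or out
            f = wrapper_fitness(trial, error_fn)
            if f < best:
                mask, best = trial, f
                improved = True
    return mask, best

# hypothetical error model: only features 0 and 2 reduce the error
def toy_error(mask):
    return 0.5 - 0.2 * mask[0] - 0.25 * mask[2]

mask, fit = greedy_select(4, toy_error)
```

    In a real system `error_fn` would train and cross-validate the classifier on the masked feature columns, which is why wrapper selection dominates the computational cost.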

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: On the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems. On the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. 
    These were tested over complex benchmark problems and showed promising results, thus opening the debate for future applications. Moreover, the applications to Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in solving problems in those subjects. A milestone in this area is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending the concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. The research conducted here therefore contributes an important piece to these developments. The applications presented in this work are intended to serve as technological tools that can be used in the development of new devices.

    On the use of Artificial Neural Networks in Topology Optimisation

    The question of how methods from the field of artificial intelligence can help improve conventional frameworks for topology optimisation has received increasing attention over the last few years. Motivated by the capabilities of neural networks in image analysis, different model variations aimed at obtaining iteration-free topology optimisation have been proposed with varying success. Other works, focused on speed-up by replacing expensive optimisers and state solvers or by reducing the design space, have been attempted but have not yet received the same attention. The portfolio of articles presenting different applications has as such become extensive, but few real breakthroughs have yet been celebrated. An overall trend in the literature is a strong faith in the "magic" of artificial intelligence, and thus misunderstandings about the capabilities of such methods. The aim of this article is therefore to present a critical review of the current state of research in this field. To this end, an overview of the different model applications is presented, and efforts are made to identify reasons for the overall lack of convincing success. A thorough analysis identifies and differentiates between problematic and promising aspects of existing models. The resulting findings are used to detail recommendations believed to encourage avenues of potential scientific progress for further research within the field. Comment: 36 pages, 7 figures (13 figures counting sub-figures), accepted for publication in Structural and Multidisciplinary Optimization.

    Meta Heuristics based Machine Learning and Neural Mass Modelling Allied to Brain Machine Interface

    New understanding of brain function and the increasing availability of low-cost, non-invasive electroencephalogram (EEG) recording devices have made the brain-computer interface (BCI) an alternative option for augmenting human capabilities, providing a new non-muscular channel for sending commands that can activate electronic or mechanical devices through the modulation of thoughts. In this project, the emphasis is on developing such a BCI using fuzzy rule-based systems (FRBSs), metaheuristics and neural mass models (NMMs). In particular, the BCI system is treated as an integrated problem consisting of mathematical modelling, machine learning and classification. Four main steps are involved in designing a BCI system: 1) data acquisition, 2) feature extraction, 3) classification and 4) transferring the classification outcome into control commands for extended peripheral capability. The focus here is placed on the first three steps. This research project aims to investigate and develop a novel BCI framework encompassing classification based on machine learning, optimisation and neural mass modelling. The primary aim is to bridge the gap between these three areas in a bid to design a more reliable and accurate communication path between the brain and the external world. To achieve this goal, the following objectives have been investigated: 1) steady-state visual evoked potential (SSVEP) EEG data are collected from human subjects and pre-processed; 2) a feature extraction procedure is implemented to detect and quantify the characteristics of brain activity that indicate the intention of the subject; 3) a classification mechanism, the Immune-Inspired Multi-Objective Fuzzy Modelling Classification algorithm (IMOFM-C), is adapted as a binary classification approach for EEG data, and the DDAG-Distance aggregation approach is proposed to aggregate the outcomes of IMOFM-C-based binary classifiers for multi-class classification; 4) building on IMOFM-C, a preference-based ensemble classification framework known as IMOFM-CP is proposed to enhance the convergence performance and diversity of each individual component classifier, leading to improved overall classification accuracy on multi-class EEG data; and 5) finally, a robust parameterisation approach, combining a single-objective GA and a clustering algorithm with a set of newly devised objective and penalty functions, is proposed to obtain robust sets of synaptic connectivity parameters for a thalamic neural mass model. The parametrisation approach aims to cope with the nonlinearity normally involved in describing the multifarious features of brain signals.
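    For the SSVEP steps (1) and (2) above, the characteristic feature is the power of the EEG at each stimulus frequency, and picking the strongest response already yields a rudimentary classifier. The toy sketch below runs on a synthetic one-channel signal; it illustrates the general SSVEP principle only, not the project's IMOFM-C pipeline:

```python
import math

def band_power(signal, fs, freq):
    """Power of the DFT component at `freq` Hz: a simple SSVEP feature."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    return (re * re + im * im) / n

def classify_ssvep(signal, fs, stimulus_freqs):
    """Pick the stimulus frequency with the strongest response."""
    return max(stimulus_freqs, key=lambda f: band_power(signal, fs, f))

fs = 256
t = [i / fs for i in range(fs)]  # 1 second of synthetic "EEG"
signal = [math.sin(2 * math.pi * 12 * x)          # 12 Hz SSVEP response
          + 0.3 * math.sin(2 * math.pi * 7 * x)   # background activity
          for x in t]
detected = classify_ssvep(signal, fs, [8, 10, 12, 15])
```

    Real pipelines add pre-processing (filtering, artifact removal), use harmonics of each stimulus frequency, and hand the resulting features to a trained classifier rather than a simple argmax.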