
    A Review of the Family of Artificial Fish Swarm Algorithms: Recent Advances and Applications

    The Artificial Fish Swarm Algorithm (AFSA) is inspired by the ecological behaviors of fish schooling in nature, viz., the preying, swarming, following, and random behaviors. Owing to a number of salient properties, including flexibility, fast convergence, and insensitivity to initial parameter settings, the family of AFSA has emerged as an effective Swarm Intelligence (SI) methodology that has been widely applied to solve real-world optimization problems. Since its introduction in 2002, many improved and hybrid AFSA models have been developed to tackle continuous, binary, and combinatorial optimization problems. This paper presents a concise review of the family of AFSA, encompassing the original AFSA and its improvements, continuous, binary, discrete, and hybrid models, as well as the associated applications. A comprehensive survey of the AFSA from its introduction to 2012 can be found in [1]. As such, we focus on a total of 123 articles published in high-quality journals since 2013. We also discuss possible AFSA enhancements and highlight future research directions for the family of AFSA-based models. (37 pages, 3 figures)
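
    For orientation, here is a minimal Python sketch of the four canonical AFSA behaviors (preying, swarming, following, and random) on a toy objective. The parameter names (visual, step, try_number, and the crowding factor delta) follow common usage in the AFSA literature; the specific greedy details are illustrative and do not reproduce any single model surveyed in the paper.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sphere function."""
    return np.sum(x ** 2)

def afsa(f, dim=2, n_fish=20, visual=1.0, step=0.5, try_number=5,
         delta=0.618, iters=100, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_fish, dim))   # the school of fish
    best = min(X, key=f).copy()

    def move_towards(xi, target):
        d = target - xi
        return np.clip(xi + step * rng.random() * d / (np.linalg.norm(d) + 1e-12),
                       lo, hi)

    for _ in range(iters):
        for i in range(n_fish):
            xi = X[i]
            # neighbors within the visual range (includes the fish itself)
            nbrs = X[np.linalg.norm(X - xi, axis=1) < visual]
            moved = False
            if len(nbrs) > 1:
                center = nbrs.mean(axis=0)
                # swarming: move to the neighborhood center if better and uncrowded
                if f(center) < f(xi) and len(nbrs) / n_fish < delta:
                    X[i], moved = move_towards(xi, center), True
                else:
                    # following: chase the best neighbor if better and uncrowded
                    xbest = min(nbrs, key=f)
                    if f(xbest) < f(xi) and len(nbrs) / n_fish < delta:
                        X[i], moved = move_towards(xi, xbest), True
            if not moved:
                # preying: try a few random points inside the visual range
                for _ in range(try_number):
                    xj = np.clip(xi + visual * rng.uniform(-1, 1, dim), lo, hi)
                    if f(xj) < f(xi):
                        X[i], moved = move_towards(xi, xj), True
                        break
            if not moved:
                # random behavior: wander when preying fails
                X[i] = np.clip(xi + step * rng.uniform(-1, 1, dim), lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

best, val = afsa(sphere)
print(best, val)
```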

    Modified and Ensemble Intelligent Water Drop Algorithms and Their Applications

    1.1 Introduction Optimization is a process concerned with finding the best solution to a given problem from among the possible solutions within an affordable time and cost (Weise et al., 2009). The first step in the optimization process is formulating the optimization problem through an objective function and a set of constraints that define the problem search space (i.e., the region of feasible solutions). Every alternative (i.e., solution) is represented by a set of decision variables. Each decision variable has a domain, which is a representation of the set of all possible values that the decision variable can take. The second step in optimization is utilizing an optimization method (i.e., a search method) to find the best candidate solutions. A candidate solution has a configuration of decision variables that satisfies the set of problem constraints and that maximizes or minimizes the objective function (Boussaid et al., 2013). The method converges to an optimal solution (i.e., a local or global optimum) by reaching the optimal values of the decision variables. Figure 1.1 depicts a 3D fitness landscape of an optimization problem. It shows the concept of local and global optima, where a local optimal solution is not necessarily the same as the global one (Weise et al., 2009). Optimization can be applied to many real-world problems in various domains. As an example, mathematicians apply optimization methods to identify the best outcome pertaining to some mathematical functions within a range of variables (Vesterstrom and Thomsen, 2004). In the presence of conflicting criteria, engineers use optimization methods to…
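
    As a concrete illustration of the two steps described above, the following Python sketch formulates a toy constrained problem (decision variables with domains, a constraint set, an objective function) and searches it with plain random sampling, the simplest possible search method. The specific problem is a made-up example, not one from the thesis.

```python
import random

DOMAIN = (0.0, 10.0)          # domain of each decision variable

def feasible(x1, x2):
    """Constraint set: candidates outside it are rejected."""
    return x1 + x2 <= 12 and x1 - x2 >= -4

def objective(x1, x2):
    """Objective function to minimize; smaller is better."""
    return (x1 - 3) ** 2 + (x2 - 5) ** 2

# Step two: a search method; here, naive random search over the domain.
best, best_f = None, float("inf")
for _ in range(10_000):
    cand = tuple(random.uniform(*DOMAIN) for _ in range(2))
    if feasible(*cand) and objective(*cand) < best_f:
        best, best_f = cand, objective(*cand)
print(best, best_f)   # converges near the feasible optimum (3, 5)
```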

    Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics

    In [1], we explored the theoretical aspects of feature selection and evolutionary algorithms. In this chapter, we focus on optimization algorithms for enhancing the data analytic process, i.e., we explore applications of nature-inspired algorithms in data science. Feature selection optimization is a hybrid approach that leverages feature selection techniques and evolutionary algorithms to optimize the selected features; prior works solve this problem iteratively to converge to an optimal feature subset. Feature selection optimization is a domain-independent approach. Data scientists mainly attempt to find advanced ways to analyze data with high computational efficiency and low time complexity, leading to efficient data analytics. As the volume of generated/measured/sensed data from various sources increases, the analysis, manipulation, and illustration of data grow exponentially. Owing to such large-scale data sets, the curse of dimensionality (CoD) is one of the NP-hard problems in data science. Hence, several efforts have focused on leveraging evolutionary algorithms (EAs) to address the complex issues in large-scale data analytics problems. Dimension reduction, together with EAs, lends itself to solving the CoD and tackling complex problems efficiently in terms of time complexity. In this chapter, we first provide a brief overview of previous studies that focused on solving the CoD using a feature extraction optimization process. We then discuss practical examples of research studies that have successfully tackled application domains such as image processing, sentiment analysis, network traffic/anomaly analysis, credit score analysis, and other benchmark functions/data sets.
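
    A minimal sketch of the wrapper-style loop described here, assuming a toy fitness in place of a real classifier: a genetic algorithm evolves bit-masks over features. In practice the score would be cross-validated model accuracy on the selected columns; the synthetic scoring function and the set of "informative" features below are assumptions for the sake of a runnable example.

```python
import random

N_FEATURES = 20
INFORMATIVE = set(range(5))          # assumption: features 0-4 carry signal

def score(mask):
    """Toy fitness: reward informative features, penalize subset size."""
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)    # higher is better

def evolve(pop_size=30, gens=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_FEATURES)    # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation keeps the search exploring
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=score)

best = evolve()
print("selected features:", [i for i, bit in enumerate(best) if bit])
```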

    Ant Colony Optimization Based Subset Feature Selection in Speech Processing: Constructing Graphs with Degree Sequences

    Feature selection, the process of selecting the most discriminating feature subset, is an essential practice in speech processing that significantly affects the performance of classification. However, the volume of features present in speech processing makes feature selection perplexing. Moreover, finding the optimal feature subset is an NP-hard problem (the search space contains 2^n subsets). Thus, a good search strategy is required to avoid evaluating a large number of combinations among the whole set of feature subsets. As a result, in recent years, many heuristic-based search algorithms have been developed to address this NP-hard problem. One of the several metaheuristic approaches applied in many application domains to solve the feature selection problem is the family of Ant Colony Optimization (ACO) based algorithms. ACO-based algorithms are nature-inspired, modeled on the foraging behavior of real ants. The success of an ACO-based feature selection algorithm depends on the choice of the construction graph with respect to runtime behavior. While most ACO-based feature selection algorithms use fully connected graphs, this paper proposes an ACO-based algorithm that uses graphs with prescribed degree sequences. In this method, the degree of the graph representing the search space is predicted, and a construction graph that satisfies the predicted degree sequence is generated. This research direction on graph representation for ACO algorithms may offer possibilities to reduce the computational complexity from O(n^2) to O(nm), where m is the number of edges. This paper outlines some popular optimization-based feature selection algorithms in the field of speech processing applications and overviews the ACO algorithm and its main variants. In addition, ACO-based feature selection is explained and its application in various speech processing tasks is reviewed. Finally, a degree-based graph construction for ACO algorithms is proposed.
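
    The following sketch shows one standard way to realize a graph with a prescribed degree sequence, the classical Havel-Hakimi procedure. Whether the paper uses this exact construction is an assumption, but it illustrates the mechanics: a sparse construction graph with m edges replaces the fully connected graph, which is what drives the O(n^2) to O(nm) reduction mentioned above.

```python
def havel_hakimi(degrees):
    """Return an edge list realizing `degrees`, or None if non-graphical."""
    nodes = list(enumerate(degrees))       # (node id, remaining degree)
    edges = []
    while nodes:
        nodes.sort(key=lambda p: -p[1])    # largest remaining degree first
        v, d = nodes.pop(0)
        if d == 0:
            continue                       # all demands satisfied
        if d > len(nodes):
            return None                    # sequence is not graphical
        for i in range(d):                 # connect v to the d largest peers
            u, du = nodes[i]
            if du == 0:
                return None
            edges.append((v, u))
            nodes[i] = (u, du - 1)
    return edges

# e.g. every feature node participates in exactly 3 candidate transitions
edges = havel_hakimi([3] * 8)
print(len(edges), edges)   # 12 edges instead of the 28 of a complete graph
```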

    A Survey on Soft Subspace Clustering

    Subspace clustering (SC) is a promising clustering technology that identifies clusters based on their associations with subspaces in high-dimensional spaces. SC can be classified into hard subspace clustering (HSC) and soft subspace clustering (SSC). While HSC algorithms have been extensively studied and are well accepted by the scientific community, SSC algorithms are relatively new but have been gaining more attention in recent years due to their better adaptability. In this paper, a comprehensive survey of existing SSC algorithms and recent developments is presented. The SSC algorithms are classified systematically into three main categories, namely conventional SSC (CSSC), independent SSC (ISSC), and extended SSC (XSSC). The characteristics of these algorithms are highlighted, and the potential future development of SSC is also discussed. (Published in the Information Sciences journal.)
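
    As a concrete instance of the soft-subspace idea, here is a compact, entropy-weighted k-means sketch in which each cluster learns its own feature weights (a softmax of the negative per-dimension dispersion). This EWKM-style update is one representative SSC formulation chosen for illustration, not the single method of the survey.

```python
import numpy as np

def ewkm(X, k=2, gamma=1.0, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)].astype(float)
    W = np.full((k, d), 1.0 / d)                 # per-cluster feature weights
    for _ in range(iters):
        # assignment step: weighted squared distance to each center
        dist = np.array([(X - centers[l]) ** 2 @ W[l] for l in range(k)])
        labels = dist.argmin(axis=0)
        for l in range(k):
            pts = X[labels == l]
            if len(pts) == 0:
                continue                         # keep an empty cluster's center
            centers[l] = pts.mean(axis=0)
            disp = ((pts - centers[l]) ** 2).sum(axis=0)  # per-dim dispersion
            e = np.exp(-disp / gamma)
            W[l] = e / e.sum()                   # entropy-regularized soft weights
    return labels, centers, W

# two clusters, each compact in a different 2-D subspace of a 4-D space
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, [0.2, 0.2, 2.0, 2.0], (50, 4)),
               rng.normal(4, [2.0, 2.0, 0.2, 0.2], (50, 4))])
labels, centers, W = ewkm(X, k=2, gamma=2.0)
print(W.round(2))   # each row concentrates weight on that cluster's tight dims
```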

    Evolutionary Computation, Optimization and Learning Algorithms for Data Science

    A large number of engineering, science, and computational problems have yet to be solved in a computationally efficient way. One of the emerging challenges is how evolving technologies grow towards autonomy and intelligent decision making. This leads to the collection of large amounts of data from various sensing and measurement technologies, e.g., cameras, smart phones, health sensors, smart electricity meters, and environment sensors. Hence, it is imperative to develop efficient algorithms for the generation, analysis, classification, and illustration of data. Meanwhile, data is structured purposefully through different representations, such as large-scale networks and graphs. We focus on data science as a crucial area, specifically on the curse of dimensionality (CoD), which arises from the large amount of generated/sensed/collected data. This motivates researchers to think about optimization and to apply nature-inspired algorithms, such as evolutionary algorithms (EAs), to solve optimization problems. Although these algorithms look non-deterministic, they are robust enough to reach an optimal solution. Researchers typically adopt evolutionary algorithms when they face a problem in which a search tends to become trapped in a local optimum rather than reaching the global optimum. In this chapter, we first develop a clear and formal definition of the CoD problem, next we focus on feature extraction techniques and categories, and then we provide a general overview of meta-heuristic algorithms, their terminology, and the desirable properties of evolutionary algorithms.

    Enhancing Feature Selection Accuracy using Butterfly and Lion Optimization Algorithm with Specific Reference to Psychiatric Disorder Detection & Diagnosis

    As the complexity of medical computing increases, the use of intelligent methods based on soft computing also increases. During the current decade, such intelligent computing has involved various meta-heuristic algorithms for optimization, and many new meta-heuristic algorithms have been proposed in the last few years. The dimensionality of the associated data has also grown wide, and feature selection plays an important role for such wide data. In intelligent computation, feature selection is an important phase after the pre-processing phase. The success of any model depends on how well optimization algorithms are used, and sometimes a single optimization algorithm is not enough to produce a good result. In this paper, meta-heuristic algorithms, namely the Butterfly Optimization Algorithm and an enhanced Lion Optimization Algorithm, are combined to achieve better accuracy in feature selection; the study thus focuses on an integrated, nature-based meta-heuristic approach. Various other optimization algorithms are also analyzed. The study shows how integrated methods are useful for enhancing the accuracy of a computing model in solving complex problems. Experimental results are reported for a proposed hybrid model applied to two major psychiatric disorders: autism spectrum disorder and Parkinson's disease.
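
    For reference, a hedged sketch of the base Butterfly Optimization Algorithm update on a toy objective: each butterfly emits a fragrance f = c * I^a from its stimulus intensity I, then takes either a global move toward the best butterfly or a local move between random peers. This reflects the commonly cited BOA formulation; the hybridization with Lion Optimization and the feature-selection wrapping described in the abstract are not reproduced here, and the greedy replacement is an assumption.

```python
import numpy as np

def boa(f, dim=2, n=30, iters=200, c=0.01, a=0.1, p=0.8,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))            # butterfly positions
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        g = X[fit.argmin()]                      # best butterfly so far
        I = 1.0 / (1.0 + fit)                    # stimulus intensity (minimization)
        frag = c * I ** a                        # fragrance of each butterfly
        for i in range(n):
            r = rng.random()
            if rng.random() < p:                 # global phase: move toward best
                step = (r * r * g - X[i]) * frag[i]
            else:                                # local phase: move between peers
                j, k = rng.integers(0, n, 2)
                step = (r * r * X[j] - X[k]) * frag[i]
            cand = np.clip(X[i] + step, lo, hi)
            if f(cand) < fit[i]:                 # greedy keep (assumed variant)
                X[i], fit[i] = cand, f(cand)
    return X[fit.argmin()], fit.min()

best, val = boa(lambda x: float(np.sum(x ** 2)))
print(best, val)
```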

    A Tent Lévy Flying Sparrow Search Algorithm for Feature Selection: A COVID-19 Case Study

    The "Curse of Dimensionality" induced by the rapid development of information science, might have a negative impact when dealing with big datasets. In this paper, we propose a variant of the sparrow search algorithm (SSA), called Tent L\'evy flying sparrow search algorithm (TFSSA), and use it to select the best subset of features in the packing pattern for classification purposes. SSA is a recently proposed algorithm that has not been systematically applied to feature selection problems. After verification by the CEC2020 benchmark function, TFSSA is used to select the best feature combination to maximize classification accuracy and minimize the number of selected features. The proposed TFSSA is compared with nine algorithms in the literature. Nine evaluation metrics are used to properly evaluate and compare the performance of these algorithms on twenty-one datasets from the UCI repository. Furthermore, the approach is applied to the coronavirus disease (COVID-19) dataset, yielding the best average classification accuracy and the average number of feature selections, respectively, of 93.47% and 2.1. Experimental results confirm the advantages of the proposed algorithm in improving classification accuracy and reducing the number of selected features compared to other wrapper-based algorithms