
    Spatial information of fuzzy clustering based mean best artificial bee colony algorithm for phantom brain image segmentation

    The fuzzy c-means algorithm (FCM) is among the most commonly used methods in medical image segmentation. Nevertheless, traditional FCM clustering has several weaknesses, such as sensitivity to noise and getting stuck in local optima, because FCM is unable to take contextual information into account. To address these problems, this paper presents a spatial-information fuzzy clustering method based on the mean-best artificial bee colony algorithm, called SFCM-MeanABC. The proposed approach uses contextual information in the spatial fuzzy clustering algorithm to reduce sensitivity to noise, and it uses MeanABC's ability to balance exploration and exploitation, exploring the positive and negative directions of the search space to find the best solutions and thereby avoid getting trapped in a local optimum. Experiments were carried out on two kinds of brain images: a phantom MRI brain image with different levels of noise, and a simulated image. The SFCM-MeanABC approach shows promising results compared with SFCM-ABC and other state-of-the-art methods.
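    The spatial-fuzzy-clustering idea above can be sketched without the ABC optimiser: classical FCM memberships are blended with the memberships of neighbouring pixels so that isolated noisy pixels are pulled toward their region. The `spatial_fcm` function, window size, and toy image below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spatial_fcm(img, c=2, m=2.0, p=1, q=1, n_iter=30, seed=0):
    """Minimal sketch of spatial fuzzy c-means on a 1-D 'image'.
    Each pixel's membership is blended with its immediate neighbours'
    memberships (window of 3, wrapping at the borders for brevity).
    Illustrative only; the paper's MeanABC optimiser is omitted."""
    rng = np.random.default_rng(seed)
    x = np.asarray(img, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)     # fuzzy-weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                      # classical FCM membership update
        h = u + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)  # spatial term
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0)                      # spatially smoothed memberships
    return centers, u

# toy "image": a noisy row of pixels with a dark and a bright region
img = [10, 11, 9, 10, 50, 90, 91, 89, 90, 30]
centers, u = spatial_fcm(img)
```

    With the spatial term, the two centres settle near the dark and bright intensity levels despite the noisy pixels.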

    Effective image clustering based on human mental search

    Image segmentation is one of the fundamental techniques in image analysis. One group of segmentation techniques is based on clustering principles, where the association of image pixels is based on a similarity criterion. Conventional clustering algorithms, such as k-means, can be used for this purpose but have several drawbacks, including dependence on initialisation conditions and a higher likelihood of converging to local rather than global optima. In this paper, we propose a clustering-based image segmentation method built on the human mental search (HMS) algorithm. HMS is a recent metaheuristic algorithm modelled on the way people search the space of online auctions. In HMS, each candidate solution is called a bid, and the algorithm comprises three major stages: mental search, which explores the vicinity of a solution using Levy flight to find better solutions; grouping, which places a set of candidate solutions into a group using a clustering algorithm; and moving bids toward promising solution areas. In our image clustering application, bids encode the cluster centres, and we evaluate three different objective functions. In an extensive set of experiments, we compare the efficacy of our proposed approach with several state-of-the-art metaheuristic algorithms, including a genetic algorithm, differential evolution, particle swarm optimisation, the artificial bee colony algorithm, and harmony search. We assess the techniques based on a variety of metrics, including the objective functions, a cluster validity index, and unsupervised and supervised image segmentation criteria. Moreover, we perform some tests in higher dimensions and conduct a statistical analysis to compare our proposed method to its competitors. The obtained results clearly show that the proposed algorithm represents a highly effective approach to image clustering that outperforms other state-of-the-art techniques.
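    The three HMS stages can be sketched as a minimal clustering loop: bids encode k cluster centres, mental search perturbs them with Levy flights, and bids drift toward the best solution found. This is an illustrative sketch, not the published algorithm; the grouping stage is omitted and the update constants are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, beta=1.5, size=1):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def sse(centers, data):
    """Clustering objective: sum of squared distances to the nearest centre."""
    d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

rng = np.random.default_rng(1)
# two well-separated 2-D point clouds as toy "pixel" data
data = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(4, .3, (30, 2))])
k, n_bids = 2, 10
bids = rng.uniform(-1, 5, (n_bids, k, 2))            # each bid encodes k centres
best = min(bids, key=lambda b: sse(b, data)).copy()
for _ in range(100):
    for i in range(n_bids):
        trial = bids[i] + 0.1 * levy_step(rng, size=(k, 2))  # mental search
        if sse(trial, data) < sse(bids[i], data):
            bids[i] = trial
        bids[i] += 0.2 * (best - bids[i])            # move bids toward best area
        if sse(bids[i], data) < sse(best, data):
            best = bids[i].copy()
```

    The heavy-tailed Levy steps give occasional long jumps, which is what lets a bid escape a poor local optimum of the clustering objective.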

    Optimal k-means clustering using artificial bee colony algorithm with variable food sources length

    Clustering is a robust machine learning task that involves dividing data points into a set of groups with similar traits. One of the most widely used methods in this regard is the k-means clustering algorithm, owing to its simplicity and effectiveness. However, this algorithm suffers from the problem of predicting the number and coordinates of the initial cluster centers. In this paper, a method based on an artificial bee colony algorithm with variable-length individuals is proposed to overcome this limitation of k-means. The proposed technique automatically predicts the number of clusters (the value of k) and determines the most suitable coordinates for the initial cluster centers instead of presetting them manually. The results were encouraging compared with the traditional k-means algorithm: the proposed method outperforms traditional k-means on all three tested real-life clustering datasets.
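    One way to realise variable-length food sources is to let each source carry its own number of centres and score it with a complexity-penalised objective, so that adding clusters must pay for itself. The penalty term, colony settings, and toy data below are assumptions for illustration, not the paper's actual objective or datasets.

```python
import numpy as np

def sse(centers, data):
    d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def fitness(centers, data):
    """Variable-length fitness: SSE plus a per-centre penalty
    (an assumed simple criterion, not necessarily the paper's)."""
    return sse(centers, data) + len(centers) * np.log(len(data)) * data.var()

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(c, .4, (40, 2)) for c in [(0, 0), (5, 0), (0, 5)]])

# variable-length food sources: each one encodes between 2 and 6 centres
sources = [rng.uniform(-1, 6, (rng.integers(2, 7), 2)) for _ in range(20)]
for _ in range(2000):
    i = rng.integers(len(sources))
    trial = sources[i] + rng.normal(0, .3, sources[i].shape)  # employed-bee move
    if fitness(trial, data) < fitness(sources[i], data):
        sources[i] = trial
best = min(sources, key=lambda s: fitness(s, data))
k_pred = len(best)   # the predicted number of clusters (the value of k)
```

    Because fitness is comparable across sources of different lengths, the colony can search over k and the centre coordinates at the same time.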

    Chemical and biological reactions of solidification of peat using ordinary portland cement (OPC) and coal ashes

    Construction over peat areas has often posed a challenge to geotechnical engineers. After decades of study on peat stabilisation techniques, no definitive formulation or guideline has been established to handle this issue. Some researchers have proposed solidification of peat, but a few have also discovered that solidified peat appears to lose strength after a certain period of time. Understanding the chemical and biological reactions behind peat solidification is therefore vital to understanding the limitations of this treatment technique. In this study, all three types of peat (fibric, hemic and sapric) were mixed using the Mixing 1 and Mixing 2 formulations, which consisted of ordinary Portland cement, fly ash and bottom ash at various ratios. The peat-binder-filler mixtures were subjected to the unconfined compressive strength (UCS) test, a bacterial count test and chemical elemental analysis using XRF, XRD, FTIR and EDS. Two patterns of strength over the curing period were observed: Mixing 1 samples showed a steady increase in strength up to Day 56, while Mixing 2 samples showed a decrease in strength at Day 28 and Day 56. Samples whose strength increased steadily had lower bacterial counts and enzymatic activity, with an increased quantity of crystallites; samples with lower strength recorded higher bacterial counts and enzymatic activity, with fewer crystallites. XRD analysis showed that pargasite (NaCa2[Mg4Al](Si6Al2)O22(OH)2) formed in the higher-strength samples, while in the lower-strength samples pargasite was predicted to convert into monosodium phosphate and Mg(OH)2 as the bacterial consortium was re-activated. The Michaelis-Menten coefficient Km of the bio-chemical reaction in solidified peat was calculated as 303.60, which indicates that the reaction occurring during the solidification work was inefficient. The kinetics of crystallite formation under enzymatic effect is modelled in double-reciprocal form as 1/v = 135.42(1/[S]) + 0.44605, which means that when less pargasite is formed, more enzyme is secreted.
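    Reading the reported kinetics as a Lineweaver-Burk (double-reciprocal) line is consistent with the stated coefficient, since in that form the slope is Km/Vmax and the intercept is 1/Vmax, so Km = slope/intercept. This check only assumes that interpretation of the reported numbers:

```python
# Double-reciprocal (Lineweaver-Burk) reading of the reported kinetics:
#   1/v = (Km/Vmax) * (1/[S]) + 1/Vmax = 135.42 * (1/[S]) + 0.44605
slope, intercept = 135.42, 0.44605
Vmax = 1 / intercept            # maximum reaction rate
Km = slope * Vmax               # Michaelis-Menten constant = slope/intercept
print(round(Km, 2))             # 303.6, matching the reported coefficient
```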

    Optimizing K-Means Initial Number of Cluster Based Heuristic Approach: Literature Review Analysis Perspective

    The K-means algorithm is a popular clustering technique, widely used in education to cluster and map documents, data, and user performance by skill. K-means is one of the classical and most widely used clustering algorithms and has shown its efficiency in many traditional applications, but its defects appear clearly once the data set becomes much more complicated. Research on the K-means algorithm shows that the number of clusters cannot easily be specified in many real-world applications, so algorithms that require the number of clusters as a parameter cannot be effectively employed. The aim of this paper is to describe, from a literature-review perspective, the K-means problems underlying this research. Analysis of previous studies suggests that selecting the number of clusters randomly causes problems: the algorithm is suited only to producing globular clusters, and it becomes less efficient as the number of clusters grows, eventually making K-means clustering untenable. Based on this literature review, a heuristic optimization approach is proposed to replace the random choice of the initial number of clusters.

    An Evolutionary Optimization Algorithm for Automated Classical Machine Learning

    Machine learning is an evolving branch of computational algorithms that allows computers to learn from experience, make predictions, and solve different problems without being explicitly programmed. However, building a useful machine learning model is a challenging process, requiring human expertise to perform the various tasks properly and to ensure that the primary objective, determining the best and most predictive model, is achieved. These tasks include pre-processing, feature selection, and model selection. Many machine learning models developed by experts are designed manually, by trial and error; in other words, even experts need time and resources to create good predictive machine learning models. The idea of automated machine learning (AutoML) is to automate the machine learning pipeline to relieve the burden of substantial development costs and manual processes. The algorithms leveraged in these systems have different hyper-parameters; different input datasets, in turn, have different features. In both cases, the final performance of the model is closely tied to the selected configuration of features and hyper-parameters, which is why these are considered crucial tasks in AutoML. The computationally expensive nature of tuning hyper-parameters and optimally selecting features creates significant opportunities for filling the research gaps in the AutoML field. This dissertation explores how to select features and tune the hyper-parameters of conventional machine learning algorithms efficiently and automatically. To address these challenges, novel algorithms for hyper-parameter tuning and feature selection are proposed. The hyper-parameter tuning algorithm aims to provide the optimal set of hyper-parameters for three conventional machine learning models (Random Forest, XGBoost and Support Vector Machine) to obtain the best performance scores. The feature selection algorithm, in turn, looks for the optimal subset of features to achieve the highest performance. A hybrid framework is then designed for both hyper-parameter tuning and feature selection, capable of discovering a close-to-optimal configuration of features and hyper-parameters. The framework includes the following components: (1) an automatic feature selection component based on artificial bee colony algorithms and machine learning training, and (2) an automatic hyper-parameter tuning component based on artificial bee colony algorithms and machine learning training, for faster training and convergence of the learning models. The whole framework has been evaluated on four real-world datasets from different applications. The framework is an attempt to alleviate the challenges of hyper-parameter tuning and feature selection through efficient algorithms; however, distributed processing, distributed learning, parallel computing, and other big data solutions are not taken into consideration.
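    An ABC-based hyper-parameter tuning component of this kind can be sketched as a generic artificial bee colony minimiser over a bounded search space. Here the validation-error surface, the bounds, and the colony settings are all hypothetical stand-ins (and the onlooker phase is omitted for brevity); the dissertation's actual component trains real models instead.

```python
import numpy as np

def abc_tune(objective, bounds, n_sources=10, n_iter=60, limit=5, seed=0):
    """Generic artificial bee colony minimiser (employed-bee and scout
    phases only).  `bounds` is a list of (low, high) pairs, one per
    hyper-parameter; `objective` maps a parameter vector to a score."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (n_sources, len(bounds)))
    fit = np.array([objective(p) for p in pop])
    trials = np.zeros(n_sources, int)
    for _ in range(n_iter):
        for i in range(n_sources):                    # employed-bee phase
            k = rng.integers(n_sources)
            phi = rng.uniform(-1, 1, len(bounds))
            cand = np.clip(pop[i] + phi * (pop[i] - pop[k]), lo, hi)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:         # scout phase: restart
            pop[i] = rng.uniform(lo, hi)              # abandoned sources
            fit[i] = objective(pop[i])
            trials[i] = 0
    return pop[fit.argmin()], fit.min()

# hypothetical validation-error surface over two hyper-parameters
# (say, learning rate and regularisation strength), minimised at (0.1, 1.0)
err = lambda p: (p[0] - 0.1) ** 2 + (p[1] - 1.0) ** 2
best, best_err = abc_tune(err, [(0.0, 1.0), (0.1, 10.0)])
```

    In the real framework, `objective` would train and validate a Random Forest, XGBoost, or SVM configuration, which is where the computational cost lies.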

    Extending the Lifetime of Wireless Sensor Networks Based on an Improved Multi-objective Artificial Bees Colony Algorithm

    Reducing the sensors' energy expenditure to prolong the network lifespan for as long as possible remains a fundamental problem in the field of wireless networks, particularly in applications with inaccessible environments, which impose crucial constraints on sensor replacement. It is therefore necessary to design adaptive routing protocols that take into account the environmental constraints and the limited energy of the sensors. To obtain an energy-efficient routing protocol, a new cluster-head (CH) selection strategy using a modified multi-objective artificial bee colony (MOABC) optimization is defined. The modified MOABC is based on roulette-wheel selection over the non-dominated solutions of the repository (hypercubes): a rank is assigned to each hypercube based on its density of solutions in the current iteration, and a random food source is then elected by roulette from the densest hypercube. The proposed work aims to find the optimal set of CHs based on their residual energies, to ensure an optimal balance in the nodes' energy consumption. The achieved results prove that the proposed MOABC-based protocol considerably outperforms recent studies and well-known energy-efficient protocols, namely LEACH, C-LEACH, SEP, TSEP, DEEC, DDEEC, and EDEEC, in terms of energy efficiency, stability, and network lifespan extension.
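    The modified selection step can be sketched as follows. The grid resolution, the toy two-objective archive, and the choice to weight cubes directly by how many solutions they hold all follow one reading of the description above, and are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def roulette_pick(repository, n_bins=4, seed=0):
    """Grid the archive's objective space into hypercubes, weight each
    cube by the number of solutions it holds, roulette-select a cube,
    then pick a random member of that cube (sketch of the modified
    MOABC selection; denser cubes are favoured, per the abstract)."""
    rng = np.random.default_rng(seed)
    objs = np.asarray(repository, dtype=float)
    lo, hi = objs.min(0), objs.max(0)
    # map each solution to a hypercube index along every objective axis
    cells = ((objs - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    keys = [tuple(c) for c in cells.clip(0, n_bins - 1)]
    uniq = list(set(keys))
    counts = np.array([keys.count(u) for u in uniq], dtype=float)
    probs = counts / counts.sum()          # denser cube -> higher probability
    cube = uniq[rng.choice(len(uniq), p=probs)]
    members = [i for i, k in enumerate(keys) if k == cube]
    return int(rng.choice(members))

# toy two-objective archive (e.g. energy consumption vs. load imbalance);
# the first three points crowd into one hypercube
archive = [(0.1, 0.9), (0.12, 0.88), (0.11, 0.91), (0.5, 0.5), (0.9, 0.1)]
idx = roulette_pick(archive)
```

    Over many draws the crowded cube is chosen most often, which is the bias the roulette wheel encodes.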