
    HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images

    Recently, a novel disease known as COVID-19 has spread worldwide, starting in China and claiming many lives. Many attempts have been made to identify COVID-19 infection; analysis of chest X-ray images is one of them. According to X-ray analysis, COVID-19 can cause bilateral pulmonary parenchymal ground-glass and consolidative pulmonary opacities, sometimes with a rounded morphology and a peripheral lung distribution. Unfortunately, determining from an X-ray image alone whether a person is infected with COVID-19 is difficult. Machine learning techniques can classify X-ray images to specify whether a person is severely infected, mildly infected, or not infected. To improve the classification accuracy of the machine learning, the region of interest within the image that contains the features of COVID-19 must be extracted. This is known as the image segmentation problem (ISP). Many techniques have been proposed to solve the ISP; the most commonly used, owing to its simplicity, speed, and accuracy, is threshold-based segmentation. This paper proposes a new hybrid thresholding approach to the ISP for COVID-19 chest X-ray images, integrating a novel meta-heuristic, the slime mould algorithm (SMA), with the whale optimization algorithm (WOA) to maximize Kapur's entropy. The performance of the integrated SMA has been evaluated on 12 chest X-ray images with threshold levels up to 30 and compared with five algorithms: the L-SHADE algorithm, the WOA, the firefly algorithm (FFA), the Harris hawks algorithm (HHA), and the salp swarm algorithm (SSA), in addition to the standard SMA.
The experimental results demonstrate that the proposed algorithm outperforms the standard SMA under Kapur's entropy on all the metrics used, and that the standard SMA in turn performs better than the other algorithms in the comparison under all the metrics
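Kapur's entropy, the objective that the hybrid approach maximizes, can be sketched as follows. This is an illustrative implementation assuming a normalized grey-level histogram and integer threshold indices; it is not the paper's code.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy objective for multilevel thresholding.

    hist: normalized grey-level histogram (sums to 1).
    thresholds: sorted bin indices splitting the grey levels into classes.
    Returns the sum of per-class entropies (to be maximized).
    """
    bounds = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        p = hist[lo:hi]
        w = p.sum()                   # class probability mass
        if w <= 0:
            continue
        q = p[p > 0] / w              # class-conditional probabilities
        total += -np.sum(q * np.log(q))
    return total
```

A meta-heuristic such as SMA-WOA then searches over threshold vectors for the one maximizing this value; for a uniform histogram, a centered threshold scores higher than an extreme one.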

    Improvements on the bees algorithm for continuous optimisation problems

    This work focuses on improvements to the Bees Algorithm to enhance its performance, especially in terms of convergence rate. In the first enhancement, a pseudo-gradient Bees Algorithm (PG-BA) compares the fitness as well as the positions of previous and current bees, so that the best bees in each patch are appropriately guided towards a better search direction after each consecutive cycle. Unlike a typical gradient search method, this approach eliminates the need to differentiate the objective function. The improved algorithm is tested on several numerical benchmark functions as well as on the training of a neural network. The results from the experiments are then compared to the standard variant of the Bees Algorithm and to other swarm intelligence procedures. The data analysis generally confirms that PG-BA is effective at speeding up convergence to the optimum. Next, an approach to avoid the formation of overlapping patches is proposed. The Patch Overlap Avoidance Bees Algorithm (POA-BA) is designed to avoid redundancy in the search area, especially when a site is deemed unprofitable. This method is similar to Tabu Search (TS) in that POA-BA forbids the exact exploitation of previously visited solutions along with their corresponding neighbourhoods. Patches are not allowed to intersect, not just in the next generation but also within the current cycle. This reduces the number of patches that materialise on the same peak (maximisation) or in the same valley (minimisation), ensuring a thorough search of the problem landscape as bees are distributed around the scaled-down area. The same benchmark problems as for PG-BA were applied to this modified strategy with reasonable success. Finally, the Bees Algorithm is revised to locate all of the global optima as well as the substantial local peaks in a single run.
These multiple solutions of comparable fitness offer alternatives for decision makers to choose from. In this so-called Extended Bees Algorithm (EBA), patches are formed only for the fittest bees from different peaks, identified using a hill-valley mechanism. This permits the maintenance of diversified solutions throughout the search process, in addition to minimising the chances of getting trapped in a local optimum. This version proved beneficial when tested on numerous multimodal optimisation problems
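The hill-valley mechanism used to decide whether two solutions sit on different peaks can be illustrated with a minimal sketch. The sampling scheme and function signature below are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def hill_valley(f, x, y, samples=5):
    """Return True if x and y appear to lie on different peaks of f
    (maximization): some interior point on the segment between them
    has lower fitness than both endpoints."""
    floor = min(f(x), f(y))
    for t in np.linspace(0, 1, samples + 2)[1:-1]:   # interior points only
        if f((1 - t) * x + t * y) < floor:
            return True      # a valley separates them: different peaks
    return False
```

EBA-style niching would then admit a bee as a new patch centre only when `hill_valley` reports a valley between it and every existing patch centre.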

    Swarm intelligence: novel tools for optimization, feature extraction, and multi-agent system modeling

    Abstract: Animal swarms in nature are able to adapt to dynamic changes in their environment, and through cooperation they can solve problems that are crucial for their survival. Using only local interactions with other members of the swarm and with the environment, they can achieve a common goal more efficiently than a single individual would. This problem-solving behavior that results from the multiplicity of such interactions is referred to as Swarm Intelligence. The mathematical models of swarming behavior in nature were initially proposed to solve optimization problems. Nevertheless, this decentralized approach can be a valuable tool for a variety of applications, where emerging global patterns represent a solution to the task at hand. Methods for the solution of difficult computational problems based on Swarm Intelligence have been experimentally demonstrated and reported in the literature. However, a general framework that would facilitate their design does not yet exist. In this dissertation, a new general design methodology for Swarm Intelligence tools is proposed. By defining a discrete space in which the members of the swarm can move, by modifying the rules of local interactions, and by setting an adequate objective function for solution evaluation, the proposed methodology is tested in various domains. The dissertation presents a set of case studies and focuses on two general approaches. One approach is to apply Swarm Intelligence as a tool for optimization and feature extraction; the other is to model multi-agent systems so that they resemble swarms of animals in nature, providing them with the ability to autonomously perform the task at hand. Artificial swarms are designed to be autonomous, scalable, robust, and adaptive to changes in their environment. In this work, methods that exploit one or more of these features are presented.
    First, the proposed methodology is validated in a real-world scenario seen as a combinatorial optimization problem. Then a set of novel tools for feature extraction, more precisely adaptive edge detection and broken-edge linking in digital images, is proposed. A novel data clustering algorithm is also proposed and applied to image segmentation. Finally, a scalable algorithm based on the proposed methodology is developed for distributed task allocation in multi-agent systems, and applied to a swarm of robots. The newly proposed general methodology provides a guideline for future developers of Swarm Intelligence tools.
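The design methodology described here (a discrete space for the swarm, local interaction rules, and an objective function for evaluating solutions) might be sketched, purely illustratively, as agents on a grid that follow a greedy local rule with occasional random exploration. All names and parameters below are assumptions, not the dissertation's framework.

```python
import random

def swarm_search(objective, grid_size, n_agents=10, steps=50, p_explore=0.2, seed=0):
    """Minimal sketch: agents move on a discrete 2-D torus grid, each step
    choosing the best neighbouring cell under the objective (local rule)
    or a random neighbour (exploration)."""
    rng = random.Random(seed)
    agents = [(rng.randrange(grid_size), rng.randrange(grid_size))
              for _ in range(n_agents)]
    best = max(agents, key=objective)
    for _ in range(steps):
        moved = []
        for (x, y) in agents:
            nbrs = [((x + dx) % grid_size, (y + dy) % grid_size)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
            if rng.random() < p_explore:
                nxt = rng.choice(nbrs)           # random local move
            else:
                nxt = max(nbrs, key=objective)   # greedy local move
            moved.append(nxt)
        agents = moved
        best = max(agents + [best], key=objective)  # track historical best
    return best
```

Changing the grid, the neighbourhood rule, or the objective adapts the same skeleton to optimization, feature extraction, or task-allocation settings.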

    The Crisis of Democracy in Malaysia: The Case of Parti Islam SeMalaysia (PAS) in the 1969 General Election

    In its electoral history, Malaysia has had a dark experience: the crisis of democracy in the 1969 general election. The 1969 election saw not only competition among the political parties contesting it, but also ethnic riots, known as the May 1969 riots, involving the Malay and Chinese communities

    Performance Enhancement Of Artificial Bee Colony Optimization Algorithm

    The Artificial Bee Colony (ABC) algorithm is a recently proposed bio-inspired optimization algorithm simulating the foraging behaviour of honeybees. Although the literature has revealed the superiority of the ABC algorithm on numerous benchmark functions and real-world applications, the standard ABC and its variants have been found to suffer from slow convergence, proneness to local-optima traps, poor exploitation, and a poor capability to replace exhausted potential solutions. To overcome these problems, this research proposes several modified and new ABC variants. The Gbest Influenced-Random ABC (GRABC) algorithm systematically exploits two different mutation equations for appropriate exploration and exploitation of the search space; the Multiple Gbest-guided ABC (MBABC) algorithm enhances the capability of locating the global optimum by exploiting the multiple best regions of the search space found so far; the Enhanced ABC (EABC) algorithm speeds up exploration for optimal solutions based on the best region of the search space found so far; and the Enhanced Probability-Selection ABC (EPS-ABC) algorithm, a modified version of the Probability-Selection ABC algorithm, simultaneously capitalizes on three different mutation equations for determining the global optimum. All the proposed ABC variants incorporate a proposed intelligent scout-bee scheme, while MBABC and EABC also employ a novel elite-update scheme
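The kind of mutation equations these variants build on can be illustrated with the classic ABC candidate-generation rule and its well-known gbest-guided extension. This is a sketch for orientation only; the abstract does not give the actual GRABC/MBABC/EABC equations.

```python
import random

def abc_candidate(pop, i, gbest=None, C=1.5, rng=random):
    """Generate a candidate food source for bee i (one-dimension update,
    as in the standard ABC). With gbest=None this is the classic random
    mutation v_ij = x_ij + phi*(x_ij - x_kj); passing gbest adds the
    gbest-guided term psi*(gbest_j - x_ij)."""
    dim = len(pop[i])
    j = rng.randrange(dim)                                   # dimension to perturb
    k = rng.choice([m for m in range(len(pop)) if m != i])   # random partner bee
    v = list(pop[i])
    phi = rng.uniform(-1, 1)
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    if gbest is not None:
        psi = rng.uniform(0, C)
        v[j] += psi * (gbest[j] - pop[i][j])                 # pull toward gbest
    return v
```

A variant that "exploits two different mutation equations" would, in this sketch, choose between the plain and the gbest-guided form per candidate, trading exploration against exploitation.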

    IST Austria Thesis

    Social insect colonies tend to have numerous members which function together like a single organism, in such harmony that the term "super-organism" is often used. In this analogy the reproductive caste is analogous to the primordial germ cells of a metazoan, while the sterile worker caste corresponds to somatic cells. The worker castes, like tissues, are in charge of all functions of a living being besides reproduction. The establishment of new super-organismal units (i.e. new colonies) is accomplished by the co-dependent castes. The term oftentimes goes beyond a metaphor: we invoke it when we speak about the metabolic rate, thermoregulation, nutrient regulation and gas exchange of a social insect colony. Furthermore, we assert that the super-organism has an immune system and benefits from "social immunity". Social immunity was first invoked by evolutionary biologists to resolve the apparent discrepancy between the expected high frequency of disease outbreaks amongst numerous, closely related, tightly interacting hosts living in stable and microbially rich environments, and the exceptionally scarce accounts of epidemics in natural populations. Social immunity comprises a multi-layer assembly of behaviours which have evolved to effectively keep the pathogenic enemies of a colony at bay. The field of social immunity has drawn interest as it becomes increasingly urgent to stop the collapse of pollinator species and curb the growth of invasive pests. In the past decade, several mechanisms of social immune responses have been dissected, but many more questions remain open. I present my work in two experimental chapters. In the first, I use invasive garden ants (*Lasius neglectus*) to study how pathogen load and its distribution among nestmates affect the grooming response of the group. Any given group of ants will carry out the same total grooming work, but will direct its grooming effort towards individuals carrying a relatively higher spore load.
Contrary to expectation, the highest risk of transmission does not stem from grooming highly contaminated ants; instead, we suggest that the grooming response likely minimizes spore loss to the environment, reducing contamination through inadvertent pickup from the substrate. The second chapter takes a comparative developmental approach: I follow black garden ant queens (*Lasius niger*) and their colonies from the mating flight, through hibernation, for a year. Colonies which grow fast from the start have a lower chance of surviving hibernation, and those which survive grow at a slower pace later. This holds for colonies of both naive and challenged queens. Early pathogen exposure of the queens changes colony dynamics in an unexpected way: colonies of exposed queens are more likely to grow slowly and to recover in numbers only after they survive hibernation. In addition to the two experimental chapters, this thesis includes a co-authored published review on organisational immunity, in which we enlist the experimental evidence and theoretical framework on which this hypothesis is built, identify the caveats, and underline how the field is ripe to overcome them. In a final chapter, I describe my part in two collaborative efforts: one to develop an image-based tracker, and the other to develop a classifier for ant behaviour

    Swarm-intelligence-based algorithms applied to multilevel image thresholding

    Advisor: Prof. Dr. Leandro dos Santos Coelho. Master's dissertation - Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Engenharia Elétrica. Defense: Curitiba, 20/08/2018. Includes references: p. 117-122. Area of concentration: Electronic Systems. Abstract: Image processing is a field that grows as digital information generation and storage technologies evolve. One of the initial stages of image processing is segmentation, and multilevel thresholding is one of the simplest segmentation approaches. A relevant research objective in this field is the design of approaches aimed at separating the different objects in an image into groups, through thresholds, to facilitate the interpretation of the information contained in the image. An image loses information, or entropy, when it is thresholded. Kapur's multilevel thresholding equation calculates, from the chosen thresholds, how much information an image will retain after thresholding. Thus, by maximizing Kapur's multilevel thresholding equation, it is possible to determine the thresholds that return an image with the largest entropy value.
The higher the number of thresholds, the greater the difficulty in finding the best solution, due to the significant increase in the number of possible solutions. The objective of this dissertation is to present a comparative study of five swarm intelligence optimization metaheuristics, namely Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO), Fractional-Order Darwinian Particle Swarm Optimization (FO-DPSO), the Grey Wolf Optimizer (GWO), and the Ant Lion Optimizer (ALO), in order to identify which one obtains the best solution and convergence in terms of the objective function related to the entropy of the image. A contribution of this dissertation is the application of different optimization metaheuristics to the multilevel image thresholding problem, as well as the study of the impact of their control variables (hyperparameters) on the problem in question. Experiments are conducted with four images: two satellite images (the Hunza River and Yellowstone) and two benchmark test images obtained from MIT's (Massachusetts Institute of Technology) Electrical Engineering and Computer Science Center. The results are compared considering the mean and standard deviation of each resulting image's entropy. Based on the results obtained, it is concluded that the most suitable of the evaluated algorithms for the multilevel image thresholding problem is the GWO, owing to its superior performance relative to the other tested algorithms and the satisfactory entropies of the resulting images. Key words: Image segmentation. Multilevel thresholding. Kapur's entropy. Swarm intelligence. Particle swarm optimization. Grey wolf optimizer. Ant lion optimizer
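A single iteration of the GWO, the best-performing algorithm in this study, can be sketched as follows. This is a minimal minimization version following the common description of the algorithm (leaders alpha, beta, delta; coefficient vectors A and C), not this dissertation's code.

```python
import numpy as np

def gwo_step(wolves, fitness, a, rng=None):
    """One iteration of the Grey Wolf Optimizer (minimization sketch).
    wolves: (n, d) array of positions; fitness: callable on one position;
    a: exploration parameter, decreased linearly from 2 to 0 over the run."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guides = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2                  # coefficient vectors
            guides.append(leader - A * np.abs(C * leader - x))
        new[i] = np.mean(guides, axis=0)                   # average of three guides
    return new
```

For multilevel thresholding, `fitness` would be the negated Kapur entropy of a candidate threshold vector, so that minimization maximizes the retained entropy.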

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: On the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems. On the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. 
These were tested over complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications for Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in the solution of problems in such subjects. A milestone in this area is the translation of the Computer Vision and medical issues into optimization problems. Additionally, this work also strives to provide tools for combating public health issues by expanding the concepts to automated detection and diagnosis aid for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece for expanding these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices
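The milestone mentioned here, translating Computer Vision tasks into optimization problems, can be illustrated with template matching: the task becomes maximizing a similarity objective over candidate offsets. The brute-force normalized cross-correlation below is an illustrative sketch of that formulation, not the thesis's method.

```python
import numpy as np

def match_template(image, template):
    """Template matching cast as optimization: search all offsets (the
    search space) for the one maximizing normalized cross-correlation
    (the objective). Returns ((row, col), score)."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    best, best_score = (0, 0), -np.inf
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            p = image[y:y + h, x:x + w] - image[y:y + h, x:x + w].mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score
```

Once stated this way, the exhaustive scan can be replaced by any of the evolutionary or swarm optimizers discussed in the thesis, searching the offset space instead of enumerating it.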