14 research outputs found

    An experimental study of a fuzzy adaptive emperor penguin optimizer for global optimization problem

    Emperor Penguin Optimizer (EPO) is a recently developed population-based meta-heuristic algorithm that simulates the huddling behavior of emperor penguins. Mixed results have been observed on the performance of EPO in solving general optimization problems. Within EPO, two parameters (namely f and l) must be tuned to ensure a good balance between exploration (i.e., roaming unknown locations) and exploitation (i.e., refining the current known best). Since the search contour varies with the optimization problem, the tuning of f and l is problem-dependent, and there is no one-size-fits-all approach. To alleviate these problems, an adaptive mechanism can be introduced into EPO. This paper proposes a fuzzy adaptive variant of EPO, namely the Fuzzy Adaptive Emperor Penguin Optimizer (FAEPO). As the name suggests, FAEPO adaptively tunes the parameters f and l throughout the search via fuzzy decisions based on three measures of the current search: quality, success rate, and diversity. A test suite of twelve optimization benchmark test functions and three global optimization problems (Team Formation Optimization (TFO), Low Autocorrelation Binary Sequence (LABS), and Modified Condition/Decision Coverage (MC/DC) test-case generation) was solved using the proposed algorithm, and the results were compared against those of benchmark meta-heuristic algorithms. The experimental results demonstrate that FAEPO significantly improves on its predecessor (EPO) and delivers superior performance against the competing meta-heuristic algorithms, including an improved variant of EPO (IEPO).
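The abstract does not specify FAEPO's fuzzy rule base, so the following is only a minimal sketch of the fuzzy-adaptive idea: the control parameters f and l are re-tuned each generation from normalised search measures. The membership functions, rules, and parameter ranges below are illustrative assumptions, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adapt(quality, success_rate, diversity):
    """Map normalised search measures in [0, 1] to (f, l).

    Rule of thumb encoded here (an assumption): a poor, low-diversity,
    low-success search should push f and l toward their exploration end,
    while a good, diverse search should push them toward exploitation.
    """
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)

    # Fire two coarse rules and defuzzify with a weighted average.
    explore = max(low(quality), low(success_rate), low(diversity))
    exploit = max(high(quality), high(success_rate), high(diversity))
    w = explore + exploit + 1e-12
    # Assumed parameter ranges: f in [2, 3], l in [1.5, 2],
    # with the larger value taken as the exploration end.
    f = (explore * 3.0 + exploit * 2.0) / w
    l = (explore * 2.0 + exploit * 1.5) / w
    return f, l
```

A fully stagnated, collapsed population (all three measures near 0) then yields the exploratory setting (f, l) close to (3.0, 2.0), while a thriving search yields the exploitative setting close to (2.0, 1.5).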

    Scientific research trends about metaheuristics in process optimization and case study using the desirability function

    This study aimed to identify research gaps in metaheuristics, based on publications indexed in a database in 2015, and to present a case study of a company in the Sul Fluminense region using the desirability function. To achieve this goal, applied research of an exploratory nature and qualitative approach was carried out, together with research of a quantitative nature. The methods and technical procedures adopted were, respectively, bibliographical research, a literature review, and a case study. As its contribution, this research offers a holistic view of opportunities to carry out new investigations on the theme. The study gaps identified were prioritized and detailed, highlighting the viability of metaheuristic algorithms and their benefits for process optimization.

    An Improved Binary Grey-Wolf Optimizer with Simulated Annealing for Feature Selection

    This paper proposes improvements to the binary grey-wolf optimizer (BGWO) to solve the feature selection (FS) problem associated with high data dimensionality and irrelevant, noisy, and redundant data, allowing machine learning algorithms to attain better classification/clustering accuracy in less training time. We propose three variants of BGWO in addition to the standard variant, applying different transfer functions to tackle the FS problem. Because BGWO generates continuous values while FS needs discrete ones, a number of V-shaped, S-shaped, and U-shaped transfer functions were investigated for incorporation with BGWO to convert its continuous values to binary. This investigation showed that the performance of BGWO is affected by the choice of transfer function. In the first variant, abbreviated IBGWO, we reduce the local-minima problem by integrating an exploration capability that updates the position of a grey wolf randomly within the search space with a certain probability. Next, a novel mutation strategy is proposed that selects a number of the worst grey wolves in the population and updates each of them either toward the best solution or randomly within the search space, with a certain probability determining which of the two updates applies; the number of worst grey wolves selected by this strategy increases linearly with the iteration. This strategy is combined with IBGWO to produce the second variant of BGWO, abbreviated LIBGWO. In the last variant, simulated annealing (SA) was integrated with LIBGWO to search around the best-so-far solution at the end of each iteration in order to identify better solutions. The performance of the proposed variants was validated on 32 datasets taken from the UCI repository and compared with six wrapper feature selection methods. The experiments show the superiority of the proposed improved variants in producing better classification accuracy than the other selected wrapper feature selection algorithms.
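Transfer functions of the kind the paper investigates are standard in binary metaheuristics; a minimal sketch follows. The exact variants used in BGWO may differ, and the U-shaped parameters here are assumptions.

```python
import math
import random

def s_shaped(x):
    """Classic sigmoid transfer function, mapping R to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer function |tanh(x)|, symmetric about 0."""
    return abs(math.tanh(x))

def u_shaped(x, alpha=1.0, beta=2.0):
    """U-shaped transfer function alpha*|x|^beta, clipped to [0, 1]."""
    return min(1.0, alpha * abs(x) ** beta)

def binarise(position, transfer, rng=random.random):
    """Map a continuous wolf position to a binary feature mask:
    bit i is set when a uniform draw falls below the transfer value."""
    return [1 if rng() < transfer(x) else 0 for x in position]
```

For FS, each 1-bit in the resulting mask keeps the corresponding feature; the choice of transfer shape changes how aggressively small position values flip bits, which is consistent with the paper's observation that performance depends on the transfer function.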

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms; neural modelling, architectures, and learning algorithms; biologically inspired optimization algorithms; algorithms for autonomous driving; probabilistic models and Bayesian reasoning; and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone wishing to pursue research in artificial intelligence, machine learning, and their widespread applications.

    Remote sensing imagery segmentation: A hybrid approach

    In remote sensing imagery, segmentation techniques often fail to capture multiple regions of interest due to challenges such as dense features, low illumination, uncertainty, and noise; exploiting such vast and redundant information makes segmentation a difficult task. Existing multilevel thresholding techniques achieve low segmentation accuracy with high time complexity due to the absence of spatial information. To mitigate this issue, this paper presents a new robust automatic multi-thresholding algorithm for remote sensing image analysis based on Rényi’s entropy and a modified cuckoo search. In the proposed method, the modified cuckoo search algorithm is combined with Rényi’s entropy thresholding criterion to determine optimal thresholds; within the modified cuckoo search, the Lévy flight step size was adjusted to improve the convergence rate. An experimental analysis was conducted to validate the proposed method, both qualitatively and quantitatively, against existing metaheuristic-based thresholding methods, with its performance intensively examined on high-dimensional remote sensing imagery. Moreover, a numerical parameter analysis compares the segmented results against the gray-level co-occurrence matrix, Otsu energy curve, minimum cross entropy, and Rényi’s entropy-based thresholding. Experiments demonstrated that the proposed approach is effective and successful in attaining accurate segmentation with low time complexity.
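A minimal sketch of the Rényi's-entropy thresholding criterion that such methods maximise, for a single threshold on a normalised grey-level histogram. The order parameter alpha is an assumed value, and an exhaustive search stands in for the modified cuckoo search optimiser, which is not reproduced here.

```python
import math

def renyi_criterion(hist, t, alpha=0.5):
    """Sum of the Rényi entropies of the two classes split at threshold t.

    hist: normalised grey-level histogram (probabilities summing to 1).
    Each class distribution is renormalised by its total probability
    before its Rényi entropy (1/(1-alpha)) * ln(sum p^alpha) is taken.
    """
    assert alpha > 0 and alpha != 1
    w_a = sum(hist[: t + 1]) or 1e-12
    w_b = sum(hist[t + 1 :]) or 1e-12
    h_a = math.log(sum((p / w_a) ** alpha for p in hist[: t + 1] if p > 0))
    h_b = math.log(sum((p / w_b) ** alpha for p in hist[t + 1 :] if p > 0))
    return (h_a + h_b) / (1.0 - alpha)

def best_threshold(hist, alpha=0.5):
    """Exhaustive-search stand-in for the metaheuristic optimiser."""
    return max(range(len(hist) - 1),
               key=lambda t: renyi_criterion(hist, t, alpha))
```

On a bimodal histogram such as [0.4, 0.1, 0.1, 0.4], the criterion peaks at the valley between the two modes, which is exactly the property the cuckoo search exploits over many thresholds at once.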

    Deep Learning and parallelization of Meta-heuristic Methods for IoT Cloud

    Healthcare 4.0, one of the outcomes of the Fourth Industrial Revolution, is driving a major transformation of the medical field; its facilities and advantages have improved average life expectancy and reduced population mortality. This paradigm depends on intelligent medical devices (wearable devices, sensors), which generate massive amounts of data that must be analyzed and treated with appropriate data-driven algorithms powered by Artificial Intelligence, such as machine learning and deep learning (DL). However, one of the most significant limitations of DL techniques is the long time required for the training process, and the real-time application of DL techniques, especially in sensitive domains such as healthcare, remains an open question. On the other hand, meta-heuristics have achieved good results in optimizing machine learning models. The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention; IoT technologies are crucial in enhancing many real-life smart applications that can improve quality of life. Cloud computing has emerged as a key enabler for IoT applications because it provides scalable, on-demand, anytime, anywhere access to computing resources. In this thesis, we are interested in improving the efficacy and performance of computer-aided diagnosis systems in the medical field by decreasing model complexity and increasing data quality. To accomplish this, three contributions are proposed. First, we propose a computer-aided diagnosis system for neonatal seizure detection that uses metaheuristics to optimize a convolutional neural network (CNN) model and thereby enhance the system’s performance. Second, we focus on the COVID-19 pandemic and propose a computer-aided diagnosis system for its detection; in this contribution, we investigate the Marine Predator Algorithm to optimize the configuration of the CNN model and improve the system’s performance. The third contribution also aims to improve the performance of the computer-aided diagnosis system for COVID-19, this time by exploring the power of optimizing the data using different AI methods, such as Principal Component Analysis (PCA), the discrete wavelet transform (DWT), and the Teager-Kaiser Energy Operator (TKEO). The proposed methods and the obtained results were validated through comparative studies using benchmark and public medical data.
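Of the data-optimization methods named above, the Teager-Kaiser Energy Operator has a particularly simple discrete form, psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal sketch follows; how the thesis applies it to the medical signals is not reproduced here.

```python
def tkeo(x):
    """Discrete Teager-Kaiser energy of a 1-D signal.

    Returns a sequence of length len(x) - 2, since the operator needs
    one sample of context on each side of the current sample.
    """
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

For a pure sinusoid the operator is approximately constant and proportional to both the squared amplitude and the squared frequency, which is why it is useful as an energy feature for signals such as EEG.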

    A Colour Wheel to Rule them All: Analysing Colour & Geometry in Medical Microscopy

    Personalized medicine is a rapidly growing field in healthcare that aims to customize medical treatments and preventive measures based on each patient’s unique characteristics, such as their genes, environment, and lifestyle factors. This approach acknowledges that people with the same medical condition may respond differently to therapies and seeks to optimize patient outcomes while minimizing the risk of adverse effects. To achieve these goals, personalized medicine relies on advanced technologies, such as genomics, proteomics, metabolomics, and medical imaging. Digital histopathology, a crucial aspect of medical imaging, provides clinicians with valuable insights into tissue structure and function at the cellular and molecular levels. By analyzing small tissue samples obtained through minimally invasive techniques, such as biopsy or aspirate, doctors can gather extensive data to evaluate potential diagnoses and clinical decisions. However, digital analysis of histology images presents unique challenges, including the loss of 3D information and stain variability, which is further complicated by sample variability. Limited access to data exacerbates these challenges, making it difficult to develop accurate computational models for research and clinical use in digital histology. Deep learning (DL) algorithms have shown significant potential for improving the accuracy of Computer-Aided Diagnosis (CAD) and personalized treatment models, particularly in medical microscopy. However, factors such as limited generalizability, lack of interpretability, and bias sometimes hinder their clinical impact. Furthermore, the inherent variability of histology images complicates the development of robust DL methods. Thus, this thesis focuses on developing new tools to address these issues.
Our essential objective is to create transparent, accessible, and efficient methods, based on classical principles from various disciplines, including histology, medical imaging, mathematics, and art, to tackle microscopy image registration and colour analysis successfully. These methods can contribute significantly to the advancement of personalized medicine, particularly in studying the tumour microenvironment for diagnosis and therapy research. First, we introduce a novel automatic method for colour analysis and non-rigid histology registration, enabling the study of morphological heterogeneity in tumour biopsies. This method achieves accurate registration of tissue cuts, drastically reducing landmark distance and attaining excellent border overlap. Second, we introduce ABANICCO, a novel colour analysis method that combines geometric analysis, colour theory, fuzzy colour spaces, and multi-label systems for automatically classifying pixels into a set of conventional colour categories. ABANICCO outperforms benchmark methods in accuracy and simplicity. It is computationally straightforward, making it useful in scenarios involving changing objects, limited data, unclear boundaries, or users who lack prior knowledge of the image or colour theory. Moreover, its results can be modified to match each particular task. Third, we apply the acquired knowledge to create a novel pipeline of rigid histology registration and ABANICCO colour analysis for the in-depth study of triple-negative breast cancer biopsies. The resulting heterogeneity map and tumour score provide valuable insights into the composition and behaviour of the tumour, informing clinical decision-making and guiding treatment strategies. Finally, we consolidate the developed ideas into an efficient pipeline for tissue reconstruction and multi-modality data integration on tuberculosis infection data.
This enables accurate analysis of element distributions, to better understand the interactions between bacteria, host cells, and the immune system during the course of infection. The methods proposed in this thesis represent a transparent approach to computational pathology, addressing the needs of medical microscopy registration and colour analysis while bridging the gap between clinical practice and computational research. Moreover, our contributions can help develop and train better, more robust DL methods.
In an era in which personalized medicine is revolutionizing healthcare, it is increasingly important to tailor treatments and preventive measures to each patient’s genetic makeup, environment, and lifestyle. By employing advanced technologies such as genomics, proteomics, metabolomics, and medical imaging, personalized medicine strives to streamline treatment to improve outcomes and reduce side effects. Medical microscopy, a crucial aspect of personalized medicine, allows clinicians to collect and analyze large amounts of data from small tissue samples. This is particularly relevant in oncology, where cancer therapies can be optimized based on the specific tissue appearance of each tumour. Computational pathology, a subfield of computer vision, seeks to create algorithms for the digital analysis of biopsies. However, before a computer can analyze medical microscopy images, several steps are required to obtain images of the samples. The first stage consists of collecting and preparing a tissue sample from the patient. So that it can easily be observed under the microscope, the sample is cut into ultrathin sections. This delicate procedure is not without difficulties: the fragile tissue can become distorted, torn, or perforated, compromising the overall integrity of the sample. Once the tissue is properly prepared, it is usually treated with characteristic coloured stains. These stains accentuate different cell and tissue types with specific colours, making it easier for medical professionals to identify particular features. This improved visualization, however, comes at a cost: the stains can hinder computational analysis of the images by mixing improperly, bleeding into the background, or altering the contrast between elements. The last step of the process is digitizing the sample. High-resolution images of the tissue are taken at different magnifications, enabling computer analysis. This stage also has its obstacles: factors such as incorrect camera calibration or inadequate lighting conditions can distort or blur the images, and the resulting whole-slide images are considerably large, further complicating the analysis. Overall, while the preparation, staining, and digitization of medical microscopy samples are fundamental to digital analysis, each of these steps can introduce additional challenges that must be addressed to guarantee accurate analysis. Moreover, converting a complete tissue volume into a few stained sections drastically reduces the available 3D information and introduces great uncertainty. Deep learning (DL) solutions hold great promise for personalized medicine, but their clinical impact is sometimes hindered by factors such as limited generalizability, overfitting, opacity, and lack of interpretability, in addition to ethical concerns and, in some cases, private incentives.
Furthermore, the variability of histology images complicates the development of robust DL methods. To overcome these challenges, this thesis presents a series of highly robust and interpretable methods, based on classical principles from histology, medical imaging, mathematics, and art, for aligning microscopy sections and analyzing their colours. Our first contribution is ABANICCO, an innovative colour-analysis method that offers objective, unsupervised colour segmentation and allows its subsequent refinement through user-friendly tools. The accuracy and efficiency of ABANICCO have been shown to be superior to existing colour classification and segmentation methods, and it even excels at whole-object detection and segmentation. ABANICCO can be applied to microscopy images to detect stained areas for biopsy quantification, a crucial aspect of cancer research. The second contribution is an automatic, unsupervised tissue-segmentation method that identifies and removes the background and artefacts from microscopy images, thereby improving the performance of more sophisticated image-analysis techniques. This method is robust across diverse images, stains, and acquisition protocols, and requires no training. The third contribution is the development of novel methods for registering histopathology images effectively, striking the right balance between accurate registration and preservation of local morphology, depending on the intended application. As a fourth contribution, the three methods above are combined to create efficient procedures for the complete integration of volumetric data, producing highly interpretable visualizations of all the information present in consecutive tissue biopsy sections.
This data integration can have a major impact on the diagnosis and treatment of various diseases, particularly breast cancer, by enabling early detection, accurate clinical testing, effective treatment selection, and improved communication and engagement with patients. Finally, we apply our findings to multimodal data integration and tissue reconstruction for the accurate analysis of the distribution of chemical elements in tuberculosis, shedding light on the complex interactions between bacteria, host cells, and the immune system during tuberculous infection. This method also addresses problems such as acquisition damage, typical of many imaging modalities. In summary, this thesis showcases the application of classical computer-vision methods to medical microscopy registration and colour analysis to address the unique challenges of this field, emphasizing effective, user-friendly visualization of complex data. We aspire to continue refining our work through extensive technical validation and improved data analysis. The methods presented in this thesis are characterized by their clarity, accessibility, effective data visualization, objectivity, and transparency. These characteristics make them ideal for building robust bridges between artificial-intelligence researchers and clinicians, thereby advancing computational pathology in medical practice and research. Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: María Jesús Ledesma Carbayo. Secretary: Gonzalo Ricardo Ríos Muñoz. Member: Estíbaliz Gómez de Marisca
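The colour-wheel classification idea in the title can be illustrated with a minimal hue-based pixel classifier. The hue bands and the black/white/grey cut-offs below are rough conventional assumptions for illustration only, not ABANICCO's fuzzy colour spaces or multi-label system.

```python
import colorsys

# Assumed hue-band boundaries in degrees (upper edge of each band).
HUE_BANDS = [(15, "red"), (45, "orange"), (70, "yellow"), (160, "green"),
             (260, "blue"), (330, "purple"), (360, "red")]

def colour_category(r, g, b):
    """Classify an RGB pixel (0-255 channels) into a named colour category.

    Achromatic pixels (low value or low saturation) are handled first;
    chromatic pixels are assigned by where their hue angle falls on the wheel.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.15:
        return "black"
    if s < 0.15:
        return "white" if v > 0.85 else "grey"
    hue_deg = h * 360.0
    for upper, name in HUE_BANDS:
        if hue_deg < upper:
            return name
    return "red"
```

Counting the category of every pixel in a stained-slide image then gives a crude stain-area quantification, the kind of task the thesis addresses with far more robust machinery.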