
    A survey, review, and future trends of skin lesion segmentation and classification

    Get PDF
    Computer-aided Diagnosis or Detection (CAD) for skin lesion analysis is an emerging field of research with the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges of manual inspection. This article provides a comprehensive literature survey and review of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized along several dimensions that are central to the development of CAD systems: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and handling of class imbalance); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems on small datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are reported to inform future research on developing an automated and robust CAD system for skin lesion analysis.
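    The survey's evaluation-criteria dimension centers on overlap metrics for segmentation masks. As a minimal, self-contained illustration (not code from any of the surveyed works), the sketch below computes the Dice coefficient and the Jaccard index (IoU) for a pair of hypothetical binary lesion masks using NumPy.

```python
# Minimal sketch of two overlap metrics commonly reported for lesion
# segmentation (Dice coefficient and Jaccard index / IoU). The binary
# masks below are hypothetical placeholders, not data from the survey.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|P intersect T| / (|P| + |T|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |P intersect T| / |P union T| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Toy 4x4 masks standing in for a predicted and a ground-truth lesion mask.
    pred = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    gt = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
    print(f"Dice: {dice_coefficient(pred, gt):.3f}, IoU: {jaccard_index(pred, gt):.3f}")
```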

    A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks

    Full text link
    The transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data. Unlike conventional neural networks or updated versions of Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM), transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing. As a result, transformer-based models have attracted substantial interest among researchers in the field of artificial intelligence. This can be attributed to their immense potential and remarkable achievements, not only in Natural Language Processing (NLP) tasks but also in a wide range of domains, including computer vision, audio and speech processing, healthcare, and the Internet of Things (IoT). Although several survey papers have been published highlighting the transformer's contributions in specific fields, architectural differences, or performance evaluations, there is still a significant absence of a comprehensive survey encompassing its major applications across various domains. Therefore, we undertook the task of filling this gap by conducting an extensive survey of transformer models proposed from 2017 to 2022. Our survey identifies the top five application domains for transformer-based models, namely: NLP, Computer Vision, Multi-Modality, Audio and Speech Processing, and Signal Processing. We analyze the impact of highly influential transformer-based models in these domains and subsequently classify them based on their respective tasks using a proposed taxonomy. Our aim is to shed light on the existing potential and future possibilities of transformers for enthusiastic researchers, thus contributing to a broader understanding of this groundbreaking technology.
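    The self-attention mechanism referred to above is the scaled dot-product attention of the original transformer. The following NumPy sketch is a minimal illustration of that computation for a single sequence; the dimensions, random inputs, and single-head formulation are hypothetical simplifications, not any specific surveyed model.

```python
# Minimal NumPy sketch of scaled dot-product self-attention; sizes and
# random inputs are hypothetical, chosen only to make the example runnable.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V for one sequence."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # pairwise token-to-token scores
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ v                         # context-aware token representations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 5, 16, 8           # hypothetical sizes
    x = rng.normal(size=(seq_len, d_model))
    w_q = rng.normal(size=(d_model, d_k))
    w_k = rng.normal(size=(d_model, d_k))
    w_v = rng.normal(size=(d_model, d_k))
    print(self_attention(x, w_q, w_k, w_v).shape)   # (5, 8)
```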

    Ontology Enrichment from Free-text Clinical Documents: A Comparison of Alternative Approaches

    Get PDF
    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships, as well as difficulty in updating the ontology as domain knowledge changes. Methodologies developed in the fields of Natural Language Processing (NLP), Information Extraction (IE), Information Retrieval (IR), and Machine Learning (ML) provide techniques for automating the enrichment of ontologies from free-text documents. In this dissertation, I extended these methodologies to biomedical ontology development. First, I reviewed existing methodologies and systems developed in the fields of NLP, IR, and IE, and discussed how existing methods can benefit the development of biomedical ontologies. This review, the first of its kind, was published in the Journal of Biomedical Informatics. Second, I compared the effectiveness of three methods from two different approaches, the symbolic (the Hearst method) and the statistical (the Church and Lin methods), using clinical free-text documents. Third, I developed a methodological framework for Ontology Learning (OL) evaluation and comparison. This framework permits evaluation of the two types of OL approaches, encompassing the three OL methods. The significance of this work is as follows: 1) The results from the comparative study showed the potential of these methods for biomedical ontology enrichment. For the two targeted domains (NCIT and RadLex), the Hearst method achieved average new-concept acceptance rates of 21% and 11%, respectively. The Lin method produced a 74% acceptance rate for NCIT; the Church method, 53%. As a result of this study (published in Methods of Information in Medicine), many suggested candidates have been incorporated into the NCIT; 2) The evaluation framework is flexible and general enough to analyze the performance of ontology enrichment methods in many domains, thus expediting the process of automation and minimizing the likelihood that key concepts and relationships would be missed as domain knowledge evolves.
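    To make the symbolic approach concrete, the sketch below applies one classic Hearst pattern ("X such as Y1, Y2 and Y3") to a single sentence with a regular expression. It is an illustrative simplification under an assumed pattern and input, not the dissertation's actual extraction system or its evaluation against NCIT and RadLex.

```python
# Illustrative sketch of the symbolic (Hearst-pattern) approach: one
# "such as" pattern extracting candidate hypernym/hyponym pairs.
# The regex and example sentence are hypothetical simplifications.
import re

# "<hypernym> such as <hyponym>, <hyponym> and <hyponym>"
SUCH_AS = re.compile(r"(?P<hypernym>\w+(?: \w+)?) such as (?P<hyponyms>[^.;]+)", re.IGNORECASE)

def hearst_such_as(sentence: str):
    """Return candidate (hypernym, hyponym) pairs found in one sentence."""
    pairs = []
    for match in SUCH_AS.finditer(sentence):
        hypernym = match.group("hypernym").lower()
        # Split the enumeration on commas and the final "and"/"or".
        for hyponym in re.split(r",|\band\b|\bor\b", match.group("hyponyms")):
            hyponym = hyponym.strip().lower()
            if hyponym:
                pairs.append((hypernym, hyponym))
    return pairs

if __name__ == "__main__":
    # Hypothetical sentence; a real pipeline would run this over clinical notes.
    text = "The report describes pulmonary findings such as nodules, opacities and effusion."
    print(hearst_such_as(text))
    # [('pulmonary findings', 'nodules'), ('pulmonary findings', 'opacities'),
    #  ('pulmonary findings', 'effusion')]
```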

    Combining local features and region segmentation: methods and applications

    Full text link
    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: 23-01-2020. Access to the full text of this thesis was embargoed until 23-07-2021.
A huge number of proposals have been developed in the area of computer vision for information extraction from images and its further use. Among the most prevalent solutions are those known as local features, which detect points or areas of the image with certain characteristics of interest and describe them using information from their (local) environment. Regions also stand out in this area, and this work has focused in particular on region segmentation algorithms, whose objective is to group the information of the image according to different criteria. Despite the enormous potential of these techniques, and their proven success in a number of applications, their definition implies a series of functional limitations that have prevented them from exporting their capabilities to other application areas. This thesis aims to promote the use of these tools in such applications, and therefore improve the state of the art, by proposing a framework for developing new solutions. Specifically, the main hypothesis of the project is that the capacities of local features and region segmentation algorithms are complementary, and thus their combination, carried out in the right way, maximizes them while minimizing their limitations. The main objective, and therefore the main contribution of the thesis, is to validate this hypothesis by proposing a framework for developing new solutions that combine local features and region segmentation algorithms, obtaining solutions with improved capabilities. As the hypothesis proposes combining two techniques, the validation process has been carried out in two steps. First, the use case of region segmentation algorithms enhancing local features: to verify the viability and success of this combination, a specific proposal, SP-SIFT, was developed and validated both experimentally and in a real application scenario, specifically as the main technique of object tracking algorithms. Second, the use case of enhancing region segmentation algorithms with local features: to verify the viability and success of this combination, a specific proposal, LF-SLIC, was developed and validated both experimentally and in a real application scenario, specifically as the main technique of a pigmented skin lesion segmentation algorithm. The conceptual results proved that the techniques improve at the capabilities level. The application results proved that these improvements allow the use of these techniques in applications where they were previously unsuccessful. Thus, the hypothesis can be considered validated, and therefore the definition of a framework for the development of new techniques with improved capabilities can be considered successful.
In conclusion, the main contribution of the thesis is the framework for the combination of techniques, embodied in its two specific proposals: local features enhanced with region segmentation algorithms, and region segmentation algorithms enhanced with local features; and in the success achieved in their applications. The work described in this Thesis was carried out within the Video Processing and Understanding Lab at the Department of Tecnología Electrónica y de las Comunicaciones, Escuela Politécnica Superior, Universidad Autónoma de Madrid (from 2014 to 2019). It was partially supported by the Spanish Government (TEC2014-53176-R, HAVideo).
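    As a rough, hypothetical illustration of combining region segmentation with local features (not the thesis's SP-SIFT or LF-SLIC algorithms), the sketch below computes SLIC superpixels with scikit-image, detects SIFT keypoints with OpenCV, and groups the keypoints by the superpixel that contains them. It assumes opencv-python (>= 4.4, where SIFT_create is available) and scikit-image are installed.

```python
# Illustrative combination of region segmentation (SLIC superpixels) and
# local features (SIFT keypoints); parameters below are hypothetical choices.
from collections import defaultdict

import cv2
import numpy as np
from skimage import data
from skimage.segmentation import slic

def keypoints_per_superpixel(image_rgb: np.ndarray, n_segments: int = 200):
    """Map superpixel label -> list of SIFT keypoints lying inside it."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
    keypoints = cv2.SIFT_create().detect(gray, None)

    grouped = defaultdict(list)
    for kp in keypoints:
        # kp.pt is (x, y); clamp to image bounds before indexing the label map.
        row = min(int(round(kp.pt[1])), labels.shape[0] - 1)
        col = min(int(round(kp.pt[0])), labels.shape[1] - 1)
        grouped[int(labels[row, col])].append(kp)
    return labels, grouped

if __name__ == "__main__":
    image = data.astronaut()                   # sample RGB image bundled with scikit-image
    labels, grouped = keypoints_per_superpixel(image)
    print(f"{labels.max() + 1} superpixels, "
          f"{sum(len(v) for v in grouped.values())} keypoints assigned")
```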

    Automatic analysis of retinal images to aid in the diagnosis and grading of diabetic retinopathy

    Get PDF
    Diabetic retinopathy (DR) is the most common complication of diabetes mellitus and one of the leading causes of preventable blindness in the adult working population. Visual loss can be prevented in the early stages of DR, when treatments are effective. Therefore, early diagnosis is paramount. However, DR may be clinically asymptomatic until the advanced stage, when vision is already affected and treatment may become difficult. For this reason, diabetic patients should undergo regular eye examinations through screening programs. Traditionally, DR screening programs are run by trained specialists through visual inspection of retinal images. However, this manual analysis is time-consuming and expensive. With the increasing incidence of diabetes and the limited number of clinicians and healthcare resources, manual early detection of DR becomes unviable. For this reason, computer-aided diagnosis (CAD) systems are required to assist specialists in providing a fast, reliable diagnosis, reducing the workload and the associated costs. We hypothesize that the application of novel, automatic algorithms for fundus image analysis could contribute to the early diagnosis of DR. Consequently, the main objective of the present Doctoral Thesis is to study, design, and develop novel methods based on the automatic analysis of fundus images to aid in the screening, diagnosis, and treatment of DR. In order to achieve the main goal, we built a private database and used five public retinal databases: DRIMDB, DIARETDB1, DRIVE, Messidor, and Kaggle. The stages of fundus image processing covered in this Thesis are: retinal image quality assessment (RIQA), the location of the optic disc (OD) and the fovea, the segmentation of red lesions (RLs) and exudates (EXs), and DR severity grading. RIQA was studied with two different approaches. The first approach was based on the combination of novel, global features. Results achieved 91.46% accuracy, 92.04% sensitivity, and 87.92% specificity using the private database. The second approach to RIQA was based on deep learning. We achieved 95.29% accuracy with the private database and 99.48% accuracy with the DRIMDB database. The location of the OD and the fovea was performed using a combination of saliency maps. The proposed methods were evaluated on the private database and the public databases DRIVE, DIARETDB1, and Messidor. For the OD, we achieved 100% accuracy for all databases except Messidor (99.50%). As for the fovea location, we also reached 100% accuracy for all databases except Messidor (99.67%). The joint segmentation of RLs and EXs was accomplished by decomposing the fundus image into layers. Results were computed per pixel and per image. Using the private database, 88.34% per-image accuracy (ACCi) was reached for RL detection and 95.41% ACCi for EX detection. An additional method was proposed for the segmentation of RLs based on superpixels. Evaluating this method on the private database, we obtained 84.45% ACCi. Results were validated using the DIARETDB1 database. Finally, we proposed a deep learning framework for automatic DR severity grading. The method was based on a novel attention mechanism that performs separate attention over the dark and the bright structures of the retina. The Kaggle DR detection dataset was used for development and validation. The International Clinical DR Scale, which comprises 5 DR severity levels, was considered. Classification results over all classes achieved 83.70% accuracy and a Quadratic Weighted Kappa of 0.78.
The methods proposed in this Doctoral Thesis form a complete, automatic DR screening system, contributing to the early detection of DR. In this way, diabetic patients could receive better attention for their ocular health, avoiding vision loss. In addition, the workload of specialists could be relieved while healthcare costs are reduced.
Escuela de Doctorado. Doctorado en Tecnologías de la Información y las Telecomunicaciones.
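    The Quadratic Weighted Kappa reported for the DR grading stage is a standard agreement metric for ordinal labels. A minimal sketch using scikit-learn is shown below; the grade vectors are hypothetical placeholders, not results from the thesis.

```python
# Minimal sketch of the Quadratic Weighted Kappa for 5-level DR grading,
# computed with scikit-learn; the labels below are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

# International Clinical DR Scale: 0 = no DR ... 4 = proliferative DR.
y_true = [0, 0, 1, 2, 2, 3, 4, 4, 1, 0]   # hypothetical reference grades
y_pred = [0, 1, 1, 2, 3, 3, 4, 3, 1, 0]   # hypothetical model grades

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"Quadratic Weighted Kappa: {qwk:.3f}")
```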

    AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD)

    Get PDF
    This book is a collection of the accepted papers presented at the Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD), held in conjunction with the 36th AAAI Conference on Artificial Intelligence in 2022. During AIBSD 2022, attendees addressed existing issues of data bias and scarcity in Artificial Intelligence and discussed potential solutions in real-world scenarios. A set of papers presented at AIBSD 2022 was selected for further publication and is included in this book.

    Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

    Full text link
    Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human-like cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems that collectively gather and share data to enable intelligent decision-making and automation. This research explores the opportunities and challenges of achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of IoT and the critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it delves into AGI fundamentals, culminating in the formulation of a conceptual framework for AGI's seamless integration within IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts. Furthermore, the paper addresses the constraints imposed by limited computing resources, the intricacies associated with large-scale IoT communication, and the critical concerns pertaining to security and privacy.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Get PDF
    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multipath effects into WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
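    As a hedged sketch of what an attention-based BiLSTM over CSI windows can look like (the layer sizes, window length, number of subcarriers, and the simple additive attention used here are assumptions, not the paper's exact ABiLSTM), the following Keras model classifies a CSI time series into 12 activity classes.

```python
# Hypothetical attention-based BiLSTM classifier for CSI windows (Keras).
from tensorflow.keras import layers, models

def build_abilstm(timesteps: int = 200, n_subcarriers: int = 90, n_classes: int = 12):
    """Bidirectional LSTM with additive attention over timesteps (assumed sizes)."""
    inputs = layers.Input(shape=(timesteps, n_subcarriers))                     # one CSI window
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)    # (batch, T, 128)

    # Attention: score each timestep, normalize with softmax, take the weighted sum.
    scores = layers.Dense(1, activation="tanh")(x)      # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)            # attention weights over timesteps
    context = layers.Dot(axes=1)([weights, x])          # weighted sum -> (batch, 1, 128)
    context = layers.Flatten()(context)                 # (batch, 128)

    outputs = layers.Dense(n_classes, activation="softmax")(context)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_abilstm().summary()
```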