
    Accelerated genetic algorithm based on search-space decomposition for change detection in remote sensing images

    Detecting changed areas between two or more remote sensing images is a key technique in remote sensing. It usually consists of generating and analyzing a difference image in order to produce a change map. Analyzing the difference image to obtain the change map is essentially a binary classification problem and can be solved by optimization algorithms. This paper proposes an accelerated genetic algorithm based on search-space decomposition (SD-aGA) for change detection in remote sensing images. Firstly, the BM3D algorithm is used to preprocess the remote sensing images to enhance useful information and suppress noise. The difference image is then obtained using the logarithmic ratio method. Secondly, after saliency detection, the fuzzy c-means algorithm is applied to the salient region of the difference image to identify the changed, unchanged and undetermined pixels. Only the undetermined pixels are considered by the optimization algorithm, which reduces the search space significantly. Inspired by the divide-and-conquer strategy, the difference image is decomposed into sub-blocks with a method similar to down-sampling, where only the undetermined pixels are analyzed and optimized by SD-aGA in parallel. The category labels of the undetermined pixels in each sub-block are optimized according to an improved objective function that incorporates neighborhood information. Finally, the category labels of all pixels in the sub-blocks are remapped to their original positions in the difference image and merged globally. Decision fusion is conducted on each pixel based on the decision results in its local neighborhood to produce the final change map. The proposed method is tested on six diverse remote sensing image benchmark datasets and compared against six state-of-the-art methods. Segmentations of a synthetic image and a natural image corrupted by different types of noise are also carried out for comparison. Results demonstrate the excellent performance of the proposed SD-aGA in handling noise and detecting the changed areas accurately. In particular, compared with the traditional genetic algorithm, SD-aGA achieves a much higher detection accuracy with much less computational time.
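    A minimal sketch (not the authors' code) of the pre-classification step described above: build a logarithmic-ratio difference image, run a small fuzzy c-means with two clusters, and split pixels three ways into changed / unchanged / undetermined, leaving only the undetermined pixels for the genetic-algorithm stage. The membership threshold of 0.6 and the toy images are illustrative assumptions.

```python
import numpy as np

def log_ratio(im1, im2, eps=1e-6):
    """Logarithmic-ratio difference image of two co-registered images."""
    return np.abs(np.log((im1 + eps) / (im2 + eps)))

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Tiny fuzzy c-means on a 1-D feature (pixel intensity)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)           # membership matrix (N, c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)            # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u_new = 1.0 / (d ** (2 / (m - 1)))               # standard FCM membership update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            return centers, u_new
        u = u_new
    return centers, u

def three_way_labels(diff_img, confident=0.6):
    """Label pixels 0=unchanged, 1=changed, -1=undetermined (left to the optimizer)."""
    x = diff_img.ravel().astype(float)
    centers, u = fuzzy_cmeans_1d(x)
    changed_cluster = int(np.argmax(centers))            # higher log-ratio => changed
    p_changed = u[:, changed_cluster]
    labels = np.full(x.size, -1, dtype=int)
    labels[p_changed >= confident] = 1
    labels[p_changed <= 1 - confident] = 0
    return labels.reshape(diff_img.shape)

# Toy example: two small "images" differing only in the upper-left corner.
rng = np.random.default_rng(1)
im1 = rng.uniform(1, 2, size=(32, 32))
im2 = im1.copy()
im2[:8, :8] += 2.0                                       # simulated change region
labels = three_way_labels(log_ratio(im1, im2))
print("changed:", (labels == 1).sum(), "undetermined:", (labels == -1).sum())
```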

    Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities

    Recent advancements in AI applications to healthcare have shown incredible promise in surpassing human performance in diagnosis and disease prognosis. With the increasing complexity of AI models, however, concerns have grown regarding their opacity, potential biases, and the need for interpretability. To ensure trust and reliability in AI systems, especially in clinical risk prediction models, explainability becomes crucial. Explainability usually refers to an AI system's ability to provide a robust interpretation of its decision-making logic, or of the decisions themselves, to human stakeholders. In clinical risk prediction, other aspects of explainability, such as fairness, bias, trust, and transparency, also represent important concepts beyond interpretability alone. In this review, we address the relationships between these concepts, as they are often used together or interchangeably. The review also discusses recent progress in developing explainable models for clinical risk prediction, highlighting the importance of quantitative and clinical evaluation and validation across multiple modalities common in clinical practice. It emphasizes the need for external validation and for combining diverse interpretability methods to enhance trust and fairness. Adopting rigorous testing, such as using synthetic datasets with known generative factors, can further improve the reliability of explainability methods. Open-access and code-sharing resources are essential for transparency and reproducibility, enabling the growth and trustworthiness of explainability research. While challenges exist, an end-to-end approach to explainability in clinical risk prediction, incorporating stakeholders from clinicians to developers, is essential for success.
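    An illustrative sketch (not taken from the survey) of the kind of rigorous test it recommends: build a synthetic dataset whose generative factors are known, fit a model, and check that an explanation method, here permutation importance, assigns its highest scores to the truly informative features. The feature counts, noise level, and model choice are arbitrary assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
# Ground truth: only features 0 and 1 drive the mock "clinical risk" label.
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# A reliable explanation method should rank features 0 and 1 at the top.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
ranked = np.argsort(imp.importances_mean)[::-1]
print("Top-ranked features:", ranked[:3])
```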

    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the explanation-format dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, as well as by new regulations.

    Computational Intelligence Techniques in Visual Pattern Recognition

    Ph.D. (Doctor of Philosophy)

    Machine Learning Methods for Effectively Discovering Complex Relationships in Graph Data

    Graphs are extensively employed in many systems due to their capability to capture the interactions (edges) among data (nodes) in many real-life scenarios. Social networks, biological networks and molecular graphs are some of the domains where data have inherent graph-structural information. The resulting graphs can be used to make predictions with Machine Learning (ML), such as node classification, link prediction and graph classification. However, existing ML algorithms hold the core assumption that data instances are independent of each other, which prevents graph information from being incorporated. The irregular, variable-sized nature of such non-Euclidean data also makes learning the underlying patterns of a graph more complicated. One approach is to convert the graph information into a lower-dimensional space and use traditional learning methods on the reduced space. Meanwhile, Deep Learning often outperforms traditional ML thanks to convolutional and recurrent layers, which exploit correlations in spatial and temporal data, respectively. This underlines the importance of taking data interrelationships into account, and Graph Convolutional Networks (GCNs) build on this insight to exploit graph structure for better inference in both node-centric and graph-centric applications. In this dissertation, graph-based ML prediction is addressed in terms of both node classification and link prediction tasks. First, GCNs are thoroughly studied and compared with other graph embedding methods on biological networks. Next, we present several new GCN algorithms to improve prediction performance on biomedical networks and medical imaging tasks. A circularRNA (circRNA)-disease association network is modeled for both node classification and link prediction to predict diseases relevant to circRNAs, demonstrating the effectiveness of graph convolutional learning. A GCN-based chest X-ray image classifier outperforms state-of-the-art transfer learning methods. The graph representation is then used to analyze the feature dependencies of data and select an optimal feature subset that respects the original data structure. Finally, the usability of this algorithm is discussed for identifying disease-specific genes by exploiting gene-gene interactions.
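    A minimal numpy sketch of the graph convolution that GCN-based methods build on (Kipf & Welling-style normalized neighborhood aggregation); it is not the dissertation's model. The toy adjacency matrix and the feature and weight shapes are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # inverse sqrt of node degrees
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)          # aggregate neighbours, then ReLU

# Toy graph: 4 nodes, 3 input features per node, 2 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))       # node feature matrix
W = rng.normal(size=(3, 2))       # layer weights (would be learned in practice)
print(gcn_layer(A, H, W).shape)   # (4, 2) node embeddings
```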

    Salient Feature Selection Using Feed-Forward Neural Networks and Signal-to-Noise Ratios with a Focus Toward Network Threat Detection and Risk Level Identification

    Most communication in the modern era takes place over some type of cyber network, including telecommunications, banking, public utilities, and health systems. Information gained from illegitimate network access can be used to create catastrophic effects at the individual, corporate, national, and even international levels, making cyber security a top priority. Cyber networks frequently encounter volumes of network traffic too large for efficient real-time threat detection. Reducing the amount of information a network monitor needs to determine the presence of a threat would likely help keep networks more secure. This thesis uses network traffic data captured during the Department of Defense Cyber Defense Exercise to determine which features of network traffic are salient to detecting and classifying threats. After generating a set of 248 features from the capture data, feed-forward artificial neural networks were trained and signal-to-noise ratios were used to prune the feature set to 18 features while still achieving an accuracy ranging from 83% to 94%. The salient features primarily come from the transport-layer section of the network traffic data and involve the client/server connection parameters, the size of the initial data sent, and the number of segments and/or bytes sent in the flow.
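    A hedged sketch of signal-to-noise-ratio feature screening with a feed-forward network, in the spirit of the thesis but not its actual code: append a pure-noise feature, train a small MLP, and score each real feature by its first-layer weight energy relative to the noise feature, pruning low-SNR features. The exact SNR formula, the network size, and the cut-off are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def snr_feature_scores(X, y, hidden=(32,), seed=0):
    """Per-feature SNR (dB) relative to an injected random-noise feature."""
    rng = np.random.default_rng(seed)
    Xs = StandardScaler().fit_transform(X)
    X_aug = np.hstack([Xs, rng.normal(size=(Xs.shape[0], 1))])   # noise column
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                        random_state=seed).fit(X_aug, y)
    w = net.coefs_[0]                         # (n_features + 1, n_hidden) input weights
    energy = (w ** 2).sum(axis=1)
    noise_energy = energy[-1]                 # weight energy of the noise feature
    return 10.0 * np.log10(energy[:-1] / noise_energy)

# Toy demonstration: only the first 3 of 20 synthetic features carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
scores = snr_feature_scores(X, y)
print("Highest-SNR features:", np.argsort(scores)[::-1][:3])
# In the thesis setting, the cut would keep the top-k features (e.g. 18) by SNR
# and retrain on the pruned set, checking that accuracy is preserved.
```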

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary space between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information-processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Recent advancement in Disease Diagnostic using machine learning: Systematic survey of decades, comparisons, and challenges

    Computer-aided diagnosis (CAD), a vibrant medical imaging research field, is expanding quickly. Because errors in medical diagnostic systems might lead to seriously misleading medical treatments, major efforts have been made in recent years to improve computer-aided diagnosis applications. The use of machine learning in computer-aided diagnosis is crucial. A simple, fixed equation may produce false indications for structures such as organs; learning from examples is therefore a vital component of pattern recognition. Pattern recognition and machine learning in the biomedical area promise to increase the precision of disease detection and diagnosis. They also make the decision-making process more objective. Machine learning provides a practical method for creating elegant and autonomous algorithms to analyze high-dimensional and multimodal biomedical data. This review article examines machine-learning algorithms for detecting diseases, including hepatitis, diabetes, liver disease, dengue fever, and heart disease. It draws attention to the collection of machine learning techniques and algorithms employed in studying these conditions and the ensuing decision-making process.

    Cancer prediction using graph-based gene selection and explainable classifier

    Several Artificial Intelligence-based models have been developed for cancer prediction. In spite of the promise of artificial intelligence, there are very few models which bridge the gap between traditional human-centered prediction and the potential future of machine-centered cancer prediction. In this study, an efficient and effective model is developed for gene selection and cancer prediction. Moreover, this study proposes an artificial intelligence decision system that provides physicians with a simple and human-interpretable set of rules for cancer prediction. In contrast to previous deep learning-based cancer prediction models, which are difficult to explain to physicians due to their black-box nature, the proposed prediction model is based on a transparent and explainable decision forest. The performance of the developed approach is compared to three state-of-the-art cancer prediction methods, namely TAGA, HPSO and LL. The reported results on five cancer datasets indicate that the developed model can improve the accuracy of cancer prediction and reduce the execution time.
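    A minimal sketch of the "simple, human-interpretable set of rules" idea, using a shallow decision tree on toy gene-expression-like data; it stands in for, and is not, the paper's decision-forest model. The gene names, labels, and thresholds are fabricated purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(6)]
X = rng.normal(size=(300, len(genes)))                 # mock expression profiles
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)    # mock cancer / non-cancer label

# A depth-limited tree keeps the rule set short enough for a physician to read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=genes))          # IF/THEN-style decision rules
```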