
    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in mathematics and in different fields of application, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals since the fourth volume was disseminated in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of the book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
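The PCR5 rule mentioned above resolves conflict by redistributing each conflicting mass product back to the two focal elements involved, proportionally to their masses. A minimal sketch of the basic two-source rule (illustrative only; function and variable names are our own, and the volume covers many refinements beyond this form):

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two belief mass functions with the basic PCR5 rule.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Non-conflicting products go to the intersection (conjunctive part);
    each conflicting product a*b is split back between the two
    conflicting focal elements proportionally to a and b.
    """
    out = {}
    for (x, a), (y, b) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:                              # conjunctive (non-conflicting) part
            out[inter] = out.get(inter, 0.0) + a * b
        elif a + b > 0:                        # proportional conflict redistribution
            out[x] = out.get(x, 0.0) + a * a * b / (a + b)
            out[y] = out.get(y, 0.0) + a * b * b / (a + b)
    return out

# Two sources over a frame {A, B}:
A, B = frozenset("A"), frozenset("B")
fused = pcr5({A: 0.6, B: 0.4}, {A: 0.7, B: 0.3})
# fused[A] ≈ 0.718, fused[B] ≈ 0.282; masses still sum to 1
```

Unlike Dempster's rule, no mass is lost to normalization: the conflict is returned to its sources, which is what preserves the neutrality properties discussed in the improved PCR5/PCR6 variants.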

    Medical Systems Data Security and Biometric Authentication in Public Cloud Servers

    Advances in distributed computing and virtualization have allowed cloud computing to establish itself as a popular data management and storage option for organizations. However, unclear safeguards and practices, as well as the evolution of legislation around privacy and data protection, make data security one of the main concerns in adopting this paradigm. Another important aspect hindering the success of cloud computing is the ability to ensure the digital identity of users and protect the virtual environment through logical access controls while avoiding the compromise of the authentication mechanism or its storage medium. This paper therefore proposes a system that addresses data security by preventing unauthorized access to data stored in a public cloud through a fragmentation technique combined with a NoSQL database. Moreover, a system for managing and authenticating users with multimodal biometrics is also proposed, along with a mechanism to protect the biometric features themselves. Compared with encryption, the proposed fragmentation method shows better latency performance, highlighting its potential in environments with strict latency requirements, such as healthcare IT infrastructure.
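The paper's exact fragmentation scheme is not detailed in this abstract. As an illustrative sketch only, the general idea — no single store holds enough of a record to reconstruct it alone — can be shown with simple byte striping plus an integrity tag (all names hypothetical, not the authors' design):

```python
import hashlib

def fragment(record: bytes, n: int):
    """Split a record into n byte-striped fragments.

    Fragment i receives bytes i, i+n, i+2n, ... of the record, so
    each fragment alone is an unreadable interleaved slice. A SHA-256
    tag of the whole record allows an integrity check on reassembly.
    """
    frags = [record[i::n] for i in range(n)]
    tag = hashlib.sha256(record).hexdigest()
    return frags, tag

def reassemble(frags, tag):
    """Interleave the fragments back and verify the integrity tag."""
    n = len(frags)
    out = bytearray(sum(len(f) for f in frags))
    for i, f in enumerate(frags):
        out[i::n] = f
    record = bytes(out)
    if hashlib.sha256(record).hexdigest() != tag:
        raise ValueError("integrity check failed")
    return record
```

In a deployment along the lines the paper describes, each fragment would live in a different NoSQL collection or node, so a breach of one store leaks only an undecipherable stripe — and, unlike encryption, reassembly needs no key-derivation or cipher work on the read path, which is where the latency advantage would come from.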

    Incorporating fuzzy-based methods to deep learning models for semantic segmentation

    This thesis focuses on improving the semantic segmentation workflow by reducing model complexity, improving segmentation accuracy, and making segmentation results more reliable and robust. Semantic segmentation refers to pixel-level classification: the objective is to classify each pixel of the input image into one of several categories. The process typically consists of three steps: model construction, training, and application. In this thesis, fuzzy-based techniques are applied at each of these three steps to improve the semantic segmentation workflow. Widely used semantic segmentation models normally extract and aggregate spatial information and channel-wise features simultaneously. Achieving promising segmentation performance then requires numerous learnable parameters, which increase the model's complexity. Decoupling the information fusion tasks is therefore an important direction in the design of semantic segmentation models. Fuzzy integrals are effective for fusing information, and some special fuzzy integral operators, such as Ordered Weighted Averaging (OWA), are parameter-free and easy to implement in deep learning models. Therefore, a novel fuzzy integral module is designed that includes an additional convolutional layer for feature map dimensionality reduction and an OWA layer for information fusion across feature channels. The proposed fuzzy integral module can be flexibly integrated into existing semantic segmentation models, helping to reduce parameters and save memory. Once a segmentation model has been constructed, the collected data is used to train it. The precise delineation of object boundaries is a key aspect of semantic segmentation, so to make the model pay more attention to boundaries, a dedicated boundary-wise loss function is desirable in the training phase.
Fuzzy rough sets are normally used to measure the relationship between two sets. In this thesis, to improve boundary accuracy, fuzzy rough sets are leveraged to calculate a boundary-wise loss, defined as the difference between the boundary sets of the predicted image and the ground truth image. After training with the proposed loss, the next step is to apply the pre-trained segmentation model to new images. One challenge is that in real-world applications there are no ground truth images with which to quantify segmentation quality. It is therefore crucial to design a quality quantification algorithm that infers image-level segmentation performance and improves the credibility of semantic segmentation models. In this thesis, a novel quality quantification algorithm based on fuzzy uncertainty is proposed as part of the model inference process, without access to ground truth images. Moreover, to explore the practical application of the proposed algorithm in clinical settings, the thesis goes beyond public datasets and presents a real-world case study on cardiac MRI segmentation. Additionally, since clinicians also report their level of confidence when annotating ground truth images (human-based uncertainty), the correlation between human-based uncertainty and AI-based uncertainty (calculated by the proposed quality quantification algorithm) is investigated in depth. Comprehensive experiments demonstrate that integrating fuzzy-based techniques can enhance the efficiency, accuracy, and reliability of semantic segmentation models compared to models without such methods.
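The OWA operators the thesis builds on aggregate their inputs after sorting them, applying a fixed weight profile to the ordered values — which is what makes such a fusion layer parameter-free once the profile is chosen. A minimal NumPy sketch of OWA fusion across feature channels (an illustration of the operator only, not the thesis's actual module; the weight profile shown is an assumed example):

```python
import numpy as np

def owa_channel_fusion(fmap, weights):
    """Fuse feature channels with an Ordered Weighted Averaging operator.

    fmap:    (C, H, W) feature map.
    weights: length-C vector summing to 1, applied to the channel
             activations after sorting them in descending order at each
             spatial position. The weights attach to ranks, not to
             specific channels, so no weight is learned per channel.
    """
    c, h, w = fmap.shape
    srt = np.sort(fmap.reshape(c, -1), axis=0)[::-1]  # descending per pixel
    fused = weights @ srt                             # (H*W,) weighted sum of ranks
    return fused.reshape(h, w)

# An illustrative linearly decaying profile emphasising the strongest
# channel responses (uniform weights would reduce OWA to the mean,
# and weight [1, 0, ..., 0] to the channel-wise max):
C = 4
w = np.arange(C, 0, -1, dtype=float)
w /= w.sum()
```

Because the operator interpolates between max, mean, and min purely through its fixed weight profile, dropping it into a network in place of a learned 1x1 fusion convolution removes those parameters entirely, consistent with the memory savings the abstract describes.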

    Ultrasound Guidance in Perioperative Care


    DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System

    A key challenge in eXplainable Artificial Intelligence is the well-known tradeoff between the transparency of an algorithm (i.e., how easily a human can directly understand the algorithm, as opposed to receiving a post-hoc explanation) and its accuracy. We report on the design of a new deep network that achieves improved transparency without sacrificing accuracy. We design a deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy logic and deep learning models, and show that DCNFIS performs as accurately as three existing convolutional neural networks on four well-known datasets. We furthermore show that DCNFIS outperforms state-of-the-art deep fuzzy systems. We then exploit the transparency of fuzzy logic by deriving explanations, in the form of saliency maps, from the fuzzy rules encoded in DCNFIS. We investigate the properties of these explanations in greater depth using the Fashion-MNIST dataset.


    Mathematical Fuzzy Logic in the Emerging Fields of Engineering, Finance, and Computer Sciences

    Mathematical fuzzy logic (MFL) specifically targets many-valued logic and has significantly contributed to the logical foundations of fuzzy set theory (FST). It explores the computational and philosophical rationale behind uncertainty due to imprecision against the backdrop of traditional mathematical logic. Since uncertainty is present in almost every real-world application, it is essential to develop novel approaches and tools for processing it efficiently. This book collects the publications in the Special Issue “Mathematical Fuzzy Logic in the Emerging Fields of Engineering, Finance, and Computer Sciences”, which aims to cover theoretical and practical aspects of MFL and FST. Specifically, this book addresses several problems, such as:
    - Industrial optimization problems
    - Multi-criteria decision-making
    - Financial forecasting problems
    - Image processing
    - Educational data mining
    - Explainable artificial intelligence, etc.

    Fuzzy Layered Convolutional Neural Network for Feature-Level Fusion Based on Multimodal Sentiment Classification

    Multimodal sentiment analysis (MSA) is one of the core research topics of natural language processing (NLP). MSA remains challenging for researchers and is equally complicated for a machine to perform, since it involves learning opinions, emotions, and attitudes from audio-visual data. In other words, it is necessary to draw on diverse modalities to extract opinions and identify emotions. This can be achieved through modality data fusion, such as feature-level fusion. A typical machine learning approach for fusing such diverse modalities while maintaining high performance is deep learning, particularly the Convolutional Neural Network (CNN), which can handle tasks of great intricacy and difficulty. In this paper, we present a CNN architecture with an integrated fuzzy-based layer for MSA, an approach not yet explored for improving the accuracy of CNNs on diverse inputs. Experiments conducted on a benchmark multimodal dataset, MOSI, achieve 37.5% and 81% accuracy on seven-class and binary classification respectively, an improvement over the typical CNN, which obtained 28.9% and 78%, respectively.