
    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions within each part of this volume are ordered chronologically. The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, together with their Matlab codes. Because more applications of DSmT have emerged since the publication of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of the Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, as well as hybrid techniques mixing deep learning with belief functions
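    The volume's Matlab implementations are not reproduced here; purely as a rough illustration of the PCR5 rule mentioned above, the following Python sketch combines two basic belief assignments over a toy frame of discernment. The function name and example masses are illustrative only, not the book's code.

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments (dicts: frozenset -> mass) with PCR5."""
    combined = {}
    for (x1, w1), (x2, w2) in product(m1.items(), m2.items()):
        inter = x1 & x2
        if inter:
            # Conjunctive consensus on non-empty intersections.
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            # Partial conflict w1*w2 is redistributed back to x1 and x2
            # proportionally to the masses involved (PCR5 principle).
            denom = w1 + w2
            if denom > 0:
                combined[x1] = combined.get(x1, 0.0) + w1 * w1 * w2 / denom
                combined[x2] = combined.get(x2, 0.0) + w2 * w1 * w2 / denom
    return combined

# Toy example on the frame {A, B}.
A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, B: 0.4}
m2 = {A: 0.3, B: 0.7}
print(pcr5(m1, m2))  # masses sum to 1 (up to floating-point error)
```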

    Machine Learning Approaches for Semantic Segmentation on Partly-Annotated Medical Images

    Semantic segmentation of medical images plays a crucial role in assisting medical practitioners in providing accurate and swift diagnoses; nevertheless, deep neural networks require extensive labelled data to learn and generalise appropriately. This is a major issue in medical imagery because most of the datasets are not fully annotated. Training models on partly-annotated datasets generates many predictions that fall in correct but unannotated areas and are therefore categorised as false positives; as a result, standard segmentation metrics and objective functions do not work correctly, affecting the overall performance of the models. In this thesis, the semantic segmentation of partly-annotated medical datasets is studied extensively and thoroughly. The general objective is to improve the segmentation results of medical images via innovative supervised and semi-supervised approaches. The main contributions of this work are the following. Firstly, a new metric, specifically designed for this kind of dataset, provides a reliable score for partly-annotated datasets with positive expert feedback on their generated predictions by exploiting all the confusion matrix values except the false positives. Secondly, an innovative approach generates better pseudo-labels when applying co-training with the disagreement selection strategy; this method expands the pixels in disagreement by using the combined predictions as a guide. Thirdly, original attention mechanisms based on disagreement are designed for two cases: intra-model and inter-model. These attention modules leverage the disagreement between layers (from the same or different model instances) to enhance the overall learning process and generalisation of the models. Lastly, innovative deep supervision methods improve the segmentation results by training neural networks one subnetwork at a time, following the order of the supervision branches. The methods are thoroughly evaluated on several histopathological datasets, showing significant improvements
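    The abstract does not give the metric's formula; purely as a hypothetical illustration of scoring predictions while ignoring false positives, the sketch below builds a pixel-wise score from TP, TN and FN only. The name fp_free_score and the exact formula are assumptions, not the thesis metric.

```python
import numpy as np

def fp_free_score(pred, target):
    """Hypothetical score using only TP, TN and FN; FP is deliberately excluded,
    so correct predictions in unannotated areas are not penalised."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return (tp + tn) / max(tp + tn + fn, 1)

# Toy 2x2 masks: one annotated pixel is missed, one unannotated pixel is predicted.
pred = np.array([[1, 0], [1, 0]])
target = np.array([[1, 1], [0, 0]])
print(fp_free_score(pred, target))  # the FP at position (1,0) does not lower the score
```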

    Robust Brain MRI Image Classification with SIBOW-SVM

    The majority of primary Central Nervous System (CNS) tumors in the brain are among the most aggressive diseases affecting humans. Early detection of brain tumor types, whether benign or malignant, glial or non-glial, is critical for cancer prevention and treatment, ultimately improving human life expectancy. Magnetic Resonance Imaging (MRI) stands as the most effective technique to detect brain tumors by generating comprehensive brain images through scans. However, human examination can be error-prone and inefficient due to the complexity, size, and location variability of brain tumors. Recently, automated classification techniques using machine learning (ML) methods, such as Convolutional Neural Network (CNN), have demonstrated significantly higher accuracy than manual screening, while maintaining low computational costs. Nonetheless, deep learning-based image classification methods, including CNN, face challenges in estimating class probabilities without proper model calibration. In this paper, we propose a novel brain tumor image classification method, called SIBOW-SVM, which integrates the Bag-of-Features (BoF) model with SIFT feature extraction and weighted Support Vector Machines (wSVMs). This new approach effectively captures hidden image features, enabling the differentiation of various tumor types and accurate label predictions. Additionally, the SIBOW-SVM is able to estimate the probabilities of images belonging to each class, thereby providing high-confidence classification decisions. We have also developed scalable and parallelizable algorithms to facilitate the practical implementation of SIBOW-SVM for massive images. As a benchmark, we apply the SIBOW-SVM to a public data set of brain tumor MRI images containing four classes: glioma, meningioma, pituitary, and normal. Our results show that the new method outperforms state-of-the-art methods, including CNN
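    A rough sketch of the generic SIFT bag-of-features plus SVM pipeline that SIBOW-SVM builds on, using OpenCV and scikit-learn. This is not the authors' implementation: a standard probability-calibrated SVC with balanced class weights stands in for the weighted SVMs, and the codebook size and the placeholder variables train_paths/train_labels are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(image_paths):
    """Extract SIFT descriptors from grayscale images."""
    sift = cv2.SIFT_create()
    all_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        all_desc.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return all_desc

def bof_histograms(desc_list, codebook):
    """Quantise each image's descriptors against the codebook and build a
    normalised bag-of-features histogram per image."""
    k = codebook.n_clusters
    hists = np.zeros((len(desc_list), k), dtype=np.float32)
    for i, desc in enumerate(desc_list):
        if len(desc):
            for w in codebook.predict(desc):
                hists[i, w] += 1
            hists[i] /= hists[i].sum()
    return hists

# train_paths / train_labels are placeholders for an MRI image list and its labels.
train_desc = sift_descriptors(train_paths)
codebook = KMeans(n_clusters=200, random_state=0).fit(np.vstack(train_desc))
X_train = bof_histograms(train_desc, codebook)
clf = SVC(kernel="rbf", class_weight="balanced", probability=True).fit(X_train, train_labels)
```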

    ViSTORY: Effective Video Storyboard Generation with Visual Keyframes using Discrete Cosine Transform

    Nowadays, the use of multimedia content is increasing rapidly. Multimedia search engines such as Google, Yahoo, and Bing are available to all users just a click away, and around 500-600 hours of video are uploaded to the Internet every minute. Among the different types of multimedia content, such as text and images, video is the most complicated to index, browse, and retrieve, and it offers the greatest scope for new methods because of its complex and unstructured nature. This paper proposes a new method of video storyboard generation with keyframe extraction in the spatial and frequency domains using the Discrete Cosine Transform (DCT) for video summarization. It presents an empirical appraisal of the extracted visual keyframes, using a t-test analysis to compare the spatial and frequency domains, and responds quickly to user demands by providing static storyboards. The study also proposes a new performance measure, the number of matching frames, computed by analyzing input videos against the standard benchmark video datasets, i.e., the Open Video Project (OVP) and SumMe. Among all the keyframe extraction techniques evaluated, DCT gives the highest accuracy and the best matching rate
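    The paper's exact selection criterion is not detailed in this abstract; the sketch below shows the general idea of DCT-based keyframe extraction with OpenCV, keeping a frame whenever its low-frequency DCT signature differs sufficiently from the last selected keyframe. The resize dimensions, block size, and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def dct_signature(frame, size=16):
    """Low-frequency DCT signature of a frame: grayscale, resize, 2-D DCT,
    keep the top-left (low-frequency) block of coefficients."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = np.float32(cv2.resize(gray, (64, 64)))
    coeffs = cv2.dct(small)
    return coeffs[:size, :size].flatten()

def extract_keyframes(video_path, threshold=0.25):
    """Keep a frame as a keyframe when its DCT signature changes enough
    relative to the previous keyframe (illustrative criterion)."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last_sig, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sig = dct_signature(frame)
        if last_sig is None or np.linalg.norm(sig - last_sig) / (np.linalg.norm(last_sig) + 1e-8) > threshold:
            keyframes.append(idx)
            last_sig = sig
        idx += 1
    cap.release()
    return keyframes

# keyframe_ids = extract_keyframes("input.mp4")
```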

    Novel deep learning architectures for marine and aquaculture applications

    Alzayat Saleh's research applied artificial intelligence and machine learning to autonomously recognise fish and their morphological features from digital images. He created new deep learning architectures that solved various computer vision problems specific to the marine and aquaculture context, and found that these techniques can facilitate aquaculture management and environmental protection. Fisheries and conservation agencies can use his results for better monitoring strategies and sustainable fishing practices

    Analysis of Cellular and Subcellular Morphology using Machine Learning in Microscopy Images

    Human cells undergo various morphological changes due to progression in the cell-cycle or environmental factors. Classification of these morphological states is vital for effective clinical decisions. Automated classification systems based on machine learning models are data-driven and efficient and help to avoid subjective outcomes. However, the efficacy of these models is highly dependent on the feature description along with the amount and nature of the training data. This thesis presents three studies of automated image-based classification of cellular and subcellular morphologies. The first study presents 3D Sorted Random Projections (SRP), which includes the proposed approach to compute 3D plane information for texture description of 3D nuclear images. The proposed 3D SRP is used to classify nuclear morphology and measure changes in heterochromatin, which in turn helps to characterise cellular states. Classification performance evaluated on 3D images of the human fibroblast and prostate cancer cell lines shows that 3D SRP provides better classification than other feature descriptors. The second study is on imbalanced multiclass and single-label classification of blood cell images. The scarcity of minority samples causes a drop in classification performance on minority classes. This study proposes oversampling of minority samples using data augmentation approaches, namely mixup, WGAN-div and novel nonlinear mixup, along with a minority-class-focussed sampling strategy. Classification performance evaluated using F1-score shows that the proposed deep learning framework outperforms state-of-the-art approaches on publicly available images of human T-lymphocyte cells and red blood cells. The third study is on protein subcellular localisation, which is an imbalanced multiclass and multilabel classification problem. In order to handle data imbalance, this study proposes an oversampling method which includes synthetic images constructed using nonlinear mixup and geometric/colour transformations. The regularisation capability of nonlinear mixup is further improved for protein images. In addition, an imbalance-aware sampling strategy is proposed to identify minority and medium classes in the dataset and include them during training. Classification performance evaluated on the Human Protein Atlas Kaggle challenge dataset using F1-score shows that the proposed deep learning framework achieves better predictions than existing methods
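    The thesis's nonlinear mixup variant is not specified in this abstract; as a baseline illustration of mixup-style oversampling for minority classes, here is standard linear mixup in NumPy. The Beta parameter and the toy data are assumptions.

```python
import numpy as np

def mixup_oversample(images, labels, n_new, alpha=0.4, rng=np.random.default_rng(0)):
    """Create n_new synthetic samples by convexly mixing random pairs of
    minority-class images and their one-hot labels (standard linear mixup;
    the thesis proposes a nonlinear variant of this idea)."""
    new_x, new_y = [], []
    for _ in range(n_new):
        i, j = rng.integers(0, len(images), size=2)
        lam = rng.beta(alpha, alpha)
        new_x.append(lam * images[i] + (1 - lam) * images[j])
        new_y.append(lam * labels[i] + (1 - lam) * labels[j])
    return np.stack(new_x), np.stack(new_y)

# Toy usage: 8 minority-class "images" (32x32) with one-hot labels for 3 classes.
x_min = np.random.rand(8, 32, 32).astype(np.float32)
y_min = np.tile(np.array([0.0, 1.0, 0.0], dtype=np.float32), (8, 1))
x_aug, y_aug = mixup_oversample(x_min, y_min, n_new=16)
print(x_aug.shape, y_aug.shape)  # (16, 32, 32) (16, 3)
```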

    Improving Visual Place Recognition in Changing Environments

    For many years, the research community has been highly interested in autonomous robotics and its various applications, from healthcare to manufacturing, transportation to construction, and more. An autonomous robot's key challenge is the ability to determine its location. A fundamental research topic in localization is Visual Place Recognition (VPR), the task of detecting a previously visited location through visual input alone. One specific challenge in VPR is dealing with a place's appearance variation across different visits, which can occur due to viewpoint and environmental changes such as illumination, weather, and seasonal variations. While appearance changes already make VPR challenging, a further difficulty is posed by the resource constraints of many robots employed in real-world applications, which limit the usability of learning-based techniques that enable state-of-the-art performance but are computationally expensive. This thesis aims to combine the need for accurate place recognition in changing environments with low resource usage. The work presented here explores different approaches, from local image feature descriptors to Binary Neural Networks (BNN), to improve the computational and energy efficiency of VPR. The best BNN-based VPR descriptor obtained runs up to one order of magnitude faster than many CNN-based and hand-crafted approaches while maintaining comparable performance and expending little energy to process an image. Specifically, the proposed BNN can process an image 7 to 14 times faster than AlexNet, spending at most 13% of the power when deployed on a low-end ARM platform. The results in this manuscript are presented using a new performance metric and an evaluation framework designed explicitly for VPR applications, with the two-fold purpose of providing meaningful insights into VPR performance and making results easily comparable across the chapters
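    The BNN descriptor itself is not described here; as a minimal illustration of why binary descriptors are cheap to match, the sketch below performs nearest-neighbour place matching with Hamming distance over packed binary descriptors. Descriptor extraction is stubbed out with random bytes, and all names are illustrative.

```python
import numpy as np

def hamming_match(query_desc, db_descs):
    """Return the index of the database place whose packed binary descriptor
    is closest to the query in Hamming distance, plus that distance."""
    xor = np.bitwise_xor(db_descs, query_desc)          # differing bits, packed
    dists = np.unpackbits(xor, axis=1).sum(axis=1)      # per-place bit count
    return int(np.argmin(dists)), int(dists.min())

# Toy database of 5 places, 256-bit descriptors packed into 32 bytes each.
rng = np.random.default_rng(1)
db = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
query = db[3].copy()
query[0] ^= 0b00000101            # flip two bits to simulate appearance change
print(hamming_match(query, db))   # expected best match: place 3, distance 2
```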

    Multi-modal Video Content Understanding

    Video is an important format of information. Humans use videos for a variety of purposes such as entertainment, education, communication, information sharing, and capturing memories. To date, humankind has accumulated a colossal amount of freely available video material online. Manual processing at this scale is simply impossible. To this end, many research efforts have been dedicated to the automatic processing of video content. At the same time, human perception of the world is multi-modal. A human uses multiple senses to understand the environment, objects, and their interactions. When watching a video, we perceive the content via both audio and visual modalities, and removing one of these modalities results in a less immersive experience. Similarly, if the information in the two modalities does not correspond, it may create a sense of dissonance. Therefore, joint modelling of multiple modalities (such as audio, visual, and text) within one model is an active research area. In the last decade, the fields of automatic video understanding and multi-modal modelling have seen exceptional progress due to the ubiquitous success of deep learning models and, more recently, transformer-based architectures in particular. Our work draws on these advances and pushes the state-of-the-art of multi-modal video understanding forward. Applications of automatic multi-modal video processing are broad and exciting! For instance, the content-based textual description of a video (video captioning) may allow a visually- or auditory-impaired person to understand the content and, thus, engage in brighter social interactions. However, prior work in video content description relies on the visual input alone, missing vital information only available in the audio stream. To this end, we proposed two novel multi-modal transformer models that encode audio and visual interactions simultaneously. More specifically, first, we introduced a late-fusion multi-modal transformer that is highly modular and allows the processing of an arbitrary set of modalities. Second, an efficient bi-modal transformer was presented to encode audio-visual cues starting from the lower network layers, allowing richer audio-visual features and, as a result, stronger performance. Another application is automatic visually-guided sound generation, which might help professional sound (foley) designers who spend hours searching a database for relevant audio for a movie scene. Previous approaches for automatic conditional audio generation support only one class (e.g. “dog barking”), while real-life applications may require generation for hundreds of data classes, and one would need to train a model for every data class, which can be infeasible. To bridge this gap, we introduced a novel two-stage model that, first, efficiently encodes audio as a set of codebook vectors (i.e. trains to make “building blocks”) and, then, learns to sample these audio vectors given visual inputs to make a relevant audio track for this visual input. Moreover, we studied the automatic evaluation of the conditional audio generation model and proposed metrics that measure both the quality and the relevance of the generated samples. Finally, as video editing is becoming more common among non-professionals due to the increased popularity of services such as YouTube, automatic assistance during video editing grows in demand, e.g. off-sync detection between audio and visual tracks. Prior work in audio-visual synchronization was devoted to solving the task on lip-syncing datasets with “dense” signals, such as interviews and presentations. In such videos, synchronization cues occur “densely” across time, and it is enough to process just a few tenths of a second to synchronize the tracks. In contrast, open-domain videos mostly have only “sparse” cues that occur just once in a seconds-long video clip (e.g. “chopping wood”). To address this, we: a) proposed a novel dataset with “sparse” sounds; b) designed a model which can efficiently encode seconds-long audio-visual tracks into a small set of “learnable selectors” that are then used for synchronization. In addition, we explored the temporal artefacts that common audio and video compression algorithms leave in data streams. To prevent a model from learning to rely on these artefacts, we introduced a list of recommendations on how to mitigate them. This thesis provides the details of the proposed methodologies as well as a comprehensive overview of advances in relevant fields of multi-modal video understanding. In addition, we provide a discussion of potential research directions that can bring significant contributions to the field
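    A minimal PyTorch sketch of the late-fusion idea described above, with each modality encoded by its own transformer encoder and fused only after temporal pooling. Dimensions, layer counts, and the mean-pooling fusion are illustrative assumptions rather than the thesis architecture.

```python
import torch
import torch.nn as nn

class LateFusionTransformer(nn.Module):
    """Encode each modality separately, then fuse the pooled features late."""
    def __init__(self, dims={"audio": 128, "visual": 2048}, d_model=256, n_classes=10):
        super().__init__()
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.enc = nn.ModuleDict({m: nn.TransformerEncoder(layer, num_layers=2) for m in dims})
        self.head = nn.Linear(d_model * len(dims), n_classes)

    def forward(self, feats):
        pooled = []
        for m, x in feats.items():                # x: (batch, time, feat_dim)
            h = self.enc[m](self.proj[m](x))      # per-modality self-attention only
            pooled.append(h.mean(dim=1))          # temporal mean pooling
        return self.head(torch.cat(pooled, dim=-1))  # fusion happens only here

model = LateFusionTransformer()
out = model({"audio": torch.randn(2, 50, 128), "visual": torch.randn(2, 30, 2048)})
print(out.shape)  # torch.Size([2, 10])
```

    Because each modality only attends to itself, adding or removing a modality amounts to editing the dims dictionary, which is one reading of the "highly modular" late-fusion design; the bi-modal transformer mentioned in the abstract instead lets the streams interact in the lower layers.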

    Machine learning approaches to video activity recognition: from computer vision to signal processing

    The research presented focuses on classification techniques for two different, though related, tasks, such that the second can be considered part of the first: human action recognition in videos and sign language recognition. In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns (CSP) algorithm, commonly used in electroencephalography systems, can produce new features that are useful for the subsequent classification of the videos with supervised classifiers. Different experiments have been carried out on several datasets, including one created during this research from the point of view of a humanoid robot, with the intention of deploying the developed recognition system to improve human-robot interaction. In the second part, the previously developed techniques have been applied to sign language recognition; in addition, a method based on the decomposition of signs is proposed to recognise them, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, informing them of the mistakes they make and the reasons for those mistakes
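    A minimal sketch of the standard Common Spatial Patterns (CSP) filter computation referred to in the first part, for two classes of multi-channel signals, via a generalised eigenproblem in SciPy. This is not the thesis code; the toy data and the number of filters are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """Compute CSP spatial filters from two sets of trials, each of shape
    (n_trials, n_channels, n_samples). Returns filters that maximise the
    variance for one class while minimising it for the other."""
    def mean_cov(trials):
        covs = [np.cov(t) for t in trials]      # channel-by-channel covariance per trial
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)                 # ascending eigenvalues
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])  # both ends of the spectrum
    return eigvecs[:, picks].T                  # (2 * n_filters, n_channels)

# Toy data: 20 trials per class, 8 "channels", 200 samples per trial.
rng = np.random.default_rng(0)
trials_a = rng.standard_normal((20, 8, 200))
trials_b = 2.0 * rng.standard_normal((20, 8, 200))
W = csp_filters(trials_a, trials_b)
print(W.shape)  # (6, 8)
```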