405 research outputs found

    On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval

    The problem of cross-modal retrieval from multimedia repositories is considered. This problem addresses the design of retrieval systems that support queries across content modalities, e.g., using an image to search for texts. A mathematical formulation is proposed, equating the design of cross-modal retrieval systems to that of isomorphic feature spaces for different content modalities. Two hypotheses are then investigated, regarding the fundamental attributes of these spaces. The first is that low-level cross-modal correlations should be accounted for. The second is that the space should enable semantic abstraction. Three new solutions to the cross-modal retrieval problem are then derived from these hypotheses: correlation matching (CM), an unsupervised method which models cross-modal correlations; semantic matching (SM), a supervised technique that relies on semantic representation; and semantic correlation matching (SCM), which combines both. An extensive evaluation of retrieval performance is conducted to test the validity of the hypotheses. All approaches are shown successful for text retrieval in response to image queries and vice versa. It is concluded that both hypotheses hold, in a complementary form, although the evidence in favor of the abstraction hypothesis is stronger than that for correlation.
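
A classical way to instantiate the correlation matching (CM) idea is canonical correlation analysis, which learns maximally correlated subspaces for the two modalities. The sketch below, with made-up image and text features, illustrates that idea rather than reproducing the paper's exact pipeline:

```python
# A minimal sketch of correlation matching via CCA; features and
# dimensions are invented for illustration.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 200
image_feats = rng.normal(size=(n_pairs, 64))   # stand-in image descriptors
text_feats = rng.normal(size=(n_pairs, 32))    # stand-in text descriptors

# Learn projections of both modalities into a shared, correlated subspace.
cca = CCA(n_components=10)
cca.fit(image_feats, text_feats)

# Project a query image and all texts, then rank texts by distance.
img_proj, txt_proj = cca.transform(image_feats, text_feats)
query = img_proj[0]
dists = np.linalg.norm(txt_proj - query, axis=1)
ranking = np.argsort(dists)          # nearest text documents first
print("top-5 retrieved text indices:", ranking[:5])
```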

    Quantum Cognitively Motivated Decision Fusion for Video Sentiment Analysis

    Video sentiment analysis as a decision-making process is inherently complex, involving the fusion of decisions from multiple modalities and the cognitive biases this fusion introduces. Inspired by recent advances in quantum cognition, we show that the sentiment judgment from one modality can be incompatible with the judgment from another, i.e., the order matters and they cannot be jointly measured to produce a final decision. The cognitive process thus exhibits "quantum-like" biases that cannot be captured by classical probability theories. Accordingly, we propose a fundamentally new, quantum cognitively motivated fusion strategy for predicting sentiment judgments. In particular, we formulate utterances as quantum superposition states of positive and negative sentiment judgments, and uni-modal classifiers as mutually incompatible observables, on a complex-valued Hilbert space with positive-operator-valued measures. Experiments on two benchmark datasets illustrate that our model significantly outperforms various existing decision-level fusion approaches and a range of state-of-the-art content-level fusion approaches. The results also show that the concept of incompatibility allows effective handling of all combination patterns, including extreme cases that are wrongly predicted by all uni-modal classifiers. Comment: the uploaded version is a preprint of the accepted AAAI-21 paper.
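
To make the order effect concrete, the toy sketch below represents an utterance as a complex superposition state and two uni-modal classifiers as rank-1 projective measurements in rotated bases. It illustrates only why incompatible measurements make order matter, not the paper's full POVM-based model; all numbers are invented:

```python
# Toy illustration: two incompatible sentiment "measurements" on a
# complex superposition state give order-dependent joint probabilities.
import numpy as np

def projector(theta):
    """Rank-1 projector onto the 'positive' outcome of a measurement
    whose basis is rotated by theta radians."""
    v = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    return np.outer(v, v.conj())

# An utterance as a superposition of negative/positive judgments.
psi = np.array([0.8, 0.6j], dtype=complex)   # unit norm: 0.64 + 0.36 = 1

A = projector(0.0)         # classifier A as an observable
B = projector(np.pi / 5)   # classifier B, in a rotated (incompatible) basis

def prob_both_positive(first, second, state):
    """P(first=+, then second=+): measure, collapse the state, measure again."""
    p1 = np.vdot(state, first @ state).real
    collapsed = (first @ state) / np.sqrt(p1)
    p2 = np.vdot(collapsed, second @ collapsed).real
    return p1 * p2

print("P(A+, then B+):", prob_both_positive(A, B, psi))  # ~0.419
print("P(B+, then A+):", prob_both_positive(B, A, psi))  # ~0.356: order matters
```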

    Decoding Medical Dramas: Identifying Isotopies through Multimodal Classification

    The rise in processing power, combined with advancements in machine learning, has resulted in an increase in the use of computational methods for automated content analysis. Although human coding is more effective for handling the complex variables at the core of media studies, audiovisual content is often understudied because analyzing it is difficult and time-consuming. The present work sets out to address this issue by experimenting with unimodal and multimodal transformer-based models to automatically classify segments from the popular medical TV drama Grey's Anatomy into three narrative categories, also referred to as isotopies. The study explores two classification approaches: the first employs a single multiclass classifier, while the second uses the one-vs-the-rest approach to decompose the multiclass task. We investigate both approaches in unimodal and multimodal settings, with the aim of identifying the most effective combination of the two. The results of the experiments are promising: the multiclass multimodal approach achieves an F1 score of 0.723, a noticeable improvement over the F1 of 0.684 obtained by the best unimodal approach, a one-vs-the-rest model based on text. This supports the hypothesis that visual and textual modalities can complement each other and yield a better-performing model, highlighting the potential of multimodal approaches for narrative classification in the context of medical dramas.
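
A minimal sketch of the two classification set-ups being compared, using logistic regression over synthetic concatenated text and visual feature vectors in place of the transformer encoders used in the study:

```python
# Multiclass vs. one-vs-the-rest classification over fused (text + visual)
# features; all data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, d_text, d_vis = 600, 128, 64
X = np.hstack([rng.normal(size=(n, d_text)),     # text embeddings
               rng.normal(size=(n, d_vis))])     # visual embeddings
y = rng.integers(0, 3, size=n)                   # three isotopy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Approach 1: a single multiclass classifier.
multi = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Approach 2: decompose into three one-vs-the-rest binary problems.
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

for name, clf in [("multiclass", multi), ("one-vs-rest", ovr)]:
    print(name, "macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```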

    Multi-Modal Similarity Learning for 3D Deformable Registration of Medical Images

    Even though the prospect of fusing images produced by different medical imaging systems is highly contemplated, putting it into practice faces a theoretical hurdle: the definition of a similarity measure between the images. Efforts in this field have proved successful for select pairs of image types; however, defining a suitable similarity between images regardless of their origin is one of the biggest challenges in deformable registration. In this thesis, we chose to develop generic approaches that allow the comparison of any two given modalities. Recent advances in machine learning permitted us to provide innovative solutions to this very challenging problem. To tackle the problem of comparing incommensurable data, we chose to view it as a data embedding problem, where one embeds all the data in a common space in which comparison is possible. To this end, we explored the projection of one image space onto the image space of the other, as well as the projection of both image spaces onto a common space in which the comparison calculations are conducted. This was done by studying the correspondences between image features in a pre-aligned dataset. In pursuit of these goals, new methods were developed for image regression as well as for multi-modal metric learning. The resulting learned similarities are then incorporated into a registration method based on discrete optimization, which mitigates the need for a differentiable criterion. Lastly, we investigate a new method that discards the requirement of a pre-aligned image database, requiring only data annotated (segmented) by a physician. Experiments are conducted on two challenging medical image datasets (pre-aligned MRI images and PET/CT images) to justify the benefits of our approach.
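
A minimal sketch of the "project one image space onto the other" idea: learn a linear least-squares map from modality-A patch features to modality-B patch features from pre-aligned pairs, then score similarity in the common (B) space. Features, dimensions, and the noise model are invented for illustration and do not reproduce the thesis's learned metrics:

```python
# Learning a cross-modal similarity from pre-aligned feature pairs.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, d_a, d_b = 500, 40, 40
feats_a = rng.normal(size=(n_pairs, d_a))            # e.g. MRI patch features
W_true = rng.normal(size=(d_a, d_b))
feats_b = feats_a @ W_true + 0.1 * rng.normal(size=(n_pairs, d_b))  # aligned CT patches

# Least-squares regression of modality-B features from modality-A features.
W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)

def learned_similarity(patch_a, patch_b):
    """Negative distance between the mapped A patch and the B patch."""
    return -np.linalg.norm(patch_a @ W - patch_b)

print(learned_similarity(feats_a[0], feats_b[0]))   # aligned pair: high
print(learned_similarity(feats_a[0], feats_b[1]))   # mismatched pair: lower
```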

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community; moreover, searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical contents of online databases such as Last.fm have expanded exponentially, requiring substantial manual effort to manage and keep them updated. Therefore, the demand for innovative and adaptable search mechanisms that can be personalized according to users' emotional state has gained increasing consideration in recent years. This thesis concentrates on addressing the music emotion recognition problem by presenting several classification models fed by textual features as well as audio attributes extracted from the music. In this study, we build both supervised and semi-supervised classification designs under four research experiments that address the emotional role of audio features, such as tempo, acousticness, and energy, and also the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multi-modal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, together with an unlabeled big-data collection of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built by applying several algorithms to the cross-validated data using Python. In conclusion, the best performance attained was 44.2% accuracy when employing only audio features, whereas with the use of textual features better performances were observed, with 46.3% and 51.3% accuracy scores for the supervised and semi-supervised learning paradigms, respectively. Finally, even though we created a comprehensive feature set combining audio and textual features, this approach did not display any significant improvement in classification performance.
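
A minimal sketch of the combined feature set described above: TF-IDF vectors from lyrics concatenated with a few audio attributes and fed to a supervised classifier. The tiny corpus, audio values, and labels are made up, and a real pipeline would scale the audio columns:

```python
# Fusing textual (TF-IDF) and audio features for emotion classification.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lyrics = ["love and joy in the sunshine", "tears fall in the cold rain",
          "dancing all night long", "alone again in the dark"]
audio = np.array([[0.90, 0.10, 120],     # energy, acousticness, tempo
                  [0.20, 0.80, 70],
                  [0.95, 0.05, 128],
                  [0.15, 0.70, 65]])
labels = ["happy", "sad", "happy", "sad"]

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(lyrics)                 # textual modality
X = hstack([X_text, csr_matrix(audio)])              # combined feature set

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```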

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation imagery, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, is openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, the ability of such data to identify materials (pixel-wise classification) remains limited, due to noisy collection environments, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules (a self-adversarial module, an interactive learning module, and a label propagation module) that learn to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
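
The label propagation mechanism can be illustrated in isolation: labels from a few annotated samples spread over a graph built from feature similarities. The sketch below uses scikit-learn's LabelPropagation on synthetic clusters; X-ModalNet differs in that its graph is built from deep high-level features and updated during training:

```python
# Semi-supervised label propagation over a feature-similarity graph.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, size=(50, 16)),     # class-0 cluster
                   rng.normal(3, 1, size=(50, 16))])    # class-1 cluster
labels = np.full(100, -1)          # -1 marks unlabeled samples
labels[:3], labels[50:53] = 0, 1   # only six labeled samples

lp = LabelPropagation(kernel="rbf", gamma=0.1).fit(feats, labels)
accuracy = (lp.transduction_ == np.repeat([0, 1], 50)).mean()
print("accuracy of propagated labels:", accuracy)
```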

    Semantic Assisted, Multiresolution Image Retrieval in 3D Brain MR Volumes

    Content Based Image Retrieval (CBIR) is an important research area in the field of multimedia information retrieval. The application of CBIR in the medical domain has been attempted before; however, the use of CBIR in medical diagnostics is a daunting task. The goal of diagnostic medical image retrieval is to provide diagnostic support by displaying relevant past cases, along with proven pathologies as ground truths. Moreover, medical image retrieval can be extremely useful as a training tool for medical students and residents, for follow-up studies, and for research purposes. Despite the presence of an impressive amount of research in the area of CBIR, its acceptance for mainstream and practical applications is quite limited. The research in CBIR has mostly been conducted as an academic pursuit, rather than to provide the solution to a need. For example, many researchers have proposed CBIR systems where the image database consists of a heterogeneous mixture of man-made objects and natural scenes, while ignoring the practical uses of such systems. Furthermore, the intended use of a CBIR system is important in addressing the problem of the "Semantic Gap". Indeed, the requirements for the semantics in an image retrieval system for pathological applications are quite different from those intended for training and education. Moreover, many researchers have underestimated the level of accuracy required for a useful and practical image retrieval system. The human eye is extremely dexterous and efficient in visual information processing; consequently, CBIR systems should be highly precise in image retrieval so as to be useful to human users. Unsurprisingly, due to these and other reasons, most of the proposed systems have not found useful real-world applications. In this dissertation, an attempt is made to address the challenging problem of developing a retrieval system for medical diagnostics applications. More specifically, a system for semantic retrieval of Magnetic Resonance (MR) images in 3D brain volumes is proposed. The proposed retrieval system has the potential to be useful to clinical experts where the human eye may fail. Previously proposed systems used imprecise segmentation and feature extraction techniques, which are not suitable for the precise matching requirements of image retrieval in this application domain. This dissertation uses multiscale representation for image retrieval, which is robust against noise and MR inhomogeneity. In order to achieve a higher degree of accuracy in the presence of misalignments, an image registration based retrieval framework is developed. Additionally, to speed up the retrieval system, a fast discrete wavelet based feature space is proposed. Further improvement in speed is achieved by semantically classifying the human brain into various "Semantic Regions", using an SVM based machine learning approach. A novel and fast identification system is proposed for identifying a 3D volume given a 2D image slice; to this end, we used SVM output probabilities for ranking and identification of patient volumes. The proposed retrieval systems are tested not only under noise conditions but also on healthy and abnormal cases, resulting in promising retrieval performance with respect to multi-modality, accuracy, speed, and robustness. This dissertation furnishes medical practitioners with a valuable set of tools for semantic retrieval of 2D images, where the human eye may fail. Specifically, the proposed retrieval algorithms provide medical practitioners with the ability to retrieve 2D MR brain images accurately and monitor disease progression in various lobes of the human brain, with the capability to monitor progression in multiple patients simultaneously. Additionally, the proposed semantic classification scheme can be extremely useful for semantic-based categorization, clustering, and annotation of images in MR brain databases. This research framework may evolve in a natural progression towards developing more powerful and robust retrieval systems. It also provides a foundation for researchers in semantic-based retrieval systems on how to expand existing toolsets for solving retrieval problems.
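
A minimal sketch of two ingredients of the pipeline described above: a discrete wavelet feature vector for a 2D slice (here via PyWavelets) and SVM output probabilities used for ranking. The random "slices" and two-class set-up stand in for real MR data; the dissertation's exact features are not reproduced:

```python
# Discrete wavelet features for 2D slices + SVM probability ranking.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(slice_2d, wavelet="haar", level=2):
    """Flatten a multi-level discrete wavelet decomposition into a vector."""
    coeffs = pywt.wavedec2(slice_2d, wavelet, level=level)
    parts = [coeffs[0].ravel()]                 # coarse approximation
    for cH, cV, cD in coeffs[1:]:               # detail sub-bands per level
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

rng = np.random.default_rng(0)
slices = [rng.normal(m, 1, size=(32, 32)) for m in [0] * 40 + [2] * 40]
X = np.array([wavelet_features(s) for s in slices])
y = np.array([0] * 40 + [1] * 40)               # two "semantic regions"

svm = SVC(probability=True).fit(X, y)

# Rank candidate semantic regions for a query slice by class probability.
query = wavelet_features(rng.normal(2, 1, size=(32, 32)))
probs = svm.predict_proba(query.reshape(1, -1))[0]
print("class probabilities used for ranking:", probs)
```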