MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System
Multi-modal sarcasm detection has attracted much recent attention.
Nevertheless, the existing benchmark (MMSD) has shortcomings that hinder the
development of reliable multi-modal sarcasm detection systems: (1) MMSD contains
spurious cues that lead models to learn biases; (2) the negative samples in MMSD
are not always reasonable. To address these issues, we introduce MMSD2.0, a
corrected dataset that fixes the shortcomings of MMSD by removing the spurious
cues and re-annotating the unreasonable samples. In addition, we present a novel
framework called multi-view CLIP that is
capable of leveraging multi-grained cues from multiple perspectives (i.e.,
text, image, and text-image interaction view) for multi-modal sarcasm
detection. Extensive experiments show that MMSD2.0 is a valuable benchmark for
building reliable multi-modal sarcasm detection systems and that multi-view CLIP
can significantly outperform the previous best baselines.
Comment: Accepted by ACL 2023 Findings
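As a minimal sketch of the multi-view idea described above (assuming CLIP text-token and image-patch features are extracted upstream; the module names, dimensions, and late-fusion rule are illustrative, not the authors' released code), a text view, an image view, and a cross-attention text-image interaction view each produce logits that are averaged:

```python
import torch
import torch.nn as nn

class MultiViewSarcasmHead(nn.Module):
    """Hypothetical three-view head over precomputed CLIP features."""
    def __init__(self, dim: int = 512, num_classes: int = 2):
        super().__init__()
        self.text_head = nn.Linear(dim, num_classes)    # text-only view
        self.image_head = nn.Linear(dim, num_classes)   # image-only view
        # interaction view: text tokens attend over image patches
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.interaction_head = nn.Linear(dim, num_classes)

    def forward(self, text_tokens, image_patches):
        # text_tokens: (B, Lt, dim), image_patches: (B, Li, dim)
        text_global = text_tokens.mean(dim=1)
        image_global = image_patches.mean(dim=1)
        interaction, _ = self.cross_attn(text_tokens, image_patches, image_patches)
        interaction_global = interaction.mean(dim=1)
        # late fusion: average the per-view logits
        return (self.text_head(text_global)
                + self.image_head(image_global)
                + self.interaction_head(interaction_global)) / 3.0

# usage with random stand-in features
head = MultiViewSarcasmHead()
logits = head(torch.randn(4, 32, 512), torch.randn(4, 50, 512))  # (4, 2)
```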
Semantic multimedia analysis using knowledge and context
The difficulty of semantic multimedia analysis can be attributed to the
extended diversity in form and appearance exhibited by the majority of
semantic concepts and the difficulty of expressing them using a finite number
of patterns. In meeting this challenge there has been a scientific debate
on whether the problem should be addressed from the perspective of using
overwhelming amounts of training data to capture all possible instantiations
of a concept, or from the perspective of using explicit knowledge about
the concepts’ relations to infer their presence. In this thesis we address
three problems of pattern recognition and propose solutions that combine
the knowledge extracted implicitly from training data with the knowledge
provided explicitly in structured form. First, we propose a Bayesian network
(BN) modeling approach that defines a conceptual space where both
domain-related evidence and evidence derived from content analysis can be
jointly considered to support or disprove a hypothesis. The use of this space
leads to significant gains in performance compared to analysis methods that
cannot handle combined knowledge. Then, we present an unsupervised method
that exploits the collective nature of social media to automatically obtain
large amounts of annotated image regions. By proving that the quality of the
obtained samples can be almost as good as that of manually annotated images
when working with large datasets, we contribute significantly towards scalable
object detection. Finally, we introduce a method that treats images, visual
features, and tags as the three observable variables of an aspect model and
extracts a set of latent topics that incorporates the semantics of both the
visual and tag information spaces. By showing that the cross-modal
dependencies of tagged images can be exploited to increase the semantic
capacity of the resulting space, we advocate the use of all existing
information facets in the semantic analysis of social media.
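As a rough sketch of that final contribution (not the thesis' exact pLSA-style formulation), each image can be represented as a bag of quantized visual words concatenated with a bag of tags, and a topic model fit over the joint vocabulary recovers latent topics spanning both information facets; LDA and the dimensions below are stand-in assumptions:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# toy stand-in data: visual-word histograms and binary tag occurrences per image
n_images, n_visual_words, n_tags = 100, 200, 50
rng = np.random.default_rng(0)
visual_counts = rng.poisson(1.0, size=(n_images, n_visual_words))
tag_counts = rng.integers(0, 2, size=(n_images, n_tags))
X = np.hstack([visual_counts, tag_counts])  # joint visual + tag observation matrix

lda = LatentDirichletAllocation(n_components=10, random_state=0)
image_topics = lda.fit_transform(X)   # per-image topic mixtures (the latent semantic space)
topic_terms = lda.components_         # per-topic weights over visual words and tags
```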
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before February 1st, 2017
Annotation-free Audio-Visual Segmentation
The objective of Audio-Visual Segmentation (AVS) is to locate sounding
objects within visual scenes by accurately predicting pixelwise segmentation
masks. In this paper, we present the following contributions: (i), we propose a
scalable and annotation-free pipeline for generating artificial data for the
AVS task. We leverage existing image segmentation and audio datasets to draw
links between category labels, image-mask pairs, and audio samples, which
allows us to easily compose (image, audio, mask) triplets for training AVS
models; (ii), we introduce a novel Audio-Aware Transformer (AuTR) architecture
that features an audio-aware query-based transformer decoder. This architecture
enables the model to search for sounding objects with the guidance of audio
signals, resulting in more accurate segmentation; (iii), we present extensive
experiments conducted on both synthetic and real datasets, which demonstrate
the effectiveness of training AVS models with synthetic data generated by our
proposed pipeline. Additionally, our proposed AuTR architecture exhibits
superior performance and strong generalization ability on public benchmarks.
The project page is https://jinxiang-liu.github.io/anno-free-AVS/.
Comment: Under review
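A simplified sketch of the annotation-free pipeline in contribution (i): image-mask pairs from a segmentation dataset and clips from an audio dataset are linked through shared category labels to compose (image, audio, mask) triplets. The data structures and pairing policy below are illustrative assumptions, not the authors' exact pipeline:

```python
import random
from collections import defaultdict

def compose_avs_triplets(image_mask_pairs, audio_clips, n_per_pair=1, seed=0):
    """image_mask_pairs: list of (image_path, mask_path, label);
    audio_clips: list of (audio_path, label)."""
    rng = random.Random(seed)
    audio_by_label = defaultdict(list)
    for audio_path, label in audio_clips:
        audio_by_label[label].append(audio_path)

    triplets = []
    for image_path, mask_path, label in image_mask_pairs:
        candidates = audio_by_label.get(label, [])
        # pair each image-mask sample with audio of the same category
        for audio_path in rng.sample(candidates, min(n_per_pair, len(candidates))):
            triplets.append((image_path, audio_path, mask_path))
    return triplets

# e.g. a "dog" image-mask pair gets matched with a "dog" audio clip
triplets = compose_avs_triplets(
    [("dog_01.jpg", "dog_01_mask.png", "dog")],
    [("bark_07.wav", "dog"), ("meow_02.wav", "cat")],
)
```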
M3PT: A Multi-Modal Model for POI Tagging
POI tagging aims to annotate a point of interest (POI) with some informative
tags, which facilitates many services related to POIs, including search,
recommendation, and so on. Most of the existing solutions neglect the
significance of POI images and seldom fuse the textual and visual features of
POIs, resulting in suboptimal tagging performance. In this paper, we propose a
novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced
POI tagging by fusing the target POI's textual and visual features and
precisely matching the resulting multi-modal representations. Specifically, we
first devise a domain-adaptive image encoder (DIE) to obtain the image
embeddings aligned to their gold tags' semantics. Then, in M3PT's text-image
fusion module (TIF), the textual and visual representations are fully fused
into the POIs' content embeddings for the subsequent matching. In addition, we
adopt a contrastive learning strategy to further bridge the gap between the
representations of different modalities. To evaluate the tagging models'
performance, we have constructed two high-quality POI tagging datasets from the
real-world business scenario of Ali Fliggy. On these datasets, we conducted
extensive experiments to demonstrate our model's advantage over uni-modal and
multi-modal baselines, and to verify the effectiveness of important components
in M3PT, including DIE, TIF, and the contrastive learning strategy.
Comment: Accepted by KDD 2023
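As an illustrative sketch of the matching idea (not the released M3PT code; the encoders, dimensions, and concatenation-plus-projection fusion are assumptions), a POI's fused text-image embedding is scored against all tag embeddings and a contrastive loss pulls it toward its gold tag:

```python
import torch
import torch.nn.functional as F

def poi_tag_contrastive_loss(text_emb, image_emb, tag_emb, gold_tag_ids, proj, temperature=0.07):
    # text_emb, image_emb: (B, d) POI features; tag_emb: (T, d) embeddings of all tags
    fused = proj(torch.cat([text_emb, image_emb], dim=-1))  # fused POI content embedding
    fused = F.normalize(fused, dim=-1)
    tags = F.normalize(tag_emb, dim=-1)
    logits = fused @ tags.t() / temperature                  # (B, T) POI-tag similarities
    return F.cross_entropy(logits, gold_tag_ids)             # pull each POI toward its gold tag

B, T, d = 8, 100, 256
proj = torch.nn.Linear(2 * d, d)
loss = poi_tag_contrastive_loss(torch.randn(B, d), torch.randn(B, d),
                                torch.randn(T, d), torch.randint(0, T, (B,)), proj)
```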
CMNER: A Chinese Multimodal NER Dataset based on Social Media
Multimodal Named Entity Recognition (MNER) is a pivotal task designed to
extract named entities from text with the support of pertinent images.
Nonetheless, a notable paucity of data for Chinese MNER has considerably
impeded the progress of this natural language processing task within the
Chinese domain. Consequently, in this study, we compile a Chinese Multimodal
NER dataset (CMNER) utilizing data sourced from Weibo, China's largest social
media platform. Our dataset encompasses 5,000 Weibo posts paired with 18,326
corresponding images. The entities are classified into four distinct
categories: person, location, organization, and miscellaneous. We perform
baseline experiments on CMNER, and the outcomes underscore the effectiveness of
incorporating images for NER. Furthermore, we conduct cross-lingual experiments
on the publicly available English MNER dataset (Twitter2015), and the results
substantiate our hypothesis that Chinese and English multimodal NER data can
mutually enhance the performance of NER models.
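For illustration only (field names assumed, not the official CMNER schema), a record in such a dataset could pair a tokenized Weibo post, its attached images, and BIO labels over the four entity categories:

```python
# hypothetical record layout for one post (not the official schema)
example = {
    "text": ["北京", "欢迎", "你"],                 # tokenized Weibo post ("Beijing welcomes you")
    "images": ["weibo_001_a.jpg", "weibo_001_b.jpg"],
    "labels": ["B-LOC", "O", "O"],                  # BIO tags aligned to the tokens
}

entity_types = {"PER", "LOC", "ORG", "MISC"}        # person, location, organization, miscellaneous
assert all(tag == "O" or tag.split("-")[1] in entity_types for tag in example["labels"])
```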
Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Multimodal machine learning is a vibrant multi-disciplinary research field
that aims to design computer agents with intelligent capabilities such as
understanding, reasoning, and learning through integrating multiple
communicative modalities, including linguistic, acoustic, visual, tactile, and
physiological messages. With the recent interest in video understanding,
embodied autonomous agents, text-to-image generation, and multisensor fusion in
application domains such as healthcare and robotics, multimodal machine
learning has brought unique computational and theoretical challenges to the
machine learning community given the heterogeneity of data sources and the
interconnections often found between modalities. However, the breadth of
progress in multimodal research has made it difficult to identify the common
themes and open questions in the field. By synthesizing a broad range of
application domains and theoretical frameworks from both historical and recent
perspectives, this paper is designed to provide an overview of the
computational and theoretical foundations of multimodal machine learning. We
start by defining two key principles of modality heterogeneity and
interconnections that have driven subsequent innovations, and propose a
taxonomy of 6 core technical challenges: representation, alignment, reasoning,
generation, transference, and quantification, covering historical and recent
trends. Recent technical achievements will be presented through the lens of
this taxonomy, allowing researchers to understand the similarities and
differences across new approaches. We end by motivating several open problems
for future research as identified by our taxonomy.