A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews
Despite the recent advances in opinion mining for written reviews, few works
have tackled the problem on other sources of reviews. In light of this issue,
we propose a multi-modal approach for mining fine-grained opinions from video
reviews that is able to determine the aspects of the item under review that are
being discussed and the sentiment orientation towards them. Our approach works
at the sentence level without the need for time annotations and uses features
derived from the audio, video and language transcriptions of its contents. We
evaluate our approach on two datasets and show that leveraging the video and
audio modalities consistently provides increased performance over text-only
baselines, providing evidence these extra modalities are key in better
understanding video reviews.Comment: Second Grand Challenge and Workshop on Multimodal Language ACL 202
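The abstract does not detail the model architecture, but the setup it describes (per-sentence features from three modalities, joint prediction of aspect and sentiment orientation) maps naturally onto a late-fusion classifier. A minimal sketch in PyTorch follows; every dimension, head size, and name here is an illustrative assumption, not the paper's actual design.

# Hypothetical sketch of sentence-level multimodal fusion for aspect and
# sentiment prediction. The abstract does not specify the model; this shows
# one common pattern: project each modality's per-sentence features into a
# shared space, concatenate, and attach two classification heads.
import torch
import torch.nn as nn

class MultimodalOpinionModel(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=512,
                 hidden=256, n_aspects=10, n_sentiments=3):  # assumed sizes
        super().__init__()
        # One projection per modality's sentence-level feature vector.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)
        self.fuse = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, hidden), nn.ReLU())
        # Two heads: which aspect is being discussed, and its sentiment.
        self.aspect_head = nn.Linear(hidden, n_aspects)
        self.sentiment_head = nn.Linear(hidden, n_sentiments)

    def forward(self, text_feat, audio_feat, video_feat):
        h = torch.cat([self.text_proj(text_feat),
                       self.audio_proj(audio_feat),
                       self.video_proj(video_feat)], dim=-1)
        h = self.fuse(h)
        return self.aspect_head(h), self.sentiment_head(h)

# Usage: a batch of four sentences, one feature vector per modality each.
model = MultimodalOpinionModel()
aspect_logits, sentiment_logits = model(
    torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 512))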
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
Large foundation models can exhibit unique capabilities depending on the
domain of data they are trained on. While these domains are generic, they may
only barely overlap. For example, visual-language models (VLMs) are trained on
Internet-scale image captions, but large language models (LMs) are further
trained on Internet-scale text with no images (e.g., from spreadsheets to SAT
questions). As a result, these models store different forms of commonsense
knowledge across different domains. In this work, we show that this model
diversity is symbiotic, and can be leveraged to build AI systems with
structured Socratic dialogue -- in which new multimodal tasks are formulated as
a guided language-based exchange between different pre-existing foundation
models, without additional finetuning. In the context of egocentric perception,
we present a case study of Socratic Models (SMs) that can provide meaningful
results for complex tasks such as generating free-form answers to contextual
questions about egocentric video, by formulating video Q&A as short story Q&A,
i.e. summarizing the video into a short story, then answering questions about
it. Additionally, SMs can generate captions for Internet images, and are
competitive with state-of-the-art on zero-shot video-to-text retrieval with
42.8 R@1 on MSR-VTT 1k-A. SMs demonstrate how to compose foundation models
zero-shot to capture new multimodal functionalities, without domain-specific
data collection. Prototypes are available at socraticmodels.github.io.
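The composition the abstract describes for egocentric video Q&A (summarize the video into a short story, then answer questions about the story) can be sketched as a plain pipeline over two black-box models. The function names and prompts below are assumptions for illustration, not the authors' released API; the prototypes at socraticmodels.github.io are the authoritative reference.

# Minimal sketch of the Socratic Models pattern: a VLM turns frames into
# language, an LM compresses the captions into a short story, and the same
# LM answers questions against the story rather than the raw video.
from typing import Callable, List

def video_qa(frames: List[object],
             vlm_caption: Callable[[object], str],
             lm_complete: Callable[[str], str],
             question: str) -> str:
    # Step 1: caption sampled frames with a visual-language model.
    captions = [vlm_caption(f) for f in frames]
    # Step 2: summarize the captions into a short story with a language model.
    story = lm_complete(
        "Summarize these frame captions as a short first-person story:\n"
        + "\n".join(captions))
    # Step 3: formulate video Q&A as short story Q&A.
    return lm_complete(f"Story: {story}\nQ: {question}\nA:")

# Usage with trivial stand-ins for the two models (no finetuning involved):
answer = video_qa(
    frames=["frame0", "frame1"],
    vlm_caption=lambda f: f"a person holds an object in {f}",
    lm_complete=lambda prompt: prompt.splitlines()[-1],  # echo stub
    question="What is the person holding?")
print(answer)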