1,886 research outputs found
Exploiting multimedia in creating and analysing multimedia Web archives
The data on the web and the social web are inherently multimedia, consisting of a mixture of textual, visual and audio modalities. Community memories embodied on the web and social web contain a rich mixture of data from these modalities. In many ways, the web is the greatest resource ever created by humankind. However, due to its dynamic and distributed nature, web content changes, appears and disappears on a daily basis. Web archiving provides a way of capturing snapshots of (parts of) the web for preservation and future analysis. This paper provides an overview of techniques we have developed within the context of the EU-funded ARCOMEM (ARchiving COmmunity MEMories) project to allow multimedia web content to be leveraged during the archival process and for post-archival analysis. Through a set of use cases, we explore several practical applications of multimedia analytics within the realm of web archiving, web archive analysis and multimedia data on the web in general.
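A minimal sketch of the snapshot step that web archiving builds on: fetching a page and storing its raw bytes under a capture timestamp. The archive_page helper and the on-disk layout here are illustrative assumptions, not the ARCOMEM pipeline; real crawlers write WARC records and capture embedded resources as well.

    import hashlib
    import time
    from pathlib import Path

    import requests  # third-party: pip install requests


    def archive_page(url: str, root: str = "archive") -> Path:
        """Fetch one page and store the raw bytes under a timestamped name.

        Illustrative only: production archiving writes WARC records and
        also captures embedded resources (images, CSS, JS), not just HTML.
        """
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        stamp = time.strftime("%Y%m%d%H%M%S", time.gmtime())
        digest = hashlib.sha1(url.encode("utf-8")).hexdigest()[:12]
        out = Path(root) / f"{digest}-{stamp}.html"
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_bytes(response.content)
        return out


    if __name__ == "__main__":
        print(archive_page("https://example.org/"))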
Econometrics meets sentiment : an overview of methodology and applications
The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in an econometric analysis of the relationships between sentiment and other variables. We survey this emerging research field and refer to it as sentometrics, a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate them with empirical results, and discuss useful software.
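As a toy illustration of the qualitative-to-quantitative step described above: score texts with a lexicon, aggregate to a daily sentiment index, and regress another variable on it. The lexicon, the toy data, and the regression specification are all assumptions for the sketch, not the survey's method.

    import numpy as np

    # Tiny hand-made lexicon and corpus; real sentometrics work uses large
    # lexicons or trained classifiers (this scoring rule is an assumption).
    LEXICON = {"gain": 1.0, "growth": 1.0, "strong": 0.5,
               "loss": -1.0, "risk": -0.5, "weak": -0.5}

    daily_texts = [
        ["strong growth expected", "gain in exports"],
        ["risk of loss rising", "weak demand"],
        ["growth continues", "strong gain reported"],
        ["loss widens", "risk remains"],
    ]
    returns = np.array([0.8, -0.5, 0.6, -0.7])  # fabricated toy series


    def score(text: str) -> float:
        """Average lexicon polarity per word: qualitative -> quantitative."""
        words = text.lower().split()
        return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)


    # Step 1: turn the day's texts into one quantitative sentiment value.
    sentiment = np.array([np.mean([score(t) for t in docs])
                          for docs in daily_texts])

    # Step 2: use the index in a simple regression, returns ~ a + b*sentiment.
    X = np.column_stack([np.ones_like(sentiment), sentiment])
    (a, b), *_ = np.linalg.lstsq(X, returns, rcond=None)
    print(f"intercept={a:.3f}, sentiment beta={b:.3f}")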
MM-BigBench: Evaluating Multimodal Models on Multimodal Content Comprehension Tasks
The popularity of multimodal large language models (MLLMs) has triggered a recent surge in research efforts dedicated to evaluating these models. Nevertheless, existing evaluation studies of MLLMs primarily focus on the comprehension and reasoning of unimodal (vision) content, neglecting performance evaluations in the domain of multimodal (vision-language) content understanding. Beyond multimodal reasoning, tasks related to multimodal content comprehension necessitate a profound understanding of multimodal contexts, achieved through multimodal interaction to obtain a final answer. In this paper, we introduce a comprehensive assessment framework called MM-BigBench, which incorporates a diverse range of metrics to offer an extensive evaluation of the performance of various models and instructions across a wide spectrum of diverse multimodal content comprehension tasks. Consequently, our work complements research on the performance of MLLMs in multimodal comprehension tasks, achieving a more comprehensive and holistic evaluation of MLLMs. To begin, we employ the Best Performance metric to ascertain each model's performance upper bound on different datasets. Subsequently, the Mean Relative Gain metric offers an assessment of the overall performance of various models and instructions, while the Stability metric measures their sensitivity. Furthermore, previous research centers on evaluating models independently or solely assessing instructions, neglecting the adaptability between models and instructions. We propose the Adaptability metric to quantify the adaptability between models and instructions. Our paper evaluates a total of 20 language models (14 MLLMs) on 14 multimodal datasets spanning 6 tasks, with 10 instructions for each task, and derives novel insights. Our code will be released at https://github.com/declare-lab/MM-BigBench.
Comment: Under review
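The abstract names four metrics over a model-by-instruction score matrix. The sketch below follows those metric names but the formulas are assumptions, not the paper's exact definitions (see the project repository for those); the score matrix is random toy data.

    import numpy as np

    # scores[model, instruction]; random stand-in for benchmark accuracies.
    rng = np.random.default_rng(0)
    scores = rng.uniform(0.3, 0.9, size=(4, 10))  # 4 models, 10 instructions

    # Best Performance: each model's upper bound over instructions.
    best_performance = scores.max(axis=1)

    # Mean Relative Gain (sketch): a model's score relative to the
    # across-model mean, averaged over instructions, in percent.
    instruction_mean = scores.mean(axis=0, keepdims=True)
    mean_relative_gain = 100 * (scores / instruction_mean - 1).mean(axis=1)

    # Stability (sketch): sensitivity to instruction choice, measured as the
    # standard deviation across instructions (lower means more stable).
    stability = scores.std(axis=1)

    # Adaptability (sketch): how often an instruction is some model's best,
    # i.e. how well particular instructions pair with particular models.
    adaptability = (scores == scores.max(axis=1, keepdims=True)).mean(axis=0)

    print("best performance:", best_performance.round(3))
    print("mean relative gain %:", mean_relative_gain.round(1))
    print("stability:", stability.round(3))
    print("adaptability per instruction:", adaptability.round(2))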
Benchmarking Arabic AI with Large Language Models
With large Foundation Models (FMs), language technologies (AI in general) are entering a new paradigm: eliminating the need to develop large-scale task-specific datasets and supporting a variety of tasks through set-ups ranging from zero-shot to few-shot learning. However, understanding FMs' capabilities requires a systematic benchmarking effort that compares FM performance with state-of-the-art (SOTA) task-specific models. With that goal, past work has focused on the English language, with only a few efforts covering multiple languages. Our study contributes to ongoing research by evaluating FM performance on standard Arabic NLP and speech processing, including a range of tasks from sequence tagging to content classification across diverse domains. We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM, addressing 33 unique tasks using 59 publicly available datasets, resulting in 96 test setups. For a few tasks, the FMs perform on par with or exceed the SOTA models, but for the majority they under-perform. Given the importance of prompts for FM performance, we discuss our prompt strategies in detail and elaborate on our findings. Our future work on Arabic AI will explore few-shot prompting, expand the range of tasks, and investigate additional open-source models.
Comment: Foundation Models, Large Language Models, Arabic NLP, Arabic Speech, Arabic AI, ChatGPT Evaluation, USM Evaluation, Whisper Evaluation
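A minimal sketch of the kind of zero-shot text-task evaluation loop the study describes, using the OpenAI chat API for GPT-3.5-turbo. The prompt template, label set, and placeholder dataset are assumptions for illustration, not the paper's actual 33 tasks, 59 datasets, or prompt wordings.

    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()

    # Placeholder sentiment-classification setup (assumed task and labels).
    LABELS = ["positive", "negative", "neutral"]
    PROMPT = ("Classify the sentiment of the following Arabic text as one of "
              "positive, negative, or neutral. Reply with the label only.\n\n"
              "Text: {text}")

    test_set = [
        {"text": "...", "label": "positive"},  # elided example items
    ]


    def zero_shot_label(text: str) -> str:
        """Ask the model for a label with no in-context examples."""
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT.format(text=text)}],
            temperature=0,
        )
        answer = resp.choices[0].message.content.strip().lower()
        # Map free-form output back onto the label set; default to neutral.
        return next((l for l in LABELS if l in answer), "neutral")


    correct = sum(zero_shot_label(ex["text"]) == ex["label"] for ex in test_set)
    print(f"accuracy: {correct / len(test_set):.2%}")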