Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples
Machine learning has been a big success story of the AI resurgence, and one standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability to understand and exploit multimodal data more deeply, and in the continued incorporation of knowledge into learning techniques.
Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770
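To make the knowledge-exploitation idea concrete, here is a minimal, hypothetical Python sketch of one common pattern the paper's argument points at: augmenting data-driven text features with features looked up in a small curated knowledge base, so a classifier can fall back on knowledge where training data is sparse. The knowledge base, vocabulary, and feature scheme below are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy knowledge base mapping entities to attribute vectors ([is_drug, is_symptom]).
# Entirely illustrative; a real system would draw on a curated knowledge graph.
KNOWLEDGE_BASE = {
    "aspirin": np.array([1.0, 0.0]),
    "headache": np.array([0.0, 1.0]),
}

def text_features(tokens, vocab):
    """Purely data-driven features: a bag-of-words count vector."""
    vec = np.zeros(len(vocab))
    for tok in tokens:
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def knowledge_features(tokens):
    """Knowledge-derived features: summed attribute vectors of mentioned entities."""
    feats = np.zeros(2)
    for tok in tokens:
        feats += KNOWLEDGE_BASE.get(tok, np.zeros(2))
    return feats

def featurize(tokens, vocab):
    # Concatenating the two views lets a downstream model exploit curated
    # knowledge even when labeled examples of an entity are rare.
    return np.concatenate([text_features(tokens, vocab), knowledge_features(tokens)])

vocab = {"took": 0, "aspirin": 1, "for": 2, "headache": 3}
print(featurize(["took", "aspirin", "for", "headache"], vocab))
```

The same concatenate-then-learn pattern generalizes to richer settings, e.g., knowledge-graph embeddings joined to neural text encoders.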
Emotional Biosensing: Exploring Critical Alternatives
Emotional biosensing is rising in daily life: Data and categories claim to know how people feel and suggest what they should do about it, while CSCW explores new biosensing possibilities. Prevalent approaches to emotional biosensing are too limited, focusing on the individual, optimization, and normative categorization. Conceptual shifts can help explore alternatives: toward materiality, from representation toward performativity, inter-action to intra-action, shifting biopolitics, and shifting affect/desire. We contribute (1) synthesizing wide-ranging conceptual lenses, providing analysis connecting them to emotional biosensing design, (2) analyzing selected design exemplars to apply these lenses to design research, and (3) offering our own recommendations for designers and design researchers. In particular we suggest humility in knowledge claims with emotional biosensing, prioritizing care and affirmation over self-improvement, and exploring alternative desires. We call for critically questioning and generatively re-imagining the role of data in configuring sensing, feeling, 'the good life,' and everyday experience.
Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains. Because it is a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective computing and sentiment analysis. Various representative adversarial training algorithms are explained and discussed, each aimed at tackling diverse challenges associated with emotional AI systems. Further, we highlight a range of potential future research directions. We expect that this overview will help facilitate the development of adversarial training for affective computing and sentiment analysis in both the academic and industrial communities.
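The overview itself is high-level, but the core move in one family it covers, input-perturbation adversarial training, fits in a few lines. Below is a hedged numpy sketch in the spirit of FGSM-style methods applied to a toy logistic sentiment classifier: each example is perturbed along the loss gradient with respect to the input, and the model is trained on the perturbed point. The data, initialization, and hyperparameters are invented for illustration; this is not a specific algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4) * 0.01      # weights of a toy logistic classifier
eps, lr = 0.1, 0.5                 # perturbation radius and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "sentiment" data: the label is carried by the first feature.
X = rng.normal(size=(32, 4))
y = (X[:, 0] > 0).astype(float)

for _ in range(100):
    for x, t in zip(X, y):
        err = sigmoid(w @ x) - t
        # Adversarial example: step along the loss gradient w.r.t. the input
        # (for logistic loss, that gradient is err * w).
        x_adv = x + eps * np.sign(err * w)
        # Train on the perturbed point so the decision boundary resists it.
        w -= lr * (sigmoid(w @ x_adv) - t) * x_adv

print("train accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```

In text models the same idea is typically applied to word-embedding perturbations rather than raw inputs, while GAN-style approaches replace the fixed perturbation with a learned adversary.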
Overcoming foreign language anxiety in an emotionally intelligent tutoring system
Learning a foreign language entails cognitive and emotional obstacles. It involves complicated mental processes that affect learning and emotions. Positive emotions such as motivation, encouragement, and satisfaction increase learning achievement, while negative emotions like anxiety, frustration, and confusion may reduce performance. Foreign Language Anxiety (FLA) is a specific type of anxiety accompanying learning a foreign language. It is considered a main impediment that hinders learning, reduces achievements, and diminishes interest in learning.
Detecting FLA is the first step toward reducing and eventually overcoming it. Previously, researchers have detected FLA using physical measurements and self-reports. Physical measures are direct and hard for the learner to consciously regulate, but they are uncomfortable and require the learner to be in the lab. Self-reports are scalable because they are easy to administer both in the lab and online; however, they interrupt the learning flow, and people sometimes respond inaccurately. Sensor-free human behavioral metrics offer a scalable and practical measurement because they are feasible online or in class with minimal adjustments.
To overcome FLA, researchers have studied the use of robots, games, and intelligent tutoring systems (ITS). Within these technologies, they applied soothing music, difficulty reduction, or storytelling. These methods lessened FLA but had limitations, such as distracting the learner, failing to improve performance, and producing cognitive overload. An animated agent that provides motivational, supportive feedback could reduce FLA and increase learning.
It is necessary to measure FLA effectively, with minimal interruption, and then successfully reduce it. In the context of an e-learning system, I investigated ways to detect FLA using sensor-free human behavioral metrics, a scalable and practical method that recognizes FLA without being obtrusive. To reduce FLA, I studied emotionally adaptive feedback, in which an animated agent offers motivational, supportive feedback.
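As a concrete, hypothetical illustration of what sensor-free behavioral metrics and emotionally adaptive feedback could look like in code, here is a short Python sketch. The feature names, weights, and threshold are my own invention for illustration, not the dissertation's actual measurement model.

```python
from dataclasses import dataclass

@dataclass
class AttemptLog:
    """Behavioral metrics logged by the e-learning system for one exercise item."""
    response_time_s: float    # time taken to answer
    deletions: int            # self-corrections while typing
    hint_requests: int        # help-seeking during the item
    long_pauses: int          # pauses longer than a few seconds

def fla_risk(log: AttemptLog) -> float:
    """Combine capped, normalized metrics into a 0..1 anxiety-risk score."""
    return (0.4 * min(log.response_time_s / 60.0, 1.0)
            + 0.3 * min(log.deletions / 10.0, 1.0)
            + 0.2 * min(log.hint_requests / 3.0, 1.0)
            + 0.1 * min(log.long_pauses / 5.0, 1.0))

def feedback_style(score: float) -> str:
    """Route high-risk learners to the animated agent's motivational feedback."""
    return "motivational_supportive" if score > 0.5 else "neutral_informational"

log = AttemptLog(response_time_s=48.0, deletions=7, hint_requests=2, long_pauses=3)
print(fla_risk(log), feedback_style(fla_risk(log)))
```

The appeal of such metrics is that they are byproducts of normal interaction, so FLA can be estimated without sensors and without interrupting the learner.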
Towards Video Transformers for Automatic Human Analysis
With the aim of creating artificial systems capable of mirroring the nuanced understanding and interpretative powers inherent to human cognition, this thesis embarks on an exploration of the intersection between human analysis and Video Transformers. The objective is to harness the potential of Transformers, a promising architectural paradigm, to comprehend the intricacies of human interaction, thus paving the way for the development of empathetic and context-aware intelligent systems. In order to do so, we explore the whole Computer Vision pipeline, from data gathering, through model design and experimentation, to deeply analyzing recent developments.
Central to this study is the creation of UDIVA, an expansive multi-modal, multi-view dataset capturing dyadic face-to-face human interactions. Comprising 147 participants across 188 sessions, UDIVA integrates audio-visual recordings, heart-rate measurements, personality assessments, socio-demographic metadata, and conversational transcripts, establishing itself as the largest dataset for dyadic human interaction analysis to date. This dataset provides a rich context for probing the capabilities of Transformers within complex environments. In order to validate its utility, as well as to elucidate Transformers' ability to assimilate diverse contextual cues, we focus on addressing the challenge of personality regression within interaction scenarios. We first adapt an existing Video Transformer to handle multiple contextual sources and conduct rigorous experimentation. We empirically observe a progressive enhancement in model performance as more context is added, reinforcing the potential of Transformers to decode intricate human dynamics. Building upon these findings, the Dyadformer emerges as a novel architecture, adept at long-range modeling of dyadic interactions. By jointly modeling both participants in the interaction, as well as embedding multi-modal integration into the model itself, the Dyadformer surpasses the baseline and other concurrent approaches, underscoring Transformers' aptitude in deciphering multifaceted, noisy, and challenging tasks such as the analysis of human personality in interaction.
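To ground the dataset description, a hypothetical sketch of what one UDIVA-style session record could look like as a data structure follows; the field names and types are my own illustration of the modalities listed above, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DyadicSession:
    session_id: str
    participant_ids: tuple[str, str]    # the two interacting participants
    video_paths: list[str]              # one file per camera view
    audio_path: str
    heart_rate_bpm: list[list[float]]   # per-participant time series
    personality_scores: dict[str, list[float]]  # e.g., Big Five per participant
    sociodemographics: dict[str, dict]  # age, gender, and similar metadata
    transcript: str                     # conversational transcript

session = DyadicSession(
    session_id="S001",
    participant_ids=("P012", "P047"),
    video_paths=["S001_view1.mp4", "S001_view2.mp4"],
    audio_path="S001.wav",
    heart_rate_bpm=[[72.0, 74.5], [68.0, 69.2]],
    personality_scores={"P012": [3.4, 2.9, 4.1, 3.8, 3.2],
                        "P047": [2.8, 3.6, 3.9, 4.0, 2.7]},
    sociodemographics={"P012": {"age": 24}, "P047": {"age": 31}},
    transcript="...",
)
print(session.session_id, session.participant_ids)
```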
Nonetheless, these experiments unveil challenges ubiquitous in training Transformers, particularly in managing overfitting due to their demand for extensive datasets. Consequently, we conclude this thesis with a comprehensive investigation into Video Transformers, analyzing topics ranging from architectural designs and training strategies to input embedding and tokenization, traversing multi-modality and specific applications. Across these, we highlight trends which optimally harness spatio-temporal representations that handle video redundancy and high dimensionality. A culminating performance comparison is conducted in the realm of video action classification, spotlighting strategies that exhibit superior efficacy, even compared to traditional CNN-based methods.
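As a small, hedged illustration of the tokenization theme in that investigation, the numpy sketch below slices a video into spatio-temporal "tubelets", flattens each into a token, and marks a naive concatenation point where a second modality's tokens could join the sequence. The shapes are illustrative, and this is not the Dyadformer itself.

```python
import numpy as np

def tubelet_tokens(video, t=2, p=16):
    """Slice a (T, H, W, C) video into t x p x p tubelets, one flattened token each."""
    T, H, W, C = video.shape
    tokens = []
    for ti in range(0, T - t + 1, t):
        for hi in range(0, H - p + 1, p):
            for wi in range(0, W - p + 1, p):
                tokens.append(video[ti:ti + t, hi:hi + p, wi:wi + p].ravel())
    return np.stack(tokens)

video = np.random.rand(8, 64, 64, 3)        # 8 frames of 64x64 RGB
vid_tokens = tubelet_tokens(video)          # (4*4*4, 2*16*16*3) = (64, 1536)
audio_tokens = np.random.rand(64, 1536)     # placeholder second modality

# Joint multimodal modeling: concatenate token streams before the encoder so
# that self-attention can relate cues across modalities (and, in a dyadic
# setting, across participants).
joint = np.concatenate([vid_tokens, audio_tokens], axis=0)
print(vid_tokens.shape, joint.shape)        # (64, 1536) (128, 1536)
```

Tokenizing in time as well as space is one way such architectures tame video's redundancy and high dimensionality: each token already summarizes a small spatio-temporal volume.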