Multi-modal Machine Learning in Engineering Design: A Review and Future Directions
In the rapidly advancing field of multi-modal machine learning (MMML), the
convergence of multiple data modalities has the potential to reshape various
applications. This paper presents a comprehensive overview of the current
state, advancements, and challenges of MMML within the sphere of engineering
design. The review begins with a deep dive into five fundamental concepts of
MMML: multi-modal information representation, fusion, alignment, translation,
and co-learning. Following this, we explore the cutting-edge applications of
MMML, placing a particular emphasis on tasks pertinent to engineering design,
such as cross-modal synthesis, multi-modal prediction, and cross-modal
information retrieval. Through this comprehensive overview, we highlight the
inherent challenges in adopting MMML in engineering design and offer potential directions for future research. To spur the continued evolution of MMML in engineering design, we advocate concentrated efforts to construct extensive multi-modal design datasets, develop effective data-driven MMML techniques tailored to design applications, and enhance the scalability and interpretability of MMML models. As the next generation of intelligent design tools, MMML models hold strong promise to reshape how products are designed.
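To make one of these concepts concrete, below is a minimal sketch of feature-level fusion of two design modalities, say a rendered design image and a text specification; the module names, dimensions, and the concatenation strategy are illustrative assumptions rather than details from any reviewed paper.

```python
# Minimal sketch of feature-level fusion of two design modalities (illustrative only).
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=300, hidden=256, n_outputs=1):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_encoder = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        # Fusion by concatenating the two modality embeddings.
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_outputs))

    def forward(self, img_feat, txt_feat):
        z_img = self.img_encoder(img_feat)    # image-branch embedding
        z_txt = self.txt_encoder(txt_feat)    # text-branch embedding
        fused = torch.cat([z_img, z_txt], dim=-1)
        return self.head(fused)               # e.g. a predicted design property

model = FusionModel()
pred = model(torch.randn(8, 1024), torch.randn(8, 300))  # a batch of 8 designs
```

Alignment and translation would replace the concatenation with, respectively, a correspondence objective between the two embeddings or a decoder that maps one modality into the other.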
Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Multimodal machine learning is a vibrant multi-disciplinary research field
that aims to design computer agents with intelligent capabilities such as
understanding, reasoning, and learning through integrating multiple
communicative modalities, including linguistic, acoustic, visual, tactile, and
physiological messages. With the recent interest in video understanding,
embodied autonomous agents, text-to-image generation, and multisensor fusion in
application domains such as healthcare and robotics, multimodal machine
learning has brought unique computational and theoretical challenges to the
machine learning community given the heterogeneity of data sources and the
interconnections often found between modalities. However, the breadth of
progress in multimodal research has made it difficult to identify the common
themes and open questions in the field. By synthesizing a broad range of
application domains and theoretical frameworks from both historical and recent
perspectives, this paper is designed to provide an overview of the
computational and theoretical foundations of multimodal machine learning. We
start by defining two key principles of modality heterogeneity and
interconnections that have driven subsequent innovations, and propose a
taxonomy of six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification, covering historical and recent trends. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.
SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization
In real-world scenarios, achieving domain generalization (DG) presents
significant challenges as models are required to generalize to unknown target
distributions. Generalizing to unseen multi-modal distributions poses even
greater difficulties due to the distinct properties exhibited by different
modalities. To overcome the challenges of achieving domain generalization in
multi-modal scenarios, we propose SimMMDG, a simple yet effective multi-modal
DG framework. We argue that mapping features from different modalities into the
same embedding space impedes model generalization. To address this, we propose
splitting the features within each modality into modality-specific and
modality-shared components. We employ supervised contrastive learning on the
modality-shared features to ensure they possess joint properties and impose
distance constraints on modality-specific features to promote diversity. In
addition, we introduce a cross-modal translation module to regularize the
learned features, which can also be used for missing-modality generalization.
We demonstrate that our framework is theoretically well-supported and achieves
strong performance in multi-modal DG on the EPIC-Kitchens dataset and the novel
Human-Animal-Cartoon (HAC) dataset introduced in this paper. Our source code and HAC dataset are available at https://github.com/donghao51/SimMMDG. (NeurIPS 2023)
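Based only on the description above, a rough sketch of the two loss components, supervised contrastive learning on the modality-shared halves and a distance constraint on the modality-specific halves, might look as follows; the split point, loss formulations, and margin are assumptions, and the authors' released code at the link above is the authoritative implementation.

```python
# Sketch of SimMMDG-style feature splitting and losses (formulations are assumptions).
import torch
import torch.nn.functional as F

def split_features(feat, shared_dim):
    """Split one modality's features into modality-shared and modality-specific parts."""
    return feat[:, :shared_dim], feat[:, shared_dim:]

def supervised_contrastive(shared_a, shared_b, labels, temperature=0.1):
    """Pull together modality-shared features that carry the same class label
    (simplified InfoNCE-style formulation; the paper's exact loss may differ)."""
    z = F.normalize(torch.cat([shared_a, shared_b]), dim=1)
    y = torch.cat([labels, labels])
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(y), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))          # exclude self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask    # same-label pairs
    return -(log_prob.masked_fill(~pos, 0.0)).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def specific_distance_penalty(spec_a, spec_b, margin=1.0):
    """Keep modality-specific features of the two modalities apart to promote diversity."""
    return F.relu(margin - F.pairwise_distance(spec_a, spec_b)).mean()

# Usage (illustrative): video and audio features for a batch of 16 labelled clips.
video, audio = torch.randn(16, 512), torch.randn(16, 512)
labels = torch.randint(0, 8, (16,))
v_shared, v_spec = split_features(video, 256)
a_shared, a_spec = split_features(audio, 256)
loss = supervised_contrastive(v_shared, a_shared, labels) \
       + specific_distance_penalty(v_spec, a_spec)
```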
New ideas and trends in deep multimodal content understanding: a review
The focus of this survey is on the analysis of two modalities in multimodal deep learning: image and text. Unlike classic reviews of deep learning, where monomodal image classifiers such as VGG, ResNet and the Inception module are central topics, this paper examines recent multimodal deep models and structures, including auto-encoders, generative adversarial nets and their variants. These models go beyond simple image classifiers in that they can perform uni-directional (e.g. image captioning, image generation) and bi-directional (e.g. cross-modal retrieval, visual question answering) multimodal tasks. In addition, we analyze two aspects of the challenge of better content understanding in deep multimodal applications. We then introduce current ideas and trends in deep multimodal feature learning, such as feature embedding approaches and objective function design, which are crucial in overcoming the aforementioned challenges. Finally, we include several promising directions for future research.
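As one concrete instance of the objective-function design these models depend on, a minimal symmetric contrastive loss for joint image-text embedding, the basis of bi-directional cross-modal retrieval, can be sketched as follows; the temperature and the use of in-batch negatives are illustrative choices, not taken from the surveyed works.

```python
# Minimal joint-embedding objective for cross-modal retrieval (illustrative sketch).
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Matching image/text pairs (same row index) are pulled together in the joint
    space while mismatched pairs within the batch are pushed apart."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.t() / temperature     # pairwise image-text similarities
    targets = torch.arange(len(img))         # the i-th image matches the i-th caption
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = symmetric_contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))
```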
Productivity Measurement of Call Centre Agents using a Multimodal Classification Approach
Call centre channels play a cornerstone role in business communications and transactions, especially in challenging business situations. Operational efficiency, service quality, and resource productivity are core aspects of call centres' competitive advantage in rapid market competition. Performance evaluation in call centres is challenging due to subjective human judgement, the manual sorting of massive call volumes, and inconsistency across different raters. These challenges reduce operational efficiency and lead to frustrated customers. This study aims to automate performance evaluation in call centres using various deep learning approaches. Calls recorded in a call centre are modelled and classified into high- or low-performance evaluations, categorised as productive or nonproductive calls.
The proposed conceptual model uses a deep learning network approach to model the recorded calls as text and speech. It is based on the following: 1) focus on the technical part of agent performance, 2) objective evaluation of the corpus, 3) extension of features for both text and speech, and 4) combination of the best accuracy from text and speech data using a multimodal structure. Accordingly, a diarisation algorithm separates the parts of the call where the agent is speaking from those where the customer is. Manual annotation is also required to divide the modelling corpus into productive and nonproductive calls (for supervised training); Krippendorff's alpha was applied to avoid subjectivity in the manual annotation. Arabic speech recognition is then developed to transcribe the speech into text. The text features are word embeddings produced by an embedding layer. For the speech features, several configurations of Mel Frequency Cepstral Coefficients (MFCC) augmented with Low-Level Descriptors (LLD) were explored to improve classification accuracy. The data modelling architectures for speech and text are based on CNNs, BiLSTMs, and an attention layer. The multimodal approach then concatenates the text and speech models into a joint representation to improve classification accuracy (a condensed sketch is given after the contributions list below).
The main contributions of this thesis are:
• Developing an Arabic Speech recognition method for automatic transcription of speech into text.
• Designing several DNN architectures to improve performance evaluation using speech features based on MFCC and LLD.
• Developing a Max Weight Similarity (MWS) function to outperform the SoftMax function used in the attention layer.
• Proposing a multimodal approach that combines the text and speech models for the best performance evaluation.
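As referenced above, a condensed sketch of the concatenation-based joint representation of the text and speech branches might look as follows; the layer types, dimensions, and mean-pooling are illustrative assumptions, and the attention layer and MWS function described in the thesis are omitted.

```python
# Illustrative sketch of a joint text + speech classifier for productive vs. nonproductive calls.
import torch
import torch.nn as nn

class CallClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, mfcc_dim=39, hidden=64):
        super().__init__()
        # Text branch: embedded transcribed words -> BiLSTM.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.text_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Speech branch: MFCC/LLD frames -> BiLSTM.
        self.speech_rnn = nn.LSTM(mfcc_dim, hidden, batch_first=True, bidirectional=True)
        # Joint representation: concatenation of the two pooled branch outputs.
        self.classifier = nn.Linear(4 * hidden, 2)   # productive vs. nonproductive

    def forward(self, tokens, mfcc_frames):
        t, _ = self.text_rnn(self.embed(tokens))
        s, _ = self.speech_rnn(mfcc_frames)
        joint = torch.cat([t.mean(dim=1), s.mean(dim=1)], dim=-1)  # mean-pool over time
        return self.classifier(joint)

model = CallClassifier()
logits = model(torch.randint(0, 20000, (4, 50)), torch.randn(4, 300, 39))
```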
The Challenge of Spoken Language Systems: Research Directions for the Nineties
A spoken language system combines speech recognition, natural language processing and human interface technology. It functions by recognizing the person's words, interpreting the sequence of words to obtain a meaning in terms of the application, and providing an appropriate response back to the user. Potential applications of spoken language systems range from simple tasks, such as retrieving information from an existing database (traffic reports, airline schedules), to interactive problem solving tasks involving complex planning and reasoning (travel planning, traffic routing), to support for multilingual interactions. We examine eight key areas in which basic research is needed to produce spoken language systems: (1) robust speech recognition; (2) automatic training and adaptation; (3) spontaneous speech; (4) dialogue models; (5) natural language response generation; (6) speech synthesis and speech generation; (7) multilingual systems; and (8) interactive multimodal systems. In each area, we identify key research challenges, the infrastructure needed to support research, and the expected benefits. We conclude by reviewing the need for multidisciplinary research, for development of shared corpora and related resources, for computational support, and for rapid communication among researchers. The successful development of this technology will increase accessibility of computers to a wide range of users, will facilitate multinational communication and trade, and will create new research specialties and jobs in this rapidly expanding area.
Joint Session-Item Encoding for Session-Based Recommendation: A Metric-Learning Approach with Temporal Smoothing
In recommendation systems, a system is in charge of providing relevant recommendations to users who have either a clear target in mind or only a vague mental representation of it. Session-based recommendation targets a specific scenario in recommendation systems where users are anonymous. The recommendation system must therefore work under more challenging conditions, having only the current session from which to extract user preferences and provide recommendations.
This setting requires a model capable of understanding and relating different interactions across different sessions involving different items. This dissertation captures such relationships in a jointly learned space for sessions and items. The space is built using metric learning, so that the distances between its elements (session and item embeddings) reflect how they relate to each other. We then use this learned space as the intermediary for providing relevant recommendations. This work continues and extends other relevant work showing the potential of metric learning for the session-based recommendation field.
This dissertation makes three significant contributions: (i) a novel joint session-item encoding model with temporal smoothing, with fewer parameters and the inclusion of temporal characteristics in learning (temporal proximity and temporal recency); (ii) enhanced recommendation performance, surpassing other state-of-the-art metric-learning models for session-based recommendation; and (iii) a thorough critical analysis addressing and raising awareness of common problems in the field of session-based recommendation, discussing the reasons behind them and their impact on model performance.
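A minimal, hedged illustration of the metric-learning idea behind the joint session-item space is sketched below: sessions and items are embedded into one space and trained with a triplet-style objective so that a session lies closer to its next item than to a sampled negative. The mean-pooled session encoder and the margin are assumptions; the dissertation's joint encoder additionally incorporates temporal smoothing.

```python
# Illustrative sketch of a shared session-item metric space (not the dissertation's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SessionItemEncoder(nn.Module):
    """Embed items and sessions into one shared metric space."""
    def __init__(self, n_items=10000, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)

    def encode_items(self, item_ids):
        return F.normalize(self.item_emb(item_ids), dim=-1)

    def encode_session(self, session_item_ids):
        # Represent a session by pooling the embeddings of its clicked items.
        return F.normalize(self.item_emb(session_item_ids).mean(dim=1), dim=-1)

model = SessionItemEncoder()
sessions = torch.randint(0, 10000, (32, 5))    # 32 sessions of 5 clicks each
pos, neg = torch.randint(0, 10000, (2, 32))    # next item vs. a sampled negative
anchor = model.encode_session(sessions)
loss = F.triplet_margin_loss(anchor, model.encode_items(pos),
                             model.encode_items(neg), margin=0.3)
# At inference time, recommend the items whose embeddings are nearest to the session embedding.
```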
Multimodal machine learning in medical screenings
The healthcare industry, with its high demand and standards, has long been considered a crucial area for technology-based innovation. However, the medical field often relies on experience-based evaluation. Limited resources, overloaded capacity, and a lack of accessibility can hinder the timely delivery of medical care and diagnosis. In light of these challenges, automated medical screening as a decision-making aid is highly recommended. With the increasing availability of data and the need to explore the complementary effect among modalities, multimodal machine learning has emerged as a promising area of technology. Its impact has been witnessed across a wide range of domains, prompting the question of how far machine learning can be leveraged to automate processes in even more complex and high-risk sectors.
This paper delves into multimodal machine learning for automated medical screening and evaluates its potential for mental disorder detection, a highly important area of healthcare. First, we conduct a scoping review targeted at high-impact papers to highlight the trends and directions of multimodal machine learning in screening prevalent mental disorders such as depression, stress, and bipolar disorder. The review provides a comprehensive list of popular datasets and extensively studied modalities. It also proposes an end-to-end pipeline for multimodal machine learning applications, covering essential steps from preprocessing, representation, and fusion to modelling and evaluation. While cross-modality interaction has been considered a promising mechanism for fusing multiple modalities, few existing multimodal fusion methods employ it. This study therefore investigates multimodal fusion in more detail through the proposal of Autofusion, an autoencoder-infused fusion technique that harnesses the cross-modality interaction among different modalities. The technique is evaluated on DementiaBank's Pitt corpus to detect Alzheimer's disease. Autofusion achieves a promising performance of 79.89% accuracy, 83.85% recall, 81.72% precision, and an F1 score of 82.47%. It consistently outperforms all unimodal methods by an average of 5.24% across all metrics, and it also outperforms both early and late fusion; against the late-fusion hard-voting technique in particular, it improves by an average of 20% across all metrics. Further, empirical results show that the cross-modality interaction term enhances model performance by 2-3% across metrics. This research highlights the promising impact of cross-modality interaction in multimodal machine learning and calls for further research to unlock its full potential.
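The abstract does not spell out the Autofusion architecture, so the following is only a loose, hypothetical sketch of autoencoder-infused fusion with an explicit cross-modality interaction term (here an element-wise product of the two modality encodings); the modalities, dimensions, and losses are assumptions.

```python
# Hypothetical sketch of autoencoder-infused fusion with a cross-modality interaction term.
import torch
import torch.nn as nn

class AutoencoderFusion(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, latent=64):
        super().__init__()
        self.enc_text = nn.Linear(text_dim, latent)
        self.enc_audio = nn.Linear(audio_dim, latent)
        # The fused code carries both encodings plus their element-wise interaction.
        self.dec_text = nn.Linear(3 * latent, text_dim)
        self.dec_audio = nn.Linear(3 * latent, audio_dim)
        self.classifier = nn.Linear(3 * latent, 2)   # e.g. disease vs. control

    def forward(self, text_feat, audio_feat):
        zt = torch.relu(self.enc_text(text_feat))
        za = torch.relu(self.enc_audio(audio_feat))
        fused = torch.cat([zt, za, zt * za], dim=-1)  # zt * za is the interaction term
        return self.classifier(fused), self.dec_text(fused), self.dec_audio(fused)

model = AutoencoderFusion()
text, audio = torch.randn(8, 768), torch.randn(8, 128)
logits, text_rec, audio_rec = model(text, audio)
# Classification loss plus autoencoder-style reconstruction losses on both modalities.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,))) \
       + nn.functional.mse_loss(text_rec, text) + nn.functional.mse_loss(audio_rec, audio)
```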
- …