CMFN: Cross-Modal Fusion Network for Irregular Scene Text Recognition
Scene text recognition, as a cross-modal task involving vision and text, is
an important research topic in computer vision. Most existing methods use
language models to extract semantic information for optimizing visual
recognition. However, the guidance of visual cues is ignored in the process of
semantic mining, which limits the performance of the algorithm in recognizing
irregular scene text. To tackle this issue, we propose a novel cross-modal
fusion network (CMFN) for irregular scene text recognition, which incorporates
visual cues into the semantic mining process. Specifically, CMFN consists of a
position self-enhanced encoder, a visual recognition branch and an iterative
semantic recognition branch. The position self-enhanced encoder provides
character sequence position encoding for both the visual recognition branch and
the iterative semantic recognition branch. The visual recognition branch
carries out visual recognition based on the visual features extracted by CNN
and the position encoding information provided by the position self-enhanced
encoder. The iterative semantic recognition branch, which consists of a
language recognition module and a cross-modal fusion gate, simulates the way
that humans recognize scene text and integrates cross-modal visual cues for
text recognition. The experiments demonstrate that the proposed CMFN algorithm
achieves comparable performance to state-of-the-art algorithms, indicating its
effectiveness. Comment: Accepted to ICONIP 202
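The cross-modal fusion gate described above can be sketched as a learned, per-dimension convex combination of the visual and semantic branch features. The sigmoid gate parameterization and dimensions below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fusion_gate(visual, semantic, W, b):
    """Gated fusion of a visual feature and a semantic feature
    (illustrative sketch). The gate decides, per dimension, how much
    of the visual cue vs. the semantic cue to keep."""
    g = sigmoid(np.concatenate([visual, semantic]) @ W + b)  # gate in (0, 1)
    return g * visual + (1.0 - g) * semantic

d = 8                                  # hypothetical feature dimension
W = rng.normal(size=(2 * d, d))        # learned in practice; random here
b = np.zeros(d)
v = rng.normal(size=d)   # visual-branch feature for one character
s = rng.normal(size=d)   # semantic-branch feature for the same character
fused = fusion_gate(v, s, W, b)
print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1), the fused feature is an elementwise interpolation between the two branches, so neither modality can be fully discarded.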
Logographic Information Aids Learning Better Representations for Natural Language Inference
Statistical language models conventionally implement representation learning
based on the contextual distribution of words or other formal units, whereas
any information related to the logographic features of written text is often
ignored, on the assumption that it can be recovered from co-occurrence
statistics. On the other hand, as language models become larger and require
more data to learn reliable representations, such assumptions may begin to
break down, especially under conditions of data sparsity. Many languages, including
Chinese and Vietnamese, use logographic writing systems where surface forms are
represented as a visual organization of smaller graphemic units, which often
contain many semantic cues. In this paper, we present a novel study which
explores the benefits of providing language models with logographic information
in learning better semantic representations. We test our hypothesis in the
natural language inference (NLI) task by evaluating the benefit of computing
multi-modal representations that combine contextual information with glyph
information. Our evaluation results in six languages with different typologies
and writing systems suggest significant benefits of using multi-modal
embeddings in languages with logographic systems, especially for words with
sparser occurrence statistics. Comment: Accepted by AACL Findings
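One simple way to build the multi-modal representation the paper describes is to concatenate a contextual word embedding with a glyph-derived vector and project the result. The dimensions, the projection, and the sources of the two vectors below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def combine(contextual, glyph, W):
    """Fuse a contextual embedding with a glyph embedding by
    concatenation followed by a linear projection (one minimal way
    to form a multi-modal word representation)."""
    return np.concatenate([contextual, glyph]) @ W

d_ctx, d_glyph, d_out = 16, 4, 8       # hypothetical sizes
W = rng.normal(size=(d_ctx + d_glyph, d_out))  # learned in practice

ctx = rng.normal(size=d_ctx)    # e.g. from a pre-trained language model
gly = rng.normal(size=d_glyph)  # e.g. from a CNN over the rendered character
emb = combine(ctx, gly, W)
print(emb.shape)  # (8,)
```

In a real model the glyph vector would come from an image encoder over rendered characters and the projection would be trained end-to-end with the NLI objective.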
Continuous User Authentication Using Multi-Modal Biometrics
It is commonly acknowledged that mobile devices now form an integral part of an individual’s everyday life. Modern handheld mobile devices are capable of providing a wide range of services and applications over multiple networks. With this increasing capability and accessibility, they introduce additional demands in terms of security.
This thesis explores the need for authentication on mobile devices and proposes a novel mechanism to improve current techniques. The research begins with an intensive review of mobile technologies and the security challenges that mobile devices face, illustrating the imperative of authentication on mobile devices. The research then highlights the existing authentication mechanisms and their wide range of weaknesses. To this end, biometric approaches are identified as an appropriate solution and an opportunity for security to be maintained beyond point-of-entry. Indeed, by utilising behavioural biometric techniques, authentication can be performed in a continuous and transparent fashion.
This research investigated three behavioural biometric techniques based on SMS texting activities and messages, looking to apply these techniques as a multi-modal biometric authentication method for mobile devices. The results showed that linguistic profiling, keystroke dynamics and behaviour profiling can be used to discriminate users with overall Equal Error Rates (EERs) of 12.8%, 20.8% and 9.2% respectively. By combining the biometrics, the results showed clearly that classification performance is better than with any single biometric technique, achieving an EER of 3.3%. Based on these findings, a novel architecture for multi-modal biometric authentication on mobile devices is proposed. The framework is able to provide robust, continuous and transparent authentication in standalone and server-client modes regardless of mobile hardware configuration, and to continuously maintain the security status of the device. With a high security status, users are permitted to access sensitive services and data; with a low security status, users are required to re-authenticate before accessing sensitive services or data.
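The EER figures quoted above are the operating points where the false accept and false reject rates coincide. A minimal sketch of estimating the EER from genuine and impostor match scores, assuming higher scores mean a better match (the thesis's exact scoring and fusion procedure may differ):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: sweep a decision threshold over all observed
    scores and return the point where the false accept rate (FAR) and
    false reject rate (FRR) are closest."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_far, best_frr = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0

# Well-separated score distributions give a low EER;
# heavy overlap pushes it toward 0.5.
print(equal_error_rate([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))  # 0.0
```

Score-level fusion of the three modalities (e.g. averaging per-modality scores before thresholding) is one common way such a combination can lower the EER below that of any single technique.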
A Survey on Deep Multi-modal Learning for Body Language Recognition and Generation
Body language (BL) refers to the non-verbal communication expressed through
physical movements, gestures, facial expressions, and postures. It is a form of
communication that conveys information, emotions, attitudes, and intentions
without the use of spoken or written words. It plays a crucial role in
interpersonal interactions and can complement or even override verbal
communication. Deep multi-modal learning techniques have shown promise in
understanding and analyzing these diverse aspects of BL. The survey emphasizes
their applications to BL generation and recognition. Several common BLs are
considered, i.e., Sign Language (SL), Cued Speech (CS), Co-speech (CoS), and
Talking Head (TH), and we have conducted an analysis and established the
connections among these four BLs for the first time. Their generation and
recognition often involve multi-modal approaches. Benchmark datasets for BL
research are well collected and organized, along with the evaluation of SOTA
methods on these datasets. The survey highlights challenges such as limited
labeled data, multi-modal learning, and the need for domain adaptation to
generalize models to unseen speakers or languages. Future research directions
are presented, including exploring self-supervised learning techniques,
integrating contextual information from other modalities, and exploiting
large-scale pre-trained multi-modal models. In summary, this survey paper
provides, for the first time, a comprehensive understanding of deep multi-modal
learning for various BL generation and recognition tasks. By analyzing
advancements, challenges, and future directions, it serves as a valuable
resource for researchers and practitioners advancing this field. In addition, we maintain
a continuously updated paper list for deep multi-modal learning for BL
recognition and generation: https://github.com/wentaoL86/awesome-body-language