A review of abnormal behavior detection in activities of daily living
Abnormal behavior detection (ABD) systems automatically identify and recognize abnormal behavior from various input data types, such as sensor-based and vision-based input. Despite the attention ABD systems have received, the number of studies on ABD in activities of daily living (ADL) remains limited. Given the increasing rate of accidents among the elderly at home, ABD in ADL deserves equal attention, as it can prevent accidents by raising an alarm when abnormal behavior such as a fall is detected. In this study, we compare and contrast the design of ABD systems for ADL, from input data types (sensor-based and vision-based input) to modeling techniques (conventional and deep learning approaches). We survey the available public datasets and propose solutions to one of the field's significant issues: the lack of datasets for ABD in ADL. This work aims to guide new research toward a better understanding of ABD in ADL and to serve as a reference for future studies of Ambient Assisted Living amid the growing smart home trend.
CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D Recognition
Vision-Language models like CLIP have been widely adopted for various tasks
due to their impressive zero-shot capabilities. However, CLIP is not suitable
for extracting 3D geometric features as it was trained on only images and text
with natural language supervision. We address this limitation and propose a new framework, CG3D (CLIP Goes 3D), in which a 3D encoder is trained to exhibit zero-shot capabilities. CG3D is trained on triplets of point clouds, corresponding rendered 2D images, and texts, using natural language
supervision. To align the features in a multimodal embedding space, we utilize
contrastive loss on 3D features obtained from the 3D encoder, as well as visual
and text features extracted from CLIP. We note that the natural images used to
train CLIP and the rendered 2D images in CG3D have a distribution shift.
Attempting to train the visual and text encoders to account for this shift results in catastrophic forgetting and a notable decrease in performance. To
solve this, we employ prompt tuning and introduce trainable parameters in the
input space to shift CLIP towards the 3D pre-training dataset utilized in CG3D.
We extensively test our pre-trained CG3D framework and demonstrate its
impressive capabilities in zero-shot, open scene understanding, and retrieval
tasks. Further, it also provides a strong initialization for fine-tuning on downstream 3D recognition tasks.
Website: https://jeya-maria-jose.github.io/cg3d-web
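To make the alignment objective concrete, below is a minimal sketch (not the authors' released code) of the tri-modal contrastive training step described above; the batch size, embedding width, and tensor names are illustrative assumptions.

```python
# Minimal sketch of CG3D-style tri-modal contrastive alignment (illustrative;
# the real CG3D additionally learns prompt tokens in CLIP's input space).
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature            # (B, B) cosine-similarity logits
    targets = torch.arange(a.size(0))           # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# One training step: z_3d comes from the trainable 3D encoder; z_img and z_txt
# come from the (frozen) CLIP encoders, steered by learnable prompt tokens
# rather than by updating CLIP's weights.
B, D = 32, 512                                  # assumed batch size / embedding dim
z_3d, z_img, z_txt = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
loss = info_nce(z_3d, z_img) + info_nce(z_3d, z_txt)
```

Keeping CLIP frozen and training only the 3D encoder plus the prompt parameters is what avoids the catastrophic forgetting noted above.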
Identity, Power, and Prestige in Switzerland's Multilingual Education
Switzerland is known for its multilingualism, yet not all languages are represented equally in society. The situation is exacerbated by the influx of heritage languages and English through migration and globalization, which challenges the traditional education system. This study is the first to investigate how schools in Grisons, Fribourg, and Zurich negotiate the neoliberal forces driving a growing need for English, a romanticized view of national languages, and the social justice perspective of institutionalizing heritage languages. It uncovers issues of power and legitimacy and showcases students' and teachers' complex identities to advocate for equitable multilingual education.
Knowledge-refined Denoising Network for Robust Recommendation
Knowledge graphs (KGs), which contain rich side information, have become essential for boosting recommendation performance and improving its explainability. However, existing knowledge-aware recommendation methods directly propagate information over the KG and the user-item bipartite graph, ignoring the impact of task-irrelevant knowledge propagation and the vulnerability to interaction noise, which limits their performance. To
address these issues, we propose a robust knowledge-aware recommendation framework, the Knowledge-refined Denoising Network (KRDN), which prunes task-irrelevant knowledge associations and noisy implicit feedback
simultaneously. KRDN consists of an adaptive knowledge refining strategy and a
contrastive denoising mechanism, which are able to automatically distill
high-quality KG triplets for aggregation and prune noisy implicit feedback
respectively. In addition, we design a self-adapted loss function and a gradient estimator for model optimization. Experimental results on three benchmark datasets demonstrate the effectiveness and robustness of KRDN over state-of-the-art knowledge-aware methods such as KGIN, MCCLK, and KGCL; KRDN also outperforms robust recommendation models such as SGL and SimGCL.
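As a rough illustration of the pruning principle (not KRDN's actual adaptive refining strategy), the hypothetical sketch below scores each KG triplet by the compatibility of learned entity and relation embeddings and keeps only the highest-scoring fraction for aggregation.

```python
# Hypothetical triplet-pruning sketch: keep only KG triplets whose
# (head, relation, tail) embeddings are mutually compatible.
import torch

def refine_triplets(head, rel, tail, ent_emb, rel_emb, keep_ratio=0.5):
    """head/rel/tail: (T,) index tensors; returns a boolean keep-mask of shape (T,)."""
    score = (ent_emb[head] * rel_emb[rel] * ent_emb[tail]).sum(-1)  # DistMult-style score
    k = max(1, int(keep_ratio * score.numel()))
    mask = torch.zeros_like(score, dtype=torch.bool)
    mask[torch.topk(score, k).indices] = True                       # retain top-k triplets
    return mask

# Purely illustrative usage with random embeddings and indices:
ent, rel = torch.randn(100, 64), torch.randn(10, 64)
h, r, t = (torch.randint(0, 100, (500,)), torch.randint(0, 10, (500,)),
           torch.randint(0, 100, (500,)))
keep = refine_triplets(h, r, t, ent, rel)   # feed only kept triplets to aggregation
```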
Fillers in Spoken Language Understanding: Computational and Psycholinguistic Perspectives
Disfluencies (i.e. interruptions in the regular flow of speech) are ubiquitous in spoken discourse. Fillers ("uh", "um") are the most frequent kind of disfluency. Yet, to the best of our knowledge, there is no resource that brings together the research perspectives on these speech events that influence Spoken Language Understanding (SLU). The aim of this article is to synthesise a breadth of perspectives in
a holistic way; i.e. from considering underlying (psycho)linguistic theory, to
their annotation and consideration in Automatic Speech Recognition (ASR) and
SLU systems, and lastly, their study from a generation standpoint. This article aims to present these perspectives in an approachable way to the SLU and Conversational AI community and, moving forward, to discuss what we believe are the trends and challenges in each area.
To appear in the TAL Journal.
ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT
Large language models (LLMs) such as ChatGPT have recently demonstrated significant potential in mathematical reasoning, providing a valuable reasoning paradigm consistent with human natural language. However, LLMs currently have difficulty bridging perception, language understanding, and reasoning capabilities, owing to the incompatibility of the underlying information flow among them, which makes it challenging to accomplish tasks autonomously. On the other hand, abductive learning (ABL) frameworks, which integrate the two abilities of perception and reasoning, have seen significant success in the inverse decipherment of incomplete facts, but they are limited by the lack of semantic understanding of logical reasoning rules and by their dependence on complicated domain knowledge representation. This paper presents a novel method (ChatABL) for integrating LLMs into the ABL framework, aiming to unify the three abilities in a more user-friendly and understandable manner. The proposed method uses the strengths of LLMs in understanding and logical reasoning to correct incomplete logical facts and thereby optimize the performance of the perceptual module, by summarizing and reorganizing reasoning rules represented in natural language. In turn, the perceptual module provides the necessary reasoning examples to the LLM in natural language. The variable-length handwritten equation deciphering task, an abstract form of Mayan calendar decoding, is used as a testbed to demonstrate that ChatABL's reasoning ability surpasses that of most existing state-of-the-art methods, as supported by comparative studies. To the best of our knowledge, ChatABL is the first attempt to explore a new path toward human-level cognitive ability through natural language interaction with ChatGPT.
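The interaction pattern can be summarized as a loop in which perception proposes symbolic facts and the LLM abduces rule-consistent corrections. The sketch below is a loose paraphrase of that loop under assumed interfaces; `query_llm`, the prompt format, and `perception_model` are all placeholders, not the paper's actual implementation.

```python
# Simplified abductive-learning loop with an LLM as the reasoner (illustrative).
def abl_step(perception_model, inputs, rules_text, query_llm):
    # 1. Perception proposes possibly incomplete or inconsistent symbolic facts.
    facts = [perception_model(x) for x in inputs]
    # 2. The LLM abduces corrected facts from rules stated in natural language.
    prompt = (f"Rules: {rules_text}\n"
              f"Observed facts: {facts}\n"
              "Return a corrected, rule-consistent version of these facts.")
    corrected = query_llm(prompt)
    # 3. Corrected facts serve as pseudo-labels for re-training perception.
    return corrected
```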
Learning Robust Visual-Semantic Embedding for Generalizable Person Re-identification
Generalizable person re-identification (Re-ID) is an active research topic in machine learning and computer vision, and it plays a significant role in realistic scenarios thanks to its applications in public security and video surveillance. However, previous methods mainly focus on visual representation learning while neglecting the potential of semantic features during training, which easily leads to poor generalization when adapting to a new domain. In this paper, we propose a Multi-Modal Equivalent Transformer (MMET) for more robust visual-semantic embedding learning on visual, textual, and visual-textual tasks. To further strengthen robust feature learning in the transformer, we introduce a dynamic masking mechanism, the Masked Multimodal Modeling (MMM) strategy, which masks both image patches and text tokens; it works jointly on multimodal or unimodal data and significantly boosts the performance of generalizable person Re-ID. Extensive experiments on benchmark datasets demonstrate the competitive performance of our method relative to previous approaches. We hope this method will advance research on visual-semantic representation learning. Our source code is publicly available at https://github.com/JeremyXSC/MMET.
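To illustrate what the MMM strategy amounts to, here is a minimal sketch of random masking applied to both modalities before the transformer; the shapes, mask ratio, and learned [MASK] embedding are assumptions, not details taken from the repository.

```python
# Illustrative masked multimodal modeling: hide a random fraction of image
# patches and text tokens, then have the transformer reconstruct them.
import torch

def mask_modalities(patches, tokens, mask_emb, ratio=0.15):
    """patches: (B, P, D) patch embeddings; tokens: (B, L, D) text embeddings;
    mask_emb: (D,) learned [MASK] embedding. Returns masked inputs and masks."""
    def apply(x):
        mask = torch.rand(x.shape[:2]) < ratio   # positions to hide
        x = x.clone()
        x[mask] = mask_emb                       # replace with the [MASK] embedding
        return x, mask                           # mask marks reconstruction targets
    return apply(patches), apply(tokens)

# Works on unimodal batches too: simply pass the one modality you have.
(patches_m, pmask), (tokens_m, tmask) = mask_modalities(
    torch.randn(8, 196, 512), torch.randn(8, 32, 512), torch.zeros(512))
```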
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic, multidisciplinary, in-depth review, oriented toward both academia and industry, is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and social aspects) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, and it allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to identify opportunities for contribution.
Ambiguous Medical Image Segmentation using Diffusion Models
Collective insights from a group of experts have always proven to outperform an individual's best diagnosis in clinical tasks. For the task of medical
image segmentation, existing research on AI-based alternatives focuses more on
developing models that can imitate the best individual rather than harnessing
the power of expert groups. In this paper, we introduce a single diffusion
model-based approach that produces multiple plausible outputs by learning a
distribution over group insights. Our proposed model generates a distribution
of segmentation masks by leveraging the inherent stochastic sampling process of
diffusion with only minimal additional learning. We demonstrate on three different medical image modalities (CT, ultrasound, and MRI) that our model is capable of producing several possible variants while capturing the frequencies
of their occurrences. Comprehensive results show that our proposed approach
outperforms existing state-of-the-art ambiguous segmentation networks in terms
of accuracy while preserving naturally occurring variation. We also propose a new metric that evaluates both the diversity and the accuracy of segmentation predictions, aligning with clinical practice's interest in collective insights.
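The core mechanism is simply repeated stochastic sampling: each reverse-diffusion pass yields one plausible mask, and the per-pixel frequency across samples approximates the distribution of expert opinions. A minimal sketch, with `sample_mask` standing in for one full reverse pass of a (hypothetical) trained model:

```python
# Sample an ensemble of segmentation masks and estimate per-pixel frequency.
import torch

def segmentation_distribution(sample_mask, image, n_samples=16):
    """sample_mask(image) -> one binary mask; returns all masks + pixel frequency."""
    masks = torch.stack([sample_mask(image) for _ in range(n_samples)])
    frequency = masks.float().mean(dim=0)   # fraction of samples marking each pixel
    return masks, frequency

# Pixels with frequency near 0 or 1 are unambiguous; intermediate values flag
# regions where plausible expert annotations would disagree.
```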
Sign Language Translation from Instructional Videos
The advances in automatic sign language translation (SLT) to spoken languages
have been mostly benchmarked with datasets of limited size and restricted
domains. Our work advances the state of the art by providing the first baseline
results on How2Sign, a large and broad dataset.
We train a Transformer over I3D video features, using the reduced BLEU as a
reference metric for validation, instead of the widely used BLEU score. We
report a BLEU score of 8.03, and publish the first open-source implementation of its kind to promote further advances.
Paper accepted at WiCV @ CVPR 2023.
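As a rough sketch of the described pipeline (assumed dimensions and vocabulary size; not the authors' released code), a standard Transformer can consume pre-extracted I3D clip features and be validated with a BLEU-style metric:

```python
# Illustrative SLT model: Transformer over pre-extracted I3D video features.
import torch
import torch.nn as nn

D_FEAT, D_MODEL, VOCAB = 1024, 512, 8000        # assumed I3D dim / model width / vocab

class SLTModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_FEAT, D_MODEL)  # map I3D features to model width
        self.tok = nn.Embedding(VOCAB, D_MODEL) # target-token embeddings
        self.transformer = nn.Transformer(D_MODEL, batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, feats, tgt):
        # feats: (B, T, D_FEAT) clip features; tgt: (B, L) token ids.
        # A causal target mask is omitted here for brevity.
        return self.out(self.transformer(self.proj(feats), self.tok(tgt)))

# Validation would decode hypotheses and score them against references,
# e.g. with sacrebleu.corpus_bleu(hypotheses, [references]).
```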