A summary of the 2012 JHU CLSP Workshop on Zero Resource Speech Technologies and Models of Early Language Acquisition
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies. (5 pages)
Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation
This paper studies the problem of weakly open-vocabulary semantic
segmentation (WOVSS), which learns to segment objects of arbitrary classes
using mere image-text pairs. Existing works turn to enhance the vanilla vision
transformer by introducing explicit grouping recognition, i.e., employing
several group tokens/centroids to cluster the image tokens and perform the
group-text alignment. Nevertheless, these methods suffer from a granularity
inconsistency in the use of group tokens, which are aligned in an
all-to-one versus a one-to-one manner during the training and inference phases,
respectively. We argue that this discrepancy arises from the lack of elaborate
supervision for each group token. To bridge this granularity gap, this paper
explores explicit supervision for the group tokens from the prototypical
knowledge. To this end, this paper proposes the non-learnable prototypical
regularization (NPR) where non-learnable prototypes are estimated from source
features to serve as supervision and enable contrastive matching of the group
tokens. This regularization encourages the group tokens to segment objects with
less redundancy and capture more comprehensive semantic regions, leading to
increased compactness and richness. Based on NPR, we propose the prototypical
guidance segmentation network (PGSeg) that incorporates multi-modal
regularization by leveraging prototypical sources from both images and texts at
different levels, progressively enhancing the segmentation capability with
diverse prototypical patterns. Experimental results show that our proposed
method achieves state-of-the-art performance on several benchmark datasets. The
source code is available at https://github.com/Ferenas/PGSeg.
Comment: 14 pages, accepted at NeurIPS 2023
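The core idea of NPR, stripped to its essentials, can be sketched as follows: prototypes are estimated from source features by a non-learnable procedure (plain k-means here, which is an assumption; the paper's estimator may differ) and then serve as contrastive targets for the group tokens. The function names and the nearest-prototype assignment rule are illustrative, not the paper's exact formulation.

```python
import numpy as np

def estimate_prototypes(features, k, iters=10, seed=0):
    """Non-learnable prototypes: plain k-means over source features
    (illustrative choice of estimator)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each feature to its nearest prototype, then re-estimate
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def prototype_contrastive_loss(group_tokens, prototypes, tau=0.1):
    """InfoNCE-style matching: each group token is pulled toward its
    assigned prototype and pushed away from the others."""
    g = group_tokens / np.linalg.norm(group_tokens, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = g @ p.T / tau                       # (num_groups, k) similarities
    targets = logits.argmax(axis=1)              # nearest prototype as label
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(g)), targets].mean()
```

Because the prototypes carry no learnable parameters, gradients flow only into the group tokens, which is what makes them act as fixed supervision.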
Effective weakly supervised semantic frame induction using expression sharing in hierarchical hidden Markov models
We present a framework for the induction of semantic frames from utterances
in the context of an adaptive command-and-control interface. The system is
trained on an individual user's utterances and the corresponding semantic
frames representing controls. During training, no prior information on the
alignment between utterance segments and frame slots and values is available.
In addition, semantic frames in the training data can contain information that
is not expressed in the utterances. To tackle this weakly supervised
classification task, we propose a framework based on Hidden Markov Models
(HMMs). Structural modifications, resulting in a hierarchical HMM, and an
extension called expression sharing are introduced to minimize the amount of
training time and effort required for the user.
The dataset used for the present study is PATCOR, which contains commands
uttered in the context of a vocally guided card game, Patience. Experiments
were carried out on orthographic and phonetic transcriptions of commands,
segmented on different levels of n-gram granularity. The experimental results
show positive effects of all the studied system extensions, with some effect
differences between the different input representations. Moreover, evaluation
experiments on held-out data with the optimal system configuration show that
the extended system is able to achieve high accuracies with relatively small
amounts of training data.
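The hierarchical HMM described above builds on standard HMM inference; the core of any such system is the forward recursion, sketched below in log space for numerical stability. This is a generic textbook sketch under an assumed discrete-emission parameterization, not PATCOR-specific code, and `forward_loglik` is a hypothetical name.

```python
import numpy as np

def forward_loglik(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm.

    obs    : sequence of observation indices
    log_pi : (n_states,)         log initial-state probabilities
    log_A  : (n_states, n_states) log transition probabilities
    log_B  : (n_states, n_obs)   log emission probabilities
    """
    alpha = log_pi + log_B[:, obs[0]]            # initialization
    for o in obs[1:]:
        # alpha_t(j) = logsumexp_i( alpha_{t-1}(i) + log A[i, j] ) + log B[j, o]
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)            # marginalize final state
```

In the weakly supervised setting of the paper, recursions like this are run without any gold alignment between utterance segments and frame slots; the alignment is treated as a latent state path.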
MEGA: Multimodal Alignment Aggregation and Distillation For Cinematic Video Segmentation
Previous research has studied the task of segmenting cinematic videos into
scenes and into narrative acts. However, these studies have overlooked the
essential task of multimodal alignment and fusion for effectively and
efficiently processing long-form videos (>60 min). In this paper, we introduce
Multimodal alignmEnt aGgregation and distillAtion (MEGA) for cinematic
long-video segmentation. MEGA tackles the challenge by leveraging multiple
media modalities. The method coarsely aligns inputs of variable lengths and
different modalities with alignment positional encoding. To maintain temporal
synchronization while reducing computation, we further introduce an enhanced
bottleneck fusion layer which uses temporal alignment. Additionally, MEGA
employs a novel contrastive loss to synchronize and transfer labels across
modalities, enabling act segmentation from labeled synopsis sentences on video
shots. Our experimental results show that MEGA outperforms state-of-the-art
methods on the MovieNet dataset for scene segmentation (with an Average
Precision improvement of +1.19%) and on the TRIPOD dataset for act segmentation
(with a Total Agreement improvement of +5.51%).
Comment: accepted at ICCV 2023
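The label-transfer loss described above is contrastive across modalities. A generic symmetric InfoNCE between sentence and shot embeddings, which is a common formulation and not necessarily MEGA's exact loss, can be sketched as follows; `cross_modal_nce` and the temperature value are illustrative.

```python
import numpy as np

def cross_modal_nce(text_emb, video_emb, tau=0.07):
    """Symmetric InfoNCE over a batch: the matched (sentence_i, shot_i)
    pair is the positive; every other pairing in the batch is a negative."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = t @ v.T / tau                       # (batch, batch) similarities

    def ce(l):
        # cross-entropy with identity targets, computed stably in log space
        m = l.max(axis=1)
        lse = np.log(np.exp(l - m[:, None]).sum(axis=1)) + m
        return (lse - np.diag(l)).mean()

    # average the text-to-video and video-to-text directions
    return 0.5 * (ce(logits) + ce(logits.T))
```

Minimizing a loss of this shape pulls each labeled synopsis sentence toward its corresponding shots, which is what makes label transfer from sentences to shots possible.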
Multi-view constrained clustering with an incomplete mapping between views
Multi-view learning algorithms typically assume a complete bipartite mapping
between the different views in order to exchange information during the
learning process. However, many applications provide only a partial mapping
between the views, creating a challenge for current methods. To address this
problem, we propose a multi-view algorithm based on constrained clustering that
can operate with an incomplete mapping. Given a set of pairwise constraints in
each view, our approach propagates these constraints using a local similarity
measure to those instances that can be mapped to the other views, allowing the
propagated constraints to be transferred across views via the partial mapping.
It uses co-EM to iteratively estimate the propagation within each view based on
the current clustering model, transfer the constraints across views, and then
update the clustering model. By alternating the learning process between views,
this approach produces a unified clustering model that is consistent with all
views. We show that this approach significantly improves clustering performance
over several other methods for transferring constraints and allows multi-view
clustering to be reliably applied when given a limited mapping between the
views. Our evaluation reveals that the propagated constraints have high
precision with respect to the true clusters in the data, explaining their
benefit to clustering performance in both single- and multi-view learning
scenarios.
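The propagation step can be sketched with a simple rule: a must-link constraint (i, j) is extended to any point whose local similarity (a Gaussian kernel here) to one of its endpoints exceeds a threshold. This is a minimal illustration under assumed kernel and threshold choices, not the paper's exact similarity weighting or its co-EM loop.

```python
import numpy as np

def propagate_must_links(X, must_links, sigma=1.0, thresh=0.5):
    """Extend each must-link (i, j) to points locally similar to an endpoint.

    X          : (n, d) array of instances in one view
    must_links : iterable of index pairs (i, j)
    Returns the original plus propagated constraints as sorted index pairs.
    """
    def sim(a, b):
        # Gaussian local similarity between instances a and b
        return np.exp(-np.linalg.norm(X[a] - X[b]) ** 2 / (2 * sigma ** 2))

    out = {tuple(sorted(c)) for c in must_links}
    for (i, j) in must_links:
        for k in range(len(X)):
            if k != i and k != j:
                if sim(i, k) > thresh:           # k behaves like endpoint i
                    out.add(tuple(sorted((k, j))))
                if sim(j, k) > thresh:           # k behaves like endpoint j
                    out.add(tuple(sorted((i, k))))
    return out
```

Propagated constraints of this kind are what get transferred across views through the partial mapping, so their precision directly bounds how much the other view can benefit.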