25 research outputs found
AudioFormer: Audio Transformer learns audio feature representations from discrete acoustic codes
We propose a method named AudioFormer, which learns audio feature
representations through the acquisition of discrete acoustic codes and
subsequently fine-tunes them for audio classification tasks. Initially, we
introduce a novel perspective by considering the audio classification task as a
form of natural language understanding (NLU). Leveraging an existing neural
audio codec model, we generate discrete acoustic codes and use them to train
a masked language model (MLM), thereby obtaining audio feature representations.
Furthermore, we pioneer the integration of a Multi-Positive sample Contrastive
(MPC) learning approach. This method enables the learning of joint
representations among multiple discrete acoustic codes within the same audio
input. In our experiments, we treat discrete acoustic codes as textual data and
train a masked language model using a cloze-like methodology, ultimately
deriving high-quality audio representations. Notably, the MPC learning technique
effectively captures collaborative representations among distinct positive
samples. Our results demonstrate that AudioFormer achieves significantly
better performance than prevailing monomodal audio classification models
across multiple datasets, and even outperforms audio-visual multimodal
classification models on select datasets. Specifically, our approach achieves
performance scores of 53.9, 45.1, and 65.6 on AudioSet-2M, AudioSet-20K, and
FSD50K, respectively. We have openly shared both the code and models:
https://github.com/LZH-0225/AudioFormer.git
Comment: 9 pages, 4 figures
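The MPC objective described above treats the multiple discrete code streams
derived from one audio clip as mutual positives. A minimal sketch of such a
multi-positive contrastive loss is given below; this is an illustrative toy,
not AudioFormer's implementation, and the function name, loss form, and
temperature value are assumptions:

```python
import numpy as np

def mpc_loss(embeddings, clip_ids, temperature=1.0):
    """Toy multi-positive contrastive loss (sketch, not the paper's code).

    embeddings: (N, D) representations of discrete acoustic code streams;
    clip_ids:   (N,) ids of the audio clip each stream came from, so
                streams of the same clip are mutual positives.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature              # pairwise cosine similarities
    np.fill_diagonal(sim, -1e9)              # exclude self-pairs from softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = clip_ids[:, None] == clip_ids[None, :]
    np.fill_diagonal(pos, False)
    # average -log p(positive | anchor) over every positive of each anchor
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)
    return per_anchor.mean()

# two clips, two code-stream embeddings each
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
ids = np.array([0, 0, 1, 1])
loss = mpc_loss(z, ids)                      # ≈ 0.551 = -log(e / (e + 2))
```

Pulling the positives of each anchor toward it in the softmax is what lets the
model learn a joint representation across the parallel codebook streams.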
Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
Communication systems to date primarily aim at reliably communicating bit
sequences. Such an approach provides efficient engineering designs that are
agnostic to the meanings of the messages or to the goal that the message
exchange aims to achieve. Next generation systems, however, can be potentially
enriched by folding message semantics and goals of communication into their
design. Further, these systems can be made cognizant of the context in which
communication exchange takes place, providing avenues for novel design
insights. This tutorial summarizes the efforts to date, starting from its early
adaptations, semantic-aware and task-oriented communications, covering the
foundations, algorithms and potential implementations. The focus is on
approaches that utilize information theory to provide the foundations, as well
as the significant role of learning in semantics and task-aware communications.
Comment: 28 pages, 14 figures
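The gap between reliably conveying bits and conveying only what the task
needs can be made concrete with a toy rate comparison: a source whose symbols
cost 3 bits each under classical source coding, where the receiver's task
depends only on a binary attribute of each symbol. The mapping and numbers
below are illustrative assumptions, not an example from the tutorial:

```python
import math
from collections import Counter

# Source: 8 equiprobable messages; classical source coding needs
# H(source) = 3 bits/symbol. The receiver's task only needs the parity
# of the message, so a task-oriented design needs only H(task) bits.
messages = range(8)

def task(m):
    return m % 2          # assumed task: receiver only cares about parity

def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

source_rate = entropy([1 / 8] * 8)            # 3.0 bits/symbol
counts = Counter(task(m) for m in messages)
task_probs = [c / 8 for c in counts.values()]
task_rate = entropy(task_probs)               # 1.0 bit/symbol
```

The task-relevant entropy lower-bounds the rate a task-oriented system must
sustain, which is how information theory enters the designs the tutorial
surveys.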
Methods for efficient object categorization, detection, scene recognition, and image search
In the past few years there has been a tremendous growth in the usage of digital images. Users can now access millions of photos, a fact that creates the need for methods that can efficiently and effectively search for the visual information of interest. In this thesis, we propose methods to learn image representations that compactly represent a large collection of images, enabling accurate image recognition with linear classification models, which offer the advantage of being efficient to both train and test. The entries of our descriptors are the outputs of a set of basis classifiers evaluated on the image, which capture the presence or absence of a set of high-level visual concepts. We propose two different techniques to automatically discover the visual concepts and learn the basis classifiers from a given labeled dataset of pictures, producing descriptors that highly discriminate the original categories of the dataset. We empirically show that these descriptors are able to encode new unseen pictures, and produce state-of-the-art results in conjunction with cheap linear classifiers. We describe several strategies to aggregate the outputs of basis classifiers evaluated on multiple subwindows of the image in order to handle cases where the photo contains multiple objects and large amounts of clutter. We extend this framework to the task of object detection, where the goal is to spatially localize an object within an image. We use the output of a collection of detectors trained in an offline stage as features for new detection problems, showing competitive results with the current state of the art. Since generating rich manual annotations for an image dataset is a crucial limitation of modern methods in object localization and detection, in this thesis we also propose a method to automatically generate training data for an object detector in a weakly-supervised fashion, yielding considerable savings in human annotation effort.
We show that our automatically-generated regions can be used to train object detectors with recognition results remarkably close to those obtained by training on manually annotated bounding boxes.
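The descriptor construction the abstract outlines, an image encoded as the
vector of scores from a bank of basis classifiers, can be sketched as follows.
The random weights stand in for both the learned basis classifiers and the raw
image features; all names and dimensions here are illustrative assumptions,
not the thesis's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def basis_descriptor(features, basis_weights, basis_bias):
    """Map raw image features (D,) to a K-dim descriptor of classifier
    scores, one per high-level visual concept."""
    return features @ basis_weights + basis_bias

D, K = 128, 16                     # raw feature dim, number of basis classifiers
W = rng.standard_normal((D, K))    # stand-in for learned classifier weights
b = rng.standard_normal(K)         # stand-in for learned classifier biases

image_features = rng.standard_normal(D)   # stand-in for a raw image encoding
descriptor = basis_descriptor(image_features, W, b)   # shape (16,)
```

A cheap linear model trained on such K-dimensional descriptors, rather than
on the D-dimensional raw features, is what makes training and testing on
large collections efficient.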