A Max-relevance-min-divergence Criterion for Data Discretization with Applications on Naive Bayes
In many classification models, data is discretized to better estimate its
distribution. Existing discretization methods often aim to maximize the
discriminant power of the discretized data, while overlooking the fact that the
primary goal of discretization in classification is to improve
generalization performance. As a result, the data tend to be over-split into
many small bins since the data without discretization retain the maximal
discriminant information. Thus, we propose a Max-Dependency-Min-Divergence
(MDmD) criterion that maximizes both the discriminant information and
generalization ability of the discretized data. More specifically, the
Max-Dependency criterion maximizes the statistical dependency between the
discretized data and the classification variable while the Min-Divergence
criterion explicitly minimizes the JS-divergence between the training data and
the validation data for a given discretization scheme. The proposed MDmD
criterion is technically appealing, but it is difficult to reliably estimate
the high-order joint distributions of attributes and the classification
variable. We hence further propose a more practical solution,
Max-Relevance-Min-Divergence (MRmD) discretization scheme, where each attribute
is discretized separately, by simultaneously maximizing the discriminant
information and the generalization ability of the discretized data. The
proposed MRmD is compared with the state-of-the-art discretization algorithms
under the naive Bayes classification framework on 45 machine-learning benchmark
datasets. It significantly outperforms all the compared methods on most of the
datasets.
Comment: Under major revision at Pattern Recognition
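As a rough illustration of the per-attribute MRmD idea (not the paper's implementation; the function and parameter names below are illustrative), a candidate discretization can be scored by the mutual information between the binned attribute and the class label, penalized by the JS-divergence between the training and validation bin distributions:
    import numpy as np
    from sklearn.metrics import mutual_info_score
    from scipy.spatial.distance import jensenshannon

    def mrmd_score(train_x, train_y, valid_x, cut_points, alpha=1.0):
        """Higher is better: maximize relevance, minimize divergence (sketch)."""
        bins = np.concatenate(([-np.inf], np.sort(cut_points), [np.inf]))
        train_bins = np.digitize(train_x, bins)
        valid_bins = np.digitize(valid_x, bins)
        # Max-Relevance: dependency between the discretized attribute and the class.
        relevance = mutual_info_score(train_bins, train_y)
        # Min-Divergence: shift of bin frequencies between training and validation.
        n_bins = len(bins) - 1
        p = np.bincount(train_bins, minlength=n_bins + 1)[1:] / len(train_x)
        q = np.bincount(valid_bins, minlength=n_bins + 1)[1:] / len(valid_x)
        divergence = jensenshannon(p, q) ** 2  # squared JS distance = JS divergence
        return relevance - alpha * divergence
A greedy search over candidate cut points that keeps only score-improving splits would then avoid the over-splitting described above.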
RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation
Large Language Models (LLMs) exhibit remarkable capabilities but are prone to
generating inaccurate or hallucinatory responses. This limitation stems from
their reliance on vast pretraining datasets, making them susceptible to errors
in unseen scenarios. Retrieval-Augmented Generation (RAG) addresses this by
incorporating external, relevant documents into the
response generation process, thus leveraging non-parametric knowledge alongside
LLMs' in-context learning abilities. However, existing RAG implementations
primarily focus on initial input for context retrieval, overlooking the nuances
of ambiguous or complex queries that necessitate further clarification or
decomposition for accurate responses. To this end, we propose Learning to
Refine Queries for Retrieval-Augmented Generation (RQ-RAG), equipping the model
with capabilities for explicit
rewriting, decomposition, and disambiguation. Our experimental results indicate
that our method, when applied to a 7B Llama2 model, surpasses the previous
state-of-the-art (SOTA) by an average of 1.9\% across three single-hop QA
datasets, and also demonstrates enhanced performance in handling complex,
multi-hop QA datasets. Our code is available at
https://github.com/chanchimin/RQ-RAG
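A minimal sketch of the refine-then-retrieve control flow described above (placeholder callables, not the released RQ-RAG code):
    from typing import Callable, List

    def rq_rag_answer(question: str,
                      refine: Callable[[str], List[str]],
                      retrieve: Callable[[str], List[str]],
                      generate: Callable[[str], str]) -> str:
        # 1) The refiner rewrites, decomposes, or disambiguates the raw query.
        sub_queries = refine(question) or [question]
        # 2) Retrieve evidence for every refined sub-query, not just the raw input.
        evidence: List[str] = []
        for q in sub_queries:
            evidence.extend(retrieve(q))
        # 3) Condition generation on the original question plus gathered context.
        context = "\n".join(dict.fromkeys(evidence))  # de-duplicate, keep order
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return generate(prompt)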
Chinese Open Instruction Generalist: A Preliminary Release
Instruction tuning is widely recognized as a key technique for building
generalist language models, which has attracted the attention of researchers
and the public with the release of InstructGPT~\citep{ouyang2022training} and
ChatGPT\footnote{\url{https://chat.openai.com/}}. Despite impressive progress
in English-oriented large-scale language models (LLMs), it remains
under-explored whether, with well-designed instruction tuning, English-based
foundation LLMs can perform as well on multilingual tasks as on English tasks,
and how to construct the corpora needed for such tuning.
To remedy this gap, we propose this project as an attempt to create a Chinese
instruction dataset using various methods adapted to the intrinsic
characteristics of four sub-tasks. We collect around 200k Chinese
instruction-tuning samples,
which have been manually checked to guarantee high quality. We also summarize
the existing English and Chinese instruction corpora and briefly describe some
potential applications of the newly constructed Chinese instruction corpora.
The resulting \textbf{C}hinese \textbf{O}pen \textbf{I}nstruction
\textbf{G}eneralist (\textbf{COIG}) corpora are available in
Huggingface\footnote{\url{https://huggingface.co/datasets/BAAI/COIG}} and
Github\footnote{\url{https://github.com/FlagOpen/FlagInstruct}}, and will be
continuously updated.
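For reference, a minimal loading sketch with the Hugging Face `datasets` library; the exact configuration and field names are assumptions, so consult the dataset card:
    from datasets import load_dataset

    # May require selecting a specific sub-corpus or data file; see the dataset card.
    coig = load_dataset("BAAI/COIG")
    print(coig)                          # available splits and columns
    first_split = next(iter(coig.values()))
    print(first_split[0])                # e.g. instruction/input/output-style fields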
LyricWhiz: Robust Multilingual Zero-shot Lyrics Transcription by Whispering to ChatGPT
We introduce LyricWhiz, a robust, multilingual, and zero-shot automatic
lyrics transcription method achieving state-of-the-art performance on various
lyrics transcription datasets, even in challenging genres such as rock and
metal. Our novel, training-free approach utilizes Whisper, a weakly supervised
robust speech recognition model, and GPT-4, today's most performant chat-based
large language model. In the proposed method, Whisper functions as the "ear" by
transcribing the audio, while GPT-4 serves as the "brain," acting as an
annotator with a strong performance for contextualized output selection and
correction. Our experiments show that LyricWhiz significantly reduces Word
Error Rate compared to existing methods in English and can effectively
transcribe lyrics across multiple languages. Furthermore, we use LyricWhiz to
create the first publicly available, large-scale, multilingual lyrics
transcription dataset with a CC-BY-NC-SA copyright license, based on
MTG-Jamendo, and offer a human-annotated subset for noise level estimation and
evaluation. We anticipate that our proposed method and dataset will advance the
development of multilingual lyrics transcription, a challenging and emerging
task.
Comment: 9 pages, 2 figures, 5 tables, accepted by ISMIR 2023
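A hedged sketch of the "ear + brain" pipeline: Whisper produces several candidate transcriptions and a chat LLM arbitrates between them. The prompt and the `ask_llm` helper are placeholders, not the paper's exact setup:
    import whisper

    def transcribe_candidates(audio_path: str, n_runs: int = 3):
        model = whisper.load_model("large-v2")
        # Decoding with temperature > 0 is stochastic, so repeated runs yield
        # diverse candidates for the LLM to select from and correct.
        return [model.transcribe(audio_path, temperature=0.7)["text"]
                for _ in range(n_runs)]

    def select_lyrics(candidates, ask_llm):
        prompt = ("Here are several noisy transcriptions of the same song.\n"
                  "Return the most plausible corrected lyrics.\n\n"
                  + "\n---\n".join(candidates))
        return ask_llm(prompt)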
On the Effectiveness of Speech Self-supervised Learning for Music
Self-supervised learning (SSL) has shown promising results in various speech
and natural language processing applications. However, its efficacy in music
information retrieval (MIR) still remains largely unexplored. While previous
SSL models pre-trained on music recordings have mostly been closed-source,
recent speech models such as wav2vec2.0 have shown promise in music modelling.
Nevertheless, research exploring the effectiveness of applying speech SSL
models to music recordings has been limited. We explore the music adaptation of
SSL with two distinct speech-related models, data2vec1.0 and HuBERT, and
refer to them as music2vec and musicHuBERT, respectively. We train SSL
models with 95M parameters under various pre-training configurations and
systematically evaluate the MIR task performances with 13 different MIR tasks.
Our findings suggest that training with music data can generally improve
performance on MIR tasks, even when models are trained using paradigms designed
for speech. However, we identify the limitations of such existing
speech-oriented designs, especially in modelling polyphonic information. Based
on the experimental results, empirical suggestions are also given for designing
future musical SSL strategies and paradigms.
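As an illustration of the evaluation protocol (frozen features plus a shallow probe), a speech HuBERT checkpoint from `transformers` can stand in for the music-trained variants; the checkpoint and probe below are assumptions, not the paper's released models:
    import torch
    from transformers import Wav2Vec2FeatureExtractor, HubertModel

    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
    backbone = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()

    def clip_embedding(waveform, sr=16000):
        inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
        with torch.no_grad():
            hidden = backbone(**inputs).last_hidden_state   # (1, frames, 768)
        return hidden.mean(dim=1)                           # mean-pooled clip vector

    probe = torch.nn.Linear(768, 10)   # e.g. a 10-class genre probe (illustrative)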
AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling
We introduce AnyGPT, an any-to-any multimodal language model that utilizes
discrete representations for the unified processing of various modalities,
including speech, text, images, and music. AnyGPT can be trained stably without
any alterations to the current large language model (LLM) architecture or
training paradigms. Instead, it relies exclusively on data-level preprocessing,
facilitating the seamless integration of new modalities into LLMs, akin to the
incorporation of new languages. We build a multimodal text-centric dataset for
multimodal alignment pre-training. Utilizing generative models, we synthesize
the first large-scale any-to-any multimodal instruction dataset. It consists of
108k samples of multi-turn conversations that intricately interweave various
modalities, thus equipping the model to handle arbitrary combinations of
multimodal inputs and outputs. Experimental results demonstrate that AnyGPT is
capable of facilitating any-to-any multimodal conversation while achieving
performance comparable to specialized models across all modalities, proving
that discrete representations can effectively and conveniently unify multiple
modalities within a language model. Demos are shown at
https://junzhan2000.github.io/AnyGPT.github.io/
Comment: 28 pages, 16 figures, under review, work in progress
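Schematically, the data-level unification amounts to quantizing every modality into discrete tokens and interleaving them into one sequence with boundary markers; the toy tokenizers below are placeholders for the real text, speech, image, and music tokenizers:
    def build_sequence(segments, tokenizers):
        """segments: list of (modality, payload); tokenizers map a modality to a
        callable returning discrete token ids from that modality's codebook."""
        sequence = []
        for modality, payload in segments:
            sequence.append(f"<{modality}>")                 # modality start marker
            sequence.extend(tokenizers[modality](payload))   # discrete code indices
            sequence.append(f"</{modality}>")                # modality end marker
        return sequence

    toy = {"text": lambda s: list(s.encode("utf-8")),
           "image": lambda img: [7, 42, 3],      # stand-in for VQ codebook indices
           "speech": lambda wav: [18, 5, 91]}    # stand-in for codec tokens
    print(build_sequence([("text", "hi"), ("image", None)], toy))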
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
Self-supervised learning (SSL) has recently emerged as a promising paradigm
for training generalisable models on large-scale data in the fields of vision,
text, and speech. Although SSL has been proven effective in speech and audio,
its application to music audio has yet to be thoroughly explored. This is
primarily due to the distinctive challenges associated with modelling musical
knowledge, particularly the tonal and pitched characteristics of music. To
address this research gap, we propose an acoustic Music undERstanding model
with large-scale self-supervised Training (MERT), which incorporates teacher
models to provide pseudo labels for masked language modelling (MLM)-style
acoustic pre-training. In our exploration, we identified a superior combination
of teacher models that outperforms conventional speech and audio approaches.
This combination includes an acoustic teacher based on
Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical
teacher based on the Constant-Q Transform (CQT). These teachers effectively
guide our student model, a BERT-style transformer encoder, to better model
music audio. In addition, we introduce an in-batch noise mixture augmentation
to enhance the representation robustness. Furthermore, we explore a wide range
of settings to overcome the instability in acoustic language model
pre-training, which allows our designed paradigm to scale from 95M to 330M
parameters. Experimental results indicate that our model can generalise and
perform well on 14 music understanding tasks and attains state-of-the-art
(SOTA) overall scores. The code and models are online:
https://github.com/yizhilll/MERT
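To make the musical-teacher idea concrete, a hedged sketch of CQT-based targets for masked frames is shown below (librosa for the transform; shapes and the masking scheme are illustrative, not the released MERT recipe):
    import numpy as np
    import librosa
    import torch
    import torch.nn.functional as F

    def cqt_targets(waveform, sr=24000, hop_length=480):
        # Constant-Q Transform magnitudes as frame-level regression targets.
        cqt = np.abs(librosa.cqt(waveform, sr=sr, hop_length=hop_length))
        return torch.from_numpy(cqt.T).float()               # (frames, n_bins)

    def masked_cqt_loss(student_frames, targets, mask):
        """student_frames: (frames, n_bins) predictions; mask: bool (frames,)."""
        return F.mse_loss(student_frames[mask], targets[mask])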
COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning
Recently, there have been significant advancements in large language models
(LLMs), particularly focused on the English language. These advancements have
enabled these LLMs to understand and execute complex instructions with
unprecedented accuracy and fluency. However, despite these advancements, there
remains a noticeable gap in the development of Chinese instruction tuning. The
unique linguistic features and cultural depth of the Chinese language pose
challenges for instruction tuning tasks. Existing datasets are either derived
from English-centric LLMs or are ill-suited for aligning with the interaction
patterns of real-world Chinese users. To bridge this gap, we introduce
COIG-CQIA, a high-quality Chinese instruction tuning dataset. Our aim is to
build a diverse, wide-ranging instruction-tuning dataset to better align model
behavior with human interactions. To this end, we collect a high-quality
human-written corpus from various sources on the Chinese Internet, including
Q&A communities, Wikis, examinations, and existing NLP datasets. This corpus
was rigorously filtered and carefully processed to form the COIG-CQIA dataset.
Furthermore, we train models of various scales on different subsets of CQIA
and conduct in-depth evaluation and analyses. The findings from our experiments
offer valuable insights for selecting and developing Chinese instruction-tuning
datasets. We also find that models trained on CQIA-Subset achieve competitive
results in human assessment as well as knowledge and security benchmarks. Data
are available at https://huggingface.co/datasets/m-a-p/COIG-CQI