Alternating Direction Method of Multipliers Based on $\ell_{2,0}$-norm for Multiple Measurement Vector Problem
In this paper, we propose an alternating direction method of multipliers
(ADMM)-based optimization algorithm to achieve a better undersampling rate for
the multiple measurement vector (MMV) problem. The core idea is to introduce an
$\ell_{2,0}$-norm sparsity constraint to describe the joint sparsity of the MMV
problem, in contrast to the $\ell_{2,1}$-norm constraint widely used in
existing research. To illustrate the better performance of the
$\ell_{2,0}$-norm, this paper first proves the equivalence between the sparsity
of the row support set of a matrix and its $\ell_{2,0}$-norm. Afterward, the
MMV problem based on the $\ell_{2,0}$-norm is formulated. Moreover, building on the
Kurdyka-Lojasiewicz property, this paper establishes that the sequence
generated by ADMM globally converges to the optimal point of the MMV problem.
Finally, the performance of our algorithm and comparisons with other
algorithms under different conditions are studied through simulated examples.
Comment: 24 pages, 5 figures, 4 tables
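The abstract contrasts two row-sparsity measures common in MMV recovery; a minimal NumPy sketch, assuming the contrasted constraints are the standard $\ell_{2,0}$ "norm" (number of nonzero rows, equal to the row-support size) and its convex $\ell_{2,1}$ surrogate (sum of row norms), with the zero tolerance as an assumption:

```python
import numpy as np

def l20_norm(X, tol=1e-12):
    # Number of rows with nonzero Euclidean norm: the l_{2,0} "norm".
    # Equals the size of the row support set, i.e. the joint-sparsity level.
    return int(np.sum(np.linalg.norm(X, axis=1) > tol))

def l21_norm(X):
    # Sum of row-wise Euclidean norms: the convex l_{2,1} surrogate.
    return float(np.sum(np.linalg.norm(X, axis=1)))

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 1.0]])
print(l20_norm(X))  # 2 nonzero rows
print(l21_norm(X))  # 5.0 + 0.0 + 1.0 = 6.0
```

The equivalence the paper proves is visible here: counting rows whose $\ell_2$ norm is nonzero is exactly counting the row support set.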
Web Design for Low Bandwidth Areas
This study gives an overview of the issues and solutions involved in developing Web sites for low-bandwidth areas. It sheds light on web design, cross-cultural environments, low bandwidth, and mobile web design. It provides examples and potential solutions, from both design and technical perspectives, to low-bandwidth problems. Finally, a demo project was created to validate the analysis. Master of Science in Information Science
Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a
challenging task due to the absence of labels for the target-domain data. To
handle this problem, some recent works adopt clustering algorithms to off-line
generate pseudo labels, which can then be used as the supervision signal for
on-line feature learning in the target domain. However, the off-line generated
labels often contain lots of noise that significantly hinders the
discriminability of the on-line learned features, and thus limits the final UDA
re-ID performance. To this end, we propose a novel approach, called
Dual-Refinement, that jointly refines pseudo labels at the off-line clustering
phase and features at the on-line training phase, to alternately boost the
label purity and feature discriminability in the target domain for more
reliable re-ID. Specifically, at the off-line phase, a new hierarchical
clustering scheme is proposed, which selects representative prototypes for
every coarse cluster. Thus, labels can be effectively refined by using the
inherent hierarchical information of person images. In addition, at the on-line
phase, we propose an instant memory spread-out (IM-spread-out) regularization,
that takes advantage of the proposed instant memory bank to store sample
features of the entire dataset and enable spread-out feature learning over the
entire training data instantly. Our Dual-Refinement method reduces the
influence of noisy labels and refines the learned features within the
alternating training process. Experiments demonstrate that our method
outperforms state-of-the-art methods by a large margin.
Comment: 14 pages, 5 figures
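The off-line label-refinement step described above can be sketched in NumPy. This is a simplified stand-in, not the paper's exact hierarchical scheme: prototypes are chosen as the samples nearest each coarse-cluster centroid, and every sample is then relabeled by its nearest prototype, which corrects samples that landed in the wrong coarse cluster:

```python
import numpy as np

def refine_pseudo_labels(feats, coarse_labels, n_protos=2):
    # For each coarse cluster, pick the n_protos samples closest to the
    # cluster centroid as representative prototypes.
    protos, proto_labels = [], []
    for c in np.unique(coarse_labels):
        members = feats[coarse_labels == c]
        centroid = members.mean(axis=0)
        order = np.argsort(np.linalg.norm(members - centroid, axis=1))
        for i in order[:n_protos]:
            protos.append(members[i])
            proto_labels.append(c)
    protos = np.stack(protos)
    proto_labels = np.array(proto_labels)
    # Refine: reassign every sample to the label of its nearest prototype.
    dists = np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[dists.argmin(axis=1)]
```

With two well-separated feature blobs and one mislabeled sample, the mislabeled point ends up far from its cluster's prototypes and is reassigned to the correct label.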
Chinese Open Instruction Generalist: A Preliminary Release
Instruction tuning is widely recognized as a key technique for building
generalist language models, which has attracted the attention of researchers
and the public with the release of InstructGPT~\citep{ouyang2022training} and
ChatGPT\footnote{\url{https://chat.openai.com/}}. Despite impressive progress
in English-oriented large-scale language models (LLMs), it remains
under-explored whether English-based foundation LLMs can, with well-designed
instruction tuning, perform comparably on multilingual tasks as on English
tasks, and how the corpora needed for such tuning can be constructed.
To remedy this gap, we propose this project as an attempt to create a Chinese
instruction dataset by various methods adapted to the intrinsic characteristics
of 4 sub-tasks. We collect around 200k Chinese instruction tuning samples,
which have been manually checked to guarantee high quality. We also summarize
the existing English and Chinese instruction corpora and briefly describe some
potential applications of the newly constructed Chinese instruction corpora.
The resulting \textbf{C}hinese \textbf{O}pen \textbf{I}nstruction
\textbf{G}eneralist (\textbf{COIG}) corpora are available in
Huggingface\footnote{\url{https://huggingface.co/datasets/BAAI/COIG}} and
Github\footnote{\url{https://github.com/FlagOpen/FlagInstruct}}, and will be
continuously updated.
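Instruction-tuning corpora such as COIG pair an instruction with an optional input and a target output; a minimal sketch of rendering one such record into a training prompt (the field names and prompt template are illustrative assumptions, not necessarily COIG's actual schema):

```python
def format_instruction_sample(sample):
    # Render one instruction-tuning record into a single prompt string.
    # Field names ("instruction", "input", "output") are assumed here,
    # not taken from COIG's published schema.
    parts = [f"Instruction: {sample['instruction']}"]
    if sample.get("input"):
        parts.append(f"Input: {sample['input']}")
    parts.append(f"Response: {sample['output']}")
    return "\n".join(parts)

sample = {"instruction": "Translate to English.",
          "input": "你好",
          "output": "Hello"}
print(format_instruction_sample(sample))
```

During fine-tuning, prompts like this are tokenized and the model is trained to generate the response portion given the instruction and input.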