105 research outputs found

    Alternating Direction Method of Multipliers Based on ℓ2,0-norm for Multiple Measurement Vector Problem

    In this paper, we propose an alternating direction method of multipliers (ADMM)-based optimization algorithm to achieve a better undersampling rate for the multiple measurement vector (MMV) problem. The core idea is to introduce the ℓ2,0-norm sparsity constraint to describe the joint sparsity of the MMV problem, in contrast to the ℓ2,1-norm constraint widely used in existing research. To illustrate the better performance of the ℓ2,0-norm, this paper first proves the equivalence between the sparsity of the row support set of a matrix and its ℓ2,0-norm. Afterward, the MMV problem based on the ℓ2,0-norm is formulated. Moreover, building on the Kurdyka-Łojasiewicz property, this paper establishes that the sequence generated by ADMM converges globally to the optimal point of the MMV problem. Finally, the performance of our algorithm and its comparison with other algorithms under different conditions are studied through simulated examples. Comment: 24 pages, 5 figures, 4 tables
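    The equivalence stated in the abstract — that the ℓ2,0-norm of a matrix equals the size of its row support set — can be sketched numerically. The helper below is an illustrative implementation, not the paper's code; the tolerance parameter is an assumption needed for floating-point comparison.

    ```python
    import numpy as np

    def l20_norm(X, tol=1e-12):
        """ℓ2,0-norm of X: the number of rows with nonzero ℓ2 norm,
        i.e. the size of the row support set."""
        row_norms = np.linalg.norm(X, axis=1)  # ℓ2 norm of each row
        return int(np.count_nonzero(row_norms > tol))

    # A jointly sparse matrix: only rows 1 and 3 are nonzero,
    # so the row support set is {1, 3} and the ℓ2,0-norm is 2.
    X = np.zeros((5, 3))
    X[1] = [1.0, 0.0, 2.0]
    X[3] = [0.0, -0.5, 0.0]
    print(l20_norm(X))  # 2
    ```

    In the MMV setting this is the quantity the joint-sparsity constraint bounds: all measurement vectors share the same small row support.
    
    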

    Web Design for Low Bandwidth Areas

    This study gives an overview of the issues and solutions involved in developing Web sites for low bandwidth areas. It sheds light on the fields of web design, cross-cultural environments, low bandwidth, and mobile web design. It provides examples and potential solutions, from both design and technical perspectives, to low bandwidth problems. Finally, a demo project was created to validate the analysis. Master of Science in Information Science

    Chinese Open Instruction Generalist: A Preliminary Release

    Instruction tuning is widely recognized as a key technique for building generalist language models, and it has attracted the attention of researchers and the public with the release of InstructGPT (Ouyang et al., 2022) and ChatGPT (https://chat.openai.com/). Despite impressive progress in English-oriented large language models (LLMs), it remains under-explored whether English-based foundation LLMs, given well-designed instruction tuning, can perform on multilingual tasks as well as they do on English tasks, and how the corpora needed for such tuning can be constructed. To remedy this gap, we propose this project as an attempt to create a Chinese instruction dataset using various methods adapted to the intrinsic characteristics of 4 sub-tasks. We collect around 200k Chinese instruction tuning samples, which have been manually checked to guarantee high quality. We also summarize existing English and Chinese instruction corpora and briefly describe some potential applications of the newly constructed Chinese instruction corpora. The resulting Chinese Open Instruction Generalist (COIG) corpora are available on Huggingface (https://huggingface.co/datasets/BAAI/COIG) and Github (https://github.com/FlagOpen/FlagInstruct), and will be continuously updated.
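    The manual quality check described above can be partly mechanized. The sketch below filters instruction-tuning records with a minimal validity rule; the field names (`instruction`, `input`, `output`) are assumptions for illustration and not necessarily COIG's actual schema.

    ```python
    # Hypothetical sketch: filtering instruction-tuning records before use.
    # The record fields below are illustrative assumptions, not the real
    # COIG schema.

    def is_valid(record):
        """Keep records that have both a non-empty instruction and output."""
        return bool(record.get("instruction")) and bool(record.get("output"))

    samples = [
        {"instruction": "将下面的句子翻译成英文。",  # "Translate into English."
         "input": "你好", "output": "Hello."},
        {"instruction": "", "input": "", "output": "无"},  # missing instruction
    ]

    clean = [r for r in samples if is_valid(r)]
    print(len(clean))  # 1
    ```

    In practice a real pipeline would layer further checks (length limits, deduplication, language identification) on top of this basic filter.
    
    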