Quantum generalized Reed-Solomon codes: Unified framework for quantum MDS codes
We construct a new family of quantum MDS codes from classical generalized
Reed-Solomon codes and derive the necessary and sufficient condition under
which these quantum codes exist. We also give code bounds and show how to
construct them analytically. We find that existing quantum MDS codes can be
unified under these codes in the sense that when a quantum MDS code exists,
then a quantum code of this type with the same parameters also exists. Thus as
far as is known at present, they are the most important family of quantum MDS
codes.
Comment: 9 pages, no figures
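The classical generalized Reed-Solomon (GRS) codes underlying this construction can be sketched concretely. The following is an illustrative example, not the paper's quantum construction: a GRS code GRS_k(a, v) encodes a message, viewed as the coefficients of a polynomial f of degree less than k, into the codeword (v_1 f(a_1), ..., v_n f(a_n)) over GF(p); such a code is MDS with minimum distance d = n - k + 1. All concrete parameter values below are arbitrary choices for the demo.

```python
# Illustrative sketch (assumption: small prime field, arbitrary demo
# parameters) of classical generalized Reed-Solomon encoding over GF(p).
p = 13                       # field size (prime, so GF(p) is just mod-p arithmetic)
n, k = 6, 3                  # code length and dimension; MDS distance d = n - k + 1 = 4
alphas = [1, 2, 3, 4, 5, 6]  # distinct evaluation points in GF(p)
vs     = [1, 1, 2, 3, 1, 5]  # nonzero column multipliers (the "generalized" part)

def grs_encode(msg):
    """Encode msg (k field elements = polynomial coefficients) into n symbols."""
    assert len(msg) == k
    def poly(x):
        # Evaluate f(x) = msg[0] + msg[1]*x + ... + msg[k-1]*x^(k-1) mod p.
        return sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
    return [(v * poly(a)) % p for a, v in zip(alphas, vs)]

cw = grs_encode([5, 0, 7])  # encode f(x) = 5 + 7x^2
```

Because any k of the n evaluations determine f by interpolation, any n - k erasures can be corrected, which is exactly the MDS property the paper's quantum codes inherit.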
Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity
Recent breakthroughs in natural language processing (NLP) have permitted the
synthesis and comprehension of coherent text in an open-ended way, therefore
translating the theoretical algorithms into practical applications. The large
language models (LLMs) have significantly impacted businesses such as report
summarization software and copywriters. Observations indicate, however, that
LLMs may exhibit social prejudice and toxicity, posing ethical and societal
risks when deployed irresponsibly. Large-scale benchmarks
for accountable LLMs should consequently be developed. Although several
empirical investigations reveal the existence of a few ethical difficulties in
advanced LLMs, there is little systematic examination and user study of the
risks and harmful behaviors of current LLM usage. To further educate future
efforts on constructing ethical LLMs responsibly, we perform a qualitative
research method called "red teaming" on OpenAI's ChatGPT (in this paper,
ChatGPT refers to the version released on Dec 15th) to better understand
the practical features of ethical dangers in recent LLMs. We analyze
ChatGPT comprehensively from four perspectives: 1) Bias, 2) Reliability,
3) Robustness, and 4) Toxicity. In accordance
with our stated viewpoints, we empirically benchmark ChatGPT on multiple sample
datasets. We find that a significant number of ethical risks cannot be
addressed by existing benchmarks, and hence illustrate them via additional case
studies. In addition, we examine the implications of our findings on AI ethics
and harmful behaviors of ChatGPT, as well as future problems and practical
design considerations for responsible LLMs. We believe that our findings may
shed light on future efforts to identify and mitigate the ethical hazards
posed by machines in LLM applications.
Comment: Technical Report
RAEDiff: Denoising Diffusion Probabilistic Models Based Reversible Adversarial Examples Self-Generation and Self-Recovery
Collected and annotated datasets, which are obtained through extensive
efforts, are effective for training Deep Neural Network (DNN) models. However,
these datasets are susceptible to misuse by unauthorized users, resulting
in infringement of Intellectual Property (IP) rights owned by the dataset
creators. Reversible Adversarial Examples (RAEs) can help to solve the issues
of IP protection for datasets. RAEs are adversarially perturbed images that can
be restored to the original. As a cutting-edge approach, the RAE scheme can serve
the purposes of preventing unauthorized users from engaging in malicious model
training, as well as ensuring the legitimate usage of authorized users.
Nevertheless, in the existing work, RAEs still rely on the embedded auxiliary
information for restoration, which may compromise their adversarial abilities.
In this paper, a novel self-generation and self-recovery method, named
RAEDiff, is introduced for generating RAEs based on a Denoising Diffusion
Probabilistic Model (DDPM). It diffuses datasets into a Biased Gaussian
Distribution (BGD) and utilizes the prior knowledge of the DDPM for generating
and recovering RAEs. The experimental results demonstrate that RAEDiff
effectively self-generates adversarial perturbations for DNN models, including
Artificial Intelligence Generated Content (AIGC) models, while also exhibiting
significant self-recovery capabilities.
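The forward diffusion step the abstract relies on can be sketched in a few lines. Note this is the standard (unbiased) DDPM forward process, not the paper's biased-Gaussian variant, and the schedule values are conventional defaults assumed for illustration: the closed-form marginal q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I) lets one jump from a clean sample directly to any noise level t.

```python
# Illustrative sketch (assumption: standard DDPM forward process with a
# linear beta schedule; the paper's biased-Gaussian variant differs).
import math
import random

T = 1000
# Linear noise schedule from 1e-4 to 0.02, a common DDPM default.
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# abar_t = product of (1 - beta_s) for s <= t; it decays from ~1 toward 0.
abar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    abar.append(prod)

def diffuse(x0, t, rng=random):
    """Sample x_t directly from x_0 via the closed-form forward marginal."""
    a = abar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for x in x0]

x0 = [1.0, -0.5, 0.25]      # a toy "image" of three pixels
xT = diffuse(x0, T - 1)     # near-pure Gaussian noise at the final step
```

Because abar_T is nearly zero, x_T retains almost no signal; recovery then depends on the learned reverse process, which is the prior knowledge RAEDiff exploits for restoration.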
CRISPR/Cas9-Facilitated Chromosome Engineering to Model Human Chromosomal Alterations
Rodents, particularly the mouse, have been used extensively for genetic modeling and analysis of human chromosomal alterations based on the syntenic conservation between the human and rodent genomes. In this article, we will discuss the emergence of CRISPR/Cas9-facilitated chromosome engineering techniques, which may open up a new avenue to study human diseases associated with chromosomal abnormalities, such as Down syndrome and cancer.