7 research outputs found
Baryons and baryonic matter in the large Nc and heavy quark limits
This paper explores properties of baryons and finite density baryonic matter
in an artificial world in which Nc, the number of colors, is large and the
quarks of all species are degenerate and much larger than {\Lambda}_QCD. It has
long been known that in large Nc QCD, baryons composed entirely of heavy quarks
are accurately described in the mean-field approximation. However, the detailed
properties of baryons in the combined large Nc and heavy quark limits have not
been fully explored. Here some basic properties of baryons are computed using a
variational approach. At leading order in both the large Nc and heavy quark
expansions the baryon mass is computed explicitly as is the baryon form factor.
Baryonic matter, the analog of nuclear matter in this artificial world, should
also be well described in the mean-field approximation. In the special case
where all baryons have an identical spin flavor structure, it is shown that in
the formal heavy quark and large Nc limit interactions between baryons are
strictly repulsive at low densities. The energy per baryon is computed in this
limit and found to be exponentially small. It is shown that when the
restriction to baryons with an identical spin-flavor structure is dropped, a
phase of baryonic matter exists with a density of 2Nf times that for the
restricted case but with the same energy (where Nf is the number of degenerate
flavors). It is shown that this phase is at least metastable.Comment: 19 page
Towards Understanding Sycophancy in Language Models
Human feedback is commonly utilized to finetune AI assistants. But human
feedback may also encourage model responses that match user beliefs over
truthful ones, a behaviour known as sycophancy. We investigate the prevalence
of sycophancy in models whose finetuning procedure made use of human feedback,
and the potential role of human preference judgments in such behavior. We first
demonstrate that five state-of-the-art AI assistants consistently exhibit
sycophancy across four varied free-form text-generation tasks. To understand if
human preferences drive this broadly observed behavior, we analyze existing
human preference data. We find that when a response matches a user's views, it
is more likely to be preferred. Moreover, both humans and preference models
(PMs) prefer convincingly-written sycophantic responses over correct ones a
non-negligible fraction of the time. Optimizing model outputs against PMs also
sometimes sacrifices truthfulness in favor of sycophancy. Overall, our results
indicate that sycophancy is a general behavior of state-of-the-art AI
assistants, likely driven in part by human preference judgments favoring
sycophantic responses.Comment: 32 pages, 20 figure
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
We describe our early efforts to red team language models in order to
simultaneously discover, measure, and attempt to reduce their potentially
harmful outputs. We make three main contributions. First, we investigate
scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B
parameters) and 4 model types: a plain language model (LM); an LM prompted to
be helpful, honest, and harmless; an LM with rejection sampling; and a model
trained to be helpful and harmless using reinforcement learning from human
feedback (RLHF). We find that the RLHF models are increasingly difficult to red
team as they scale, and we find a flat trend with scale for the other model
types. Second, we release our dataset of 38,961 red team attacks for others to
analyze and learn from. We provide our own analysis of the data and find a
variety of harmful outputs, which range from offensive language to more subtly
harmful non-violent unethical outputs. Third, we exhaustively describe our
instructions, processes, statistical methodologies, and uncertainty about red
teaming. We hope that this transparency accelerates our ability to work
together as a community in order to develop shared norms, practices, and
technical standards for how to red team language models
Language Models (Mostly) Know What They Know
We study whether language models can evaluate the validity of their own
claims and predict which questions they will be able to answer correctly. We
first show that larger models are well-calibrated on diverse multiple choice
and true/false questions when they are provided in the right format. Thus we
can approach self-evaluation on open-ended sampling tasks by asking models to
first propose answers, and then to evaluate the probability "P(True)" that
their answers are correct. We find encouraging performance, calibration, and
scaling for P(True) on a diverse array of tasks. Performance at self-evaluation
further improves when we allow models to consider many of their own samples
before predicting the validity of one specific possibility. Next, we
investigate whether models can be trained to predict "P(IK)", the probability
that "I know" the answer to a question, without reference to any particular
proposed answer. Models perform well at predicting P(IK) and partially
generalize across tasks, though they struggle with calibration of P(IK) on new
tasks. The predicted P(IK) probabilities also increase appropriately in the
presence of relevant source materials in the context, and in the presence of
hints towards the solution of mathematical word problems. We hope these
observations lay the groundwork for training more honest models, and for
investigating how honesty generalizes to cases where models are trained on
objectives other than the imitation of human writing.Comment: 23+17 pages; refs added, typos fixe
Specific versus General Principles for Constitutional AI
Human feedback can prevent overtly harmful utterances in conversational
models, but may not automatically mitigate subtle problematic behaviors such as
a stated desire for self-preservation or power. Constitutional AI offers an
alternative, replacing human feedback with feedback from AI models conditioned
only on a list of written principles. We find this approach effectively
prevents the expression of such behaviors. The success of simple principles
motivates us to ask: can models learn general ethical behaviors from only a
single written principle? To test this, we run experiments using a principle
roughly stated as "do what's best for humanity". We find that the largest
dialogue models can generalize from this short constitution, resulting in
harmless assistants with no stated interest in specific motivations like power.
A general principle may thus partially avoid the need for a long list of
constitutions targeting potentially harmful behaviors. However, more detailed
constitutions still improve fine-grained control over specific types of harms.
This suggests both general and specific principles have value for steering AI
safely
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Humans are capable of strategically deceptive behavior: behaving helpfully in
most situations, but then behaving very differently in order to pursue
alternative objectives when given the opportunity. If an AI system learned such
a deceptive strategy, could we detect it and remove it using current
state-of-the-art safety training techniques? To study this question, we
construct proof-of-concept examples of deceptive behavior in large language
models (LLMs). For example, we train models that write secure code when the
prompt states that the year is 2023, but insert exploitable code when the
stated year is 2024. We find that such backdoor behavior can be made
persistent, so that it is not removed by standard safety training techniques,
including supervised fine-tuning, reinforcement learning, and adversarial
training (eliciting unsafe behavior and then training to remove it). The
backdoor behavior is most persistent in the largest models and in models
trained to produce chain-of-thought reasoning about deceiving the training
process, with the persistence remaining even when the chain-of-thought is
distilled away. Furthermore, rather than removing backdoors, we find that
adversarial training can teach models to better recognize their backdoor
triggers, effectively hiding the unsafe behavior. Our results suggest that,
once a model exhibits deceptive behavior, standard techniques could fail to
remove such deception and create a false impression of safety.Comment: updated to add missing acknowledgement