24 research outputs found
Whose Town? The Rise of the Elite in Augustan Pompeii
During the Augustan period, Pompeii’s elite restructured city landmarks to augment their own power. This paper studies the intersection of class and urban geography at this key moment in Pompeii’s history, identifying how changes to physical landmarks benefited or disadvantaged multiple classes of Pompeian residents. Although the impact of the rise of Augustus on the city of Rome has been studied extensively, this paper supplements that research by studying physical changes within the south Italian setting of Pompeii. In the Augustan period, Pompeii’s urban environment increasingly emphasized major public spaces and elite-dominated monumental architecture over earlier neighborhood landmarks that gave prestige to multiple classes. Due to this shift, the power of Pompeii’s many non-elite classes decreased throughout the town while the elite capitalized on urban changes to increase their influence over Pompeii. Augustan Pompeii transitioned from a mixed-power to an elite-dominated city.
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Building on Petroni et al. (2019), we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs). (1) Negation. We find that PLMs do not distinguish between negated (“Birds cannot [MASK]”) and non-negated (“Birds can [MASK]”) cloze questions. (2) Mispriming. Inspired by priming methods in human psychology, we add “misprimes” to cloze questions (“Talk? Birds can [MASK]”). We find that PLMs are easily distracted by misprimes. These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge.
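The negation probe above can be sketched as a simple agreement check: if a model is sensitive to negation, its top prediction for a cloze question and for its negated counterpart should differ. The sketch below is illustrative only (not the authors' code); `predict_top_token` and the score dictionaries are hypothetical stand-ins for a real PLM's fill-mask output.

```python
def predict_top_token(cloze, model_outputs):
    """Return the highest-scoring token for a cloze question."""
    scores = model_outputs[cloze]
    return max(scores, key=scores.get)

def negation_agreement_rate(pairs, model_outputs):
    """Fraction of (positive, negated) cloze pairs that receive the SAME
    top prediction -- high agreement means the model ignores negation."""
    same = sum(
        predict_top_token(pos, model_outputs)
        == predict_top_token(neg, model_outputs)
        for pos, neg in pairs
    )
    return same / len(pairs)

# Fabricated scores illustrating the failure mode the abstract reports:
outputs = {
    "Birds can [MASK].":    {"fly": 0.8, "talk": 0.1},
    "Birds cannot [MASK].": {"fly": 0.7, "talk": 0.2},   # negation ignored
    "Fish can [MASK].":     {"swim": 0.9, "walk": 0.05},
    "Fish cannot [MASK].":  {"walk": 0.6, "swim": 0.3},  # negation respected
}
pairs = [("Birds can [MASK].", "Birds cannot [MASK]."),
         ("Fish can [MASK].", "Fish cannot [MASK].")]

print(negation_agreement_rate(pairs, outputs))  # 0.5
```

On real PLMs one would obtain the score dictionaries from a fill-mask head; the paper's finding corresponds to agreement rates far above chance.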
Hic Est Uxor Mihei: How Roman Funerary Portraits Carve the Ideal Freedwoman
This paper examines the depiction of Roman freedwomen (former slaves) in thirty-five late Republican and Augustan funerary portraits. Extant portraits utilize a complex visual and written vocabulary to reveal a wide variety of views of freedwomen’s status and agency. This paper relies upon analyses of the cultural climates of the late Republican and Augustan periods, careful interrogation of the material evidence through the lens of both post-structuralist and affective theory, and the use of case studies. Ultimately, it argues that funerary portraits create diverse representations of the ideal freedwoman that become part of an ongoing cultural dialogue concerning the place of freedwomen in Roman society.
Static Embeddings as Efficient Knowledge Bases?
Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structured knowledge base (KB) queries, masked sentences such as “Paris is the capital of [MASK]” are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6 percentage points better than BERT while using just 0.3% of the energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive, ability to compose meaningful representations from a much smaller subword vocabulary.
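The candidate-restricted nearest-neighbor probe described above can be sketched in a few lines: embed the query entity, then return the candidate whose static embedding is closest by cosine similarity. The vectors below are toy values chosen for illustration, not real fastText or BERT embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_candidate(query, candidates, embeddings):
    """Restrict the output space to `candidates` and return the one whose
    static embedding is most similar to the query's embedding."""
    q = embeddings[query]
    return max(candidates, key=lambda c: cosine(q, embeddings[c]))

# Toy static embeddings: capitals sit near their countries in the space.
emb = {
    "Paris":   [0.9, 0.1, 0.0],
    "France":  [0.8, 0.2, 0.1],
    "Germany": [0.1, 0.9, 0.0],
    "Japan":   [0.0, 0.1, 0.9],
}

print(nearest_candidate("Paris", ["France", "Germany", "Japan"], emb))  # France
```

The candidate restriction matters: without it, nearest-neighbor search over the full vocabulary would mostly retrieve morphological variants of the query rather than the answer entity.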
Language Models with Rationality
While large language models (LLMs) are proficient at question-answering (QA), it is not always clear how (or even if) an answer follows from their latent "beliefs". This lack of interpretability is a growing impediment to widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. Our approach, which we call REFLEX, is to add a rational, self-reflecting layer on top of the LLM. First, given a question, we construct a belief graph using a backward-chaining process to materialize relevant model beliefs (including beliefs about answer candidates) and their inferential relationships. Second, we identify and minimize contradictions in that graph using a formal constraint reasoner. We find that REFLEX significantly improves consistency (by 8%-11% absolute) without harming overall answer accuracy, resulting in answers supported by faithful chains of reasoning drawn from a more consistent belief system. This suggests a new style of system architecture in which an LLM extended with a rational layer can provide an interpretable window into system beliefs, add a systematic reasoning capability, and repair latent inconsistencies present in the LLM.
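The two-step recipe above (materialize a belief graph, then repair contradictions with a constraint reasoner) can be illustrated with a deliberately tiny sketch. This is NOT the REFLEX implementation: beliefs are a hypothetical mapping from statements to (truth value, confidence), rules are bare implications, and the "reasoner" is a greedy repair that flips whichever belief in a violated implication the model holds less confidently.

```python
def violated(rule, beliefs):
    """An implication (premise -> conclusion) is violated when the premise
    is believed true but the conclusion is believed false."""
    premise, conclusion = rule
    return beliefs[premise][0] and not beliefs[conclusion][0]

def repair(beliefs, rules):
    """Greedily restore consistency: for each violated implication, flip
    the truth value of the less confidently held belief."""
    beliefs = dict(beliefs)
    for rule in rules:
        if violated(rule, beliefs):
            premise, conclusion = rule
            weaker = min((premise, conclusion), key=lambda s: beliefs[s][1])
            value, conf = beliefs[weaker]
            beliefs[weaker] = (not value, conf)
    return beliefs

# A latent contradiction: the model weakly over-generalizes "birds can fly"
# yet confidently (and correctly) believes penguins cannot fly.
beliefs = {
    "birds can fly":    (True, 0.60),
    "penguins can fly": (False, 0.90),
}
rules = [("birds can fly", "penguins can fly")]  # naive over-general rule

repaired = repair(beliefs, rules)
print(repaired["birds can fly"])  # (False, 0.6): the weaker belief is flipped
```

A real constraint reasoner would minimize the total confidence-weighted cost of flips globally (a weighted MaxSAT-style objective) rather than repairing rules one at a time, but the greedy version shows the shape of the computation.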
Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structured knowledge base queries, masked sentences such as “Paris is the capital of [MASK]” are used as probes. We translate the established benchmarks TREx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT’s performance as a knowledge base language-independent, or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT leverage this for better performance? We find that using mBERT as a knowledge base yields varying performance across languages and that pooling predictions across languages improves performance. Conversely, mBERT exhibits a language bias; e.g., when queried in Italian, it tends to predict Italy as the country of origin.
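The cross-language pooling described above can be sketched simply: average each candidate's score over the per-language predictions and take the argmax of the pooled distribution. The scores below are fabricated for illustration, not real mBERT output.

```python
def pool_predictions(per_language_scores):
    """Average candidate scores across languages; return the best candidate
    and the pooled score distribution."""
    pooled = {}
    for scores in per_language_scores.values():
        for cand, p in scores.items():
            pooled[cand] = pooled.get(cand, 0.0) + p
    n = len(per_language_scores)
    pooled = {c: p / n for c, p in pooled.items()}
    return max(pooled, key=pooled.get), pooled

# "Paris is the capital of [MASK]." queried in three languages; the Italian
# query shows the language bias the abstract mentions.
scores = {
    "en": {"France": 0.7, "Italy": 0.2},
    "it": {"France": 0.4, "Italy": 0.5},  # bias toward Italy
    "de": {"France": 0.6, "Italy": 0.3},
}
best, pooled = pool_predictions(scores)
print(best)  # France
```

Averaging washes out single-language biases as long as a majority of languages point to the correct entity, which matches the reported gain from pooling.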
Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structural knowledge base queries, masked sentences such as “Paris is the capital of [MASK]” are used as probes. We translate the established benchmarks TREx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT’s performance as knowledge base language-independent or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT leverage this for better performance? We find that using mBERT as a knowledge base yields varying performance across languages and pooling predictions across languages improves performance. Conversely, mBERT exhibits a language bias; e.g., when queried in Italian, it tends to predict Italy as the country of origin