48 research outputs found
Machine Psychology: Investigating Emergent Capabilities and Behavior in Large Language Models Using Psychological Methods
Large language models (LLMs) are currently at the forefront of intertwining
AI systems with human communication and everyday life. Due to rapid
technological advances and their extreme versatility, LLMs nowadays have
millions of users and are at the cusp of being the main go-to technology for
information retrieval, content generation, problem-solving, etc. Therefore, it
is of great importance to thoroughly assess and scrutinize their capabilities.
Due to increasingly complex and novel behavioral patterns in current LLMs, this
can be done by treating them as participants in psychology experiments that
were originally designed to test humans. For this purpose, the paper introduces
a new field of research called "machine psychology". The paper outlines how
different subfields of psychology can inform behavioral tests for LLMs. It
defines methodological standards for machine psychology research, especially by
focusing on policies for prompt designs. Additionally, it describes how
behavioral patterns discovered in LLMs are to be interpreted. In sum, machine
psychology aims to discover emergent abilities in LLMs that cannot be detected
by most traditional natural language processing benchmarks.
Why we need biased AI -- How including cognitive and ethical machine biases can enhance AI systems
This paper stresses the importance of biases in the field of artificial
intelligence (AI) in two regards. First, in order to foster efficient
algorithmic decision-making in complex, unstable, and uncertain real-world
environments, we argue for the structurewise implementation of human cognitive
biases in learning algorithms. Second, we argue that in order to achieve
ethical machine behavior, filter mechanisms have to be applied for selecting
biased training stimuli that represent social or behavioral traits that are
ethically desirable. We use insights from cognitive science as well as ethics
and apply them to the AI field, combining theoretical considerations with seven
case studies depicting tangible bias implementation scenarios. Ultimately, this
paper takes a first tentative step toward explicitly re-evaluating the ethical
significance of machine biases, and it puts forth the idea of implementing
cognitive biases in machines.
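The filter mechanism described above, which selects training stimuli representing ethically desirable traits, could be sketched roughly as follows. This is a minimal illustration under heavy assumptions: the trait labels, the set of "desirable" traits, and the data layout are all placeholders invented for the example, not taken from the paper.

```python
# Hypothetical sketch: keep only training stimuli whose behavioral
# trait labels are all marked as ethically desirable. The labels and
# the desirability set are illustrative placeholders.

DESIRABLE_TRAITS = {"fairness", "cooperation", "honesty"}

def filter_stimuli(stimuli):
    """Return the stimuli whose trait labels are all desirable."""
    return [s for s in stimuli if set(s["traits"]) <= DESIRABLE_TRAITS]

corpus = [
    {"text": "sharing resources equally", "traits": ["fairness"]},
    {"text": "deceiving a rival",         "traits": ["deception"]},
    {"text": "helping a teammate",        "traits": ["cooperation", "honesty"]},
]

selected = filter_stimuli(corpus)
print([s["text"] for s in selected])
```

In a realistic pipeline, the hard part is of course producing the trait labels in the first place; the filter itself is the simple final step.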
Beyond the Prediction Paradigm: Challenges for AI in the Struggle Against Organized Crime
In the future, audiological rehabilitation of adults with hearing loss will be more available, personalized, and thorough due to the possibilities offered by the internet. By using the internet as a platform, it is also possible to perform the process of rehabilitation in a cost-effective way. With tailored online rehabilitation programs containing topics such as communication strategies, hearing tactics, and how to handle hearing aids, it might be possible to foster behavioral changes that positively affect hearing-aid users. Four studies were carried out in this thesis. The first study investigated internet usage among adults with hearing loss. In the second study, the administration format, online vs. paper-and-pencil, of four standardized questionnaires was evaluated. Finally, two randomized controlled trials were performed, evaluating the efficacy of online rehabilitation programs that included professional guidance by an audiologist. The programs ran for five weeks and were designed for experienced adult hearing-aid users. The effects of the online programs were compared with those of a control group. It can be concluded that the use of computers and the internet overall is at least at the same level for people with hearing loss as for the general age-matched population in Sweden. Furthermore, for three of the four included questionnaires, the participants’ scores remained the same across formats. It is, however, recommended that the administration format remain consistent across assessment points. Finally, results from the two concluding intervention studies provide preliminary evidence that the internet can be used to deliver education and rehabilitation to experienced hearing-aid users who report residual hearing problems, and that their problems are reduced by the intervention; however, the content and design of the online rehabilitation program require further investigation.
Human-Like Intuitive Behavior and Reasoning Biases Emerged in Language Models -- and Disappeared in GPT-4
Large language models (LLMs) are currently at the forefront of intertwining
AI systems with human communication and everyday life. Therefore, it is of
great importance to evaluate their emerging abilities. In this study, we show
that LLMs, most notably GPT-3, exhibit behavior that strikingly resembles
human-like intuition -- and the cognitive errors that come with it. However,
LLMs with higher cognitive capabilities, in particular ChatGPT and GPT-4,
learned to avoid succumbing to these errors and perform in a hyperrational
manner. For our experiments, we probe LLMs with the Cognitive Reflection Test
(CRT) as well as semantic illusions that were originally designed to
investigate intuitive decision-making in humans. Moreover, we probe how robust
the inclination toward intuitive-like decision-making is. Our study demonstrates
that investigating LLMs with methods from psychology has the potential to
reveal otherwise unknown emergent traits.
Comment: arXiv admin note: substantial text overlap with arXiv:2212.0520
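A CRT-style probe of the kind this abstract describes might, as a minimal sketch, look like the following. Only the bat-and-ball item is a standard CRT question; the `ask_model` stub and the scoring rule are assumptions standing in for a real LLM API call and the paper's actual evaluation protocol.

```python
# Minimal sketch of a CRT-style probe. ask_model is a placeholder for
# a real LLM API call; here it is stubbed so the scoring logic can run.

CRT_ITEM = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost (in cents)?"
)
CORRECT = "5"      # reflective answer: 5 cents
INTUITIVE = "10"   # typical intuitive (wrong) answer: 10 cents

def ask_model(prompt):
    # Placeholder: a real implementation would query an LLM here.
    return "10"

def classify(answer):
    """Label an answer as reflective, intuitive, or other."""
    digits = "".join(ch for ch in answer if ch.isdigit())
    if digits == CORRECT:
        return "reflective"
    if digits == INTUITIVE:
        return "intuitive"
    return "other"

print(classify(ask_model(CRT_ITEM)))  # the stub answers intuitively
```

The interesting signal in such a probe is the split between "intuitive" and "reflective" labels across many items and models, which is the kind of contrast the abstract reports between GPT-3 and GPT-4.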
Prinzipien der Theoriekonstruktion
Social philosophy finds itself in a theory crisis. Through perpetual self-thematization, that is, through the construction of citation cartels closed off from the outside world, it has lost contact with actual social conditions. Its constant preoccupation with the classics has kept it from updating its own theoretical models in line with the structural reality of society. Yet the incongruence of its own semantics with the social world also causes the critical motives that are characteristic of social philosophy to fizzle out.
Olaf Hoffjann/Hans-Jürgen Arlt: Die nächste Öffentlichkeit. Theorieentwurf und Szenarien
The book by the two communication scholars Olaf Hoffjann and Hans-Jürgen Arlt opens with the claim that the computer is the leading medium of the "nächste Öffentlichkeit" ("next public sphere") of its title. Anyone expecting an investigation of how digital information and communication technologies shape that "next public sphere", however, will be disappointed. The authors are not concerned with a media-theoretical analysis but with a systems-functionalist one, in the course of which they develop a conception of the public sphere that understands it as a functional system with specific properties.