16 research outputs found

    Loneliness makes the heart grow fonder (of robots). On the effects of loneliness on psychological anthropomorphism

    No full text
    Eyssel FA, Reich N. Loneliness makes the heart grow fonder (of robots). On the effects of loneliness on psychological anthropomorphism. In: Kuzuoka H, ed. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013). Piscataway, NJ: IEEE Press; 2013: 121-122

    Theory of Robot Communication: II. Befriending a Robot over Time

    Full text link
    Building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), the current paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of the interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced with a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which basically follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, treating the robot as an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what is actually a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full-blown theory of robot communication connected to friendship formation, accounting for communicative features, modes of processing, and psychophysiology.
    Comment: Hoorn, J. F. (2018). Theory of robot communication: II. Befriending a robot over time. arXiv:cs, 2502572(v1), 1-2

    Theory of Robot Communication: I. The Medium is the Communication Partner

    Full text link
    When people use electronic media for their communication, Computer-Mediated Communication (CMC) theories describe the social and communicative aspects of people’s interpersonal transactions. When people interact via a remote-controlled robot, many of the CMC theses hold. Yet, what if people communicate with a conversation robot that is (partly) autonomous? Do the same theories apply? This paper confronts CMC theories with observations and research data gained from human-robot communication. As a result, I argue for an addition to CMC theorizing when the robot as a medium itself becomes the communication partner. In view of the rise of social robots in coming years, I define the theoretical precepts of a possible next step in CMC, which I elaborate in a second paper.
    Comment: Hoorn, J. F. (2018). Theory of robot communication: I. The medium is the communication partner. arXiv:cs, 2502565(v1), 1-2

    Finding “H” in HRI: Examining Human Personality Traits, Robotic Anthropomorphism, and Robot Likeability in Human-Robot Interaction

    Get PDF
    The study examines the relationship between the Big Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism, and openness), robot likeability, and successful human-robot interaction (HRI) implementation in varying HRI situations. Further, this research investigates the influence of human-like attributes in robots (a.k.a. robotic anthropomorphism) on the likeability of robots. The research found that robotic anthropomorphism positively influences the relationship between human personality variables (e.g., extraversion and agreeableness) and robot likeability in human interaction with social robots. Further, anthropomorphism positively influences the relationship between extraversion and robot likeability during human interactions with industrial robots. Extraversion, agreeableness, and neuroticism were found to play a significant role. This research bridges the gap by providing an in-depth understanding of the Big Five human personality traits, robotic anthropomorphism, and robot likeability in social-collaborative robotics.
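
The moderation pattern reported in this abstract, in which robotic anthropomorphism strengthens the link between a personality trait and robot likeability, is commonly tested as an interaction term in a regression model. The following minimal sketch uses simulated data; the variable names, coefficients, and sample size are illustrative assumptions, not the study's actual data or analysis.

# Hypothetical illustration of a moderation (interaction) test, not the study's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
extraversion = rng.normal(0.0, 1.0, n)        # standardized personality trait (simulated)
anthropomorphism = rng.normal(0.0, 1.0, n)    # perceived human-likeness of the robot (simulated)
# Likeability rises with extraversion, and more steeply when anthropomorphism is high;
# the product term carries the assumed moderation effect.
likeability = (0.30 * extraversion
               + 0.20 * anthropomorphism
               + 0.25 * extraversion * anthropomorphism
               + rng.normal(0.0, 1.0, n))

df = pd.DataFrame({"extraversion": extraversion,
                   "anthropomorphism": anthropomorphism,
                   "likeability": likeability})

# A reliable extraversion:anthropomorphism coefficient would indicate that
# anthropomorphism moderates the extraversion-likeability relationship.
model = smf.ols("likeability ~ extraversion * anthropomorphism", data=df).fit()
print(model.summary())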

    Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism

    Get PDF
    Advanced information technologies (ITs) are increasingly assuming tasks that have previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), or the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? While TA can have positive effects, such as increasing user trust in technology, what are the negative consequences of TA? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor anthropomorphism theory in social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technological features and affordances are likely (1) to satisfy users’ psychological needs and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users’ anchoring on their elicited agent knowledge and psychological needs, and can also weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment processes increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.

    Form, Function and Etiquette – Potential Users’ Perspectives on Social Domestic Robots

    Get PDF
    Social Domestic Robots (SDRs) will soon be launched en masse into commercial markets. Previously, social robots only inhabited scientific labs; now there is an opportunity to conduct experiments to investigate human-robot relationships (including user expectations of social interaction) within more naturalistic, domestic spaces, as well as to test models of technology acceptance. To this end we exposed 20 participants to advertisements prepared by three robotics companies, explaining and “pitching” their SDRs’ functionality (namely, Pepper by SoftBank; Jibo by Jibo, Inc.; and Buddy by Blue Frog Robotics). Participants were interviewed and the data were thematically analyzed to critically examine their initial reactions, concerns and impressions of the three SDRs. Using this approach, we aim to complement existing survey results pertaining to SDRs, and to try to understand the reasoning people use when evaluating SDRs based on what is publicly available to them, namely, advertising. Herein, we unpack issues raised concerning form/function, security/privacy, and the perceived emotional impact of owning an SDR. We discuss implications for the adequate design of socially engaged robotics for domestic applications, and provide four practical steps that could improve the relationships between people and SDRs. An additional contribution is made by expanding existing models of technology acceptance in domestic settings with a new factor of privacy.

    Conversational AI Agents: Investigating AI-Specific Characteristics that Induce Anthropomorphism and Trust in Human-AI Interaction

    Get PDF
    The investment in AI agents has steadily increased over the past few years, yet the adoption of these agents has been uneven. Industry reports show that the majority of people do not trust AI agents with important tasks. While existing IS theories explain users’ trust in IT artifacts, several new studies have raised doubts about the applicability of current theories in the context of AI agents. At first glance, an AI agent might seem like any other technological artifact. However, a more in-depth assessment exposes some fundamental characteristics that make AI agents different from previous IT artifacts. The aim of this dissertation, therefore, is to identify the AI-specific characteristics and behaviors that hinder and contribute to trust and distrust, thereby shaping users’ behavior in human-AI interaction. Using a custom-developed conversational AI agent, this dissertation extends the human-AI literature by introducing and empirically testing six new constructs, namely, AI indeterminacy, task fulfillment indeterminacy, verbal indeterminacy, AI inheritability, AI trainability, and AI freewill.

    Almost human, but not really

    Get PDF
    Technologies are becoming increasingly present in people’s daily lives and oftentimes adopt the role of social counterparts. People have conversations with their smart voice assistants, and social robots assist with the household or even look after their users’ mental and physical health. Thus, the human-technology relationship often resembles interpersonal relationships in several ways. While research has implied that the human-technology relationship can adopt a social character, it remains to be clarified in what ways, and with regard to which variables, the human-technology relationship and interpersonal relationships are comparable. Moreover, the question arises to what extent interaction with technology can address users’ social needs in a way similar to a human counterpart and therefore possibly even affect interpersonal interaction. Here, the role of technology anthropomorphism, that is, the attribution of humanlike qualities to non-human agents or objects, needs to be specified. This thesis is dedicated to the relevance of the human-technology relationship for interpersonal relationships, with a focus on social needs. Within this overarching research aim, the studies included in this thesis focus on the dynamics of the human-technology relationship and their comparability to interpersonal relationships (RQ1), the potential of human-technology interaction to address users’ social needs or substitute their fulfillment through interpersonal interaction (RQ2), as well as the role of technology anthropomorphism regarding these relationships (RQ3). First, focusing on trust, which is integral to the relationship with a technology that is experienced as a counterpart, two consecutive experimental studies (study 1.1/1.2) were conducted. Based on a human-robot interaction, they explored trust development in the human-technology relationship as well as the extent to which determinants known to affect interpersonal trust development are transferable. Moreover, they focused on the role of technology anthropomorphism in this relationship. A positive effect on trust in the technology emerged for technology competence, that is, its ability to achieve intended goals (study 1.1), as well as for technology warmth, that is, its adherence to the same intentions and interests as the trustor (study 1.2). Thus, relevant determinants of trust development in the human-technology relationship were highlighted, also implying a transferability of essential dynamics of trust development from interpersonal relationships. Furthermore, perceived technology anthropomorphism appeared to affect the positive interrelation of perceived technology competence and trust in the technology (study 1.1) as well as the interrelation of perceived technology warmth and trust in the technology (study 1.2). These insights support the relevance of perceived technology anthropomorphism for trust dynamics within the human-technology relationship, but also for the transferability of corresponding dynamics from interpersonal relationships. Similarly, another study (study 2) explored the transferability of dynamics for the variable of social connectedness, which is also key for relationship development and potentially relevant for the effect of interaction with technology on users’ social needs. To this end, a two-week human-technology interaction with a conversational chatbot was investigated. 
The study focused on potentially relevant characteristics of the technology, such as its perception as anthropomorphic or socially present, and of the user, for example, the individual tendency to anthropomorphize or the individual need to belong. Moreover, a possible effect of social connectedness to the technology on the desire to socialize with other humans was explored. As findings showed that the duration and intensity of participants' interaction with the technology throughout the two-week study period positively predicted felt social connectedness to the technology, similarities to the dynamics of interpersonal relationship development were highlighted. Furthermore, the relevance of technology anthropomorphism in the development of a human-technology relationship, as well as its comparability to the dynamics of interpersonal relationships, was underlined. Namely, the more intensely individuals interacted with the technology, the more anthropomorphic they perceived it to be, and the more socially connected to it they consequently felt. Similarly, the longer and more intensely individuals interacted with the technology, the more socially present they perceived it to be, and in turn the more socially connected to it they felt (a simplified sketch of such a mediation-style chain follows this abstract). While, contrary to expectations, no interrelation between felt social connectedness to the technology and the desire to socialize with other humans emerged, this relationship was explored further in studies 3.1, 3.2, and 4. Two consecutive experimental studies (study 3.1/3.2) explored the potential of anthropomorphic technologies to fulfill social needs, as well as how individually perceived anthropomorphism correlates with these needs. While in both studies social exclusion and technology anthropomorphism were manipulated, we applied a different manipulation of anthropomorphism in each study. Whereas in one study (study 3.1) participants answered anthropomorphic (vs. non-anthropomorphic) questions regarding their own smartphone, in the other study (study 3.2) they were confronted with smartphone designs featuring anthropomorphic (vs. non-anthropomorphic) design cues. In both studies, no effects of anthropomorphism and social exclusion on behavioral intention or willingness to socialize were found. Yet, study 3.1 showed a positive correlation between willingness to socialize and perceived technology anthropomorphism. Results of study 3.2 further supported this relationship and additionally showed that it was particularly strong for individuals with a high tendency to anthropomorphize when the technology featured anthropomorphic design cues in its appearance. Thus, the findings imply a relationship between social needs and anthropomorphism and further hint at the relevance of individual and contextual strengthening factors. To complement these findings and foster a deeper understanding of the human-technology relationship as well as its potential to address users’ social needs, a qualitative interview study was conducted (study 4). Its findings highlight the potential of anthropomorphic technologies to address users’ social needs in certain ways, but also underline essential differences between the quality of human-technology interaction and that of interpersonal interaction. Examples are the technology’s missing reactions to the user on a content, physical, and emotional level, as well as the absence of satisfaction of users’ social needs through interaction with technology. 
Additionally, the insights hint at a social desirability bias, as interaction with technology that resembles interpersonal interaction appears to often be met with rather negative reactions from third parties. After an overview and brief summaries of the empirical studies included in this thesis, their research contribution is discussed. This is followed by an elaboration of the overall theoretical and practical implications of this thesis. The theoretical implications focus on how this work contributes to, but also extends, theoretical and empirical work within the frame of the “computers are social actors” paradigm, and they particularly highlight the role of technology anthropomorphism as a phenomenon in this regard. Beyond the exploration of a social character of the human-technology relationship, this thesis offers insights into the potential of the human-technology relationship to address users’ social needs to such an extent that interpersonal relationships can be affected. Implications for practitioners involve insights into design examples that support the development of essential determinants of the human-technology relationship. They also offer a more abstract invitation to reflect on the design and application contexts of technologies in order to foster a responsible handling of technology in people’s daily lives. Finally, the thesis concludes with a discussion of general limitations and directions for future research.
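
The chain summarized for study 2 above, in which interaction intensity predicts perceived anthropomorphism (or social presence), which in turn predicts social connectedness, is a mediation-style model. The following minimal sketch uses simulated data; the variable names, coefficients, and sample size are illustrative assumptions, not the dissertation's actual data or analysis.

# Hypothetical illustration of a simple mediation-style analysis, not the dissertation's own.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
intensity = rng.normal(0.0, 1.0, n)                           # interaction intensity (simulated)
anthropomorphism = 0.4 * intensity + rng.normal(0.0, 1.0, n)  # perceived human-likeness (simulated)
connectedness = 0.3 * anthropomorphism + 0.1 * intensity + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"intensity": intensity,
                   "anthropomorphism": anthropomorphism,
                   "connectedness": connectedness})

# Baron-Kenny style steps: total effect, path to the mediator, and the direct
# effect of intensity once the mediator is controlled for.
total_effect = smf.ols("connectedness ~ intensity", data=df).fit()
mediator_path = smf.ols("anthropomorphism ~ intensity", data=df).fit()
direct_effect = smf.ols("connectedness ~ intensity + anthropomorphism", data=df).fit()

for name, fit in [("total", total_effect), ("a-path", mediator_path), ("direct", direct_effect)]:
    print(name, fit.params.to_dict())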

    Neuroscience, Artificial Intelligence, and the Case Against Solitary Confinement

    Get PDF
    Prolonged solitary confinement remains in widespread use in the United States despite many legal challenges. A difficulty when making the legal case against solitary confinement is proffering sufficiently systematic and precise evidence of the detrimental effects of the practice on inmates' mental health. Given this need for further evidence, this Article explores how neuroscience and artificial intelligence (AI) might provide new evidence of the effects of solitary confinement on the human brain. This Article argues that both neuroscience and AI are promising in their potential ability to present courts with new types of evidence on the effects of solitary confinement on inmates' brain circuitry. But at present, neither field has collected the type of evidence that is likely to tip the scales against solitary confinement and end the practice. This Article concludes that ending the entrenched practice of solitary confinement will likely require both traditional and novel forms of evidence. In exploring the potential effects of neuroscientific evidence on support for solitary confinement, the Article reports results from an original online experiment with a group of 250 ideologically conservative participants. The analysis finds that the introduction of brain-injury evidence reduced conservatives' support for solitary confinement, but not to the extent that is likely to make a policy impact. The Article argues that future, more individualized brain evidence may be of greater use, but at present neuroscience is limited in its ability to systematically measure the brain changes that inmates experience in solitary confinement. This Article then turns to AI and argues that it could be developed to provide litigators and inmates with the ability to more effectively document the detrimental effects of solitary confinement. Looking to the future, the Article lays out a vision for an AI system called Helios, named after the Homeric sun god believed to see and hear everything. The Article envisions Helios as a self-learning AI system with a mission to help inmates and their attorneys gather more systematic evidence of the effects of solitary confinement on inmate health. Helios is also a platform on which additional inmate services might one day be provided. The Article describes how Helios must be carefully designed, with particular attention given to privacy concerns. This Article is organized in seven parts. Part I describes the historical and contemporary use of solitary confinement in the United States, highlighting the known effects of solitary confinement on inmates. Part II summarizes recent constitutional challenges to the practice of solitary confinement. Part III explores the potential for integrating neuroscientific evidence into these legal challenges to solitary confinement. Part IV discusses a new online experiment to explore whether neuroscience might change public opinion on solitary confinement. In Part V, the Article transitions to a consideration of AI. The Article proposes a self-learning system, Helios, and describes how the system would operate. Part VI turns to a series of challenging ethical and legal questions about the design and implementation of Helios. Part VII briefly concludes.