Human-Centered Design for Individual and Social Well-being: Editorial Preface
As digital technology use becomes widespread, its unintended consequences, ranging from personal health to societal righteousness, are under more scrutiny. Increasingly, digital designers are accused of giving insufficient consideration to the depth of their creations and their impacts on our well-being. In this special issue, we explore an alternative, genuinely human-centered approach to technology design that focuses on well-being and makes our interactions with digital technology more meaningful, purposeful, and sustainable. To this end, the editorial starts with a brief review of the history of research that led to the growing field of digital well-being. We then introduce the Digital Well-being Design Framework, which goes beyond the ego-centric approach in human-centered design and is multi-layered, with self (intrapersonal), social (interpersonal), and transcendent (extra-personal) levels. Similar topics in related AIS journals are summarized, followed by the application of our framework to introduce and position the papers in this special issue. Our special issue aims to bring the topic of digital well-being to the forefront of the information systems research community and launch a new era of genuinely human-centered design.
Animal-centered design needs dignity: a critical essay on ACI's core concept
Despite widespread acceptance of 'animal-centered design' as being at the very heart of Animal-Computer Interaction, exactly what it means to be animal-centered often remains vague. In this position paper, I question and critique what animal-centered design really means as the term is used. I argue that even though the ACI manifesto and subsequent foundational works clearly set out a focus on animal user-centered design, much work since has adopted 'animal-centered' as a synonym for 'animal user-centered'. However, I argue that the fundamental essence of ACI's intellectual origins, namely human-centered design's preoccupation with human values and, in turn, human dignity (which set it apart from mere user-centered design), is lost in such a straightforward adoption of the term. I then analyze what it might mean to actually adopt a value-driven approach akin to human-centered design for animal-centered design, and how this might force us to move beyond the welfarist position dominant across most of ACI. Rather than considering the prevention of unnecessary suffering as a central goal of technologies developed by ACI researchers, I argue that preserving animal dignity as a core value is a more appropriate understanding of the term 'animal-centered'.
Human Factors Engineering Requirements for the International Space Station - Successes and Challenges
Advanced technology coupled with the desire to explore space has resulted in increasingly longer human space missions. Indeed, any exploration mission outside of Earth's neighborhood, in other words, beyond the Moon, will necessarily last several months or even years. The International Space Station (ISS) serves as an important advancement toward executing a successful human space mission that is longer than a standard trip around the world or to the Moon. The ISS, a permanently occupied microgravity research facility orbiting the Earth, will support missions four to six months in duration. In planning for the ISS, NASA developed an agency-wide set of human factors standards for the first time in a space exploration program. The Man-Systems Integration Standard (MSIS), NASA-STD-3000, a multi-volume set of guidelines for human-centered design in microgravity, was developed with the cooperation of human factors experts from various NASA centers, industry, academia, and other government agencies. The ISS program formed a human factors team analogous to that of any major engineering subsystem. This team develops and maintains the human factors requirements regarding end-to-end architecture design and performance, hardware and software design requirements, and test and verification requirements. It is also responsible for providing program integration across all of the larger-scale elements, smaller-scale hardware, and international partners.
Evaluating Human-Language Model Interaction
Many real-world applications of language models (LMs), such as writing assistance and code autocomplete, involve human-LM interaction. However, most benchmarks are non-interactive in that a model produces output without human involvement. To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics. Compared to standard, non-interactive evaluation, HALIE captures (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality (e.g., enjoyment and ownership). We then design five tasks to cover different forms of interaction: social dialogue, question answering, crossword puzzles, summarization, and metaphor generation. With four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21 Labs' Jurassic-1), we find that better non-interactive performance does not always translate to better human-LM interaction. In particular, we highlight three cases where the results from non-interactive and interactive metrics diverge and underscore the importance of human-LM interaction for LM evaluation. Comment: Authored by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Life is a Lab: Developing a Communication Research Lab for Undergraduate and Graduate Education
In this article, we present tips for creating a thriving undergraduate and graduate communication research lab. The tips offered center on classroom discourse, curriculum choices, and potential assignments. Based on our experiences developing and co-directing the Communication and Social Robotics Labs (CSRLs), we offer 10 best practices for acquiring resources and recognition, building a strong lab community, and attaining faculty and student goals for scholarship and beyond. Our overarching approach is framed by Dewey's (1916) pragmatist educational metaphysic, which stresses student- and subject-centered learning, enlarging experiences, and the co-construction of meaning and knowledge. Although our labs are focused on human-machine communication (HMC), the strategies we present can be applied to any number of research contexts for both undergraduate and graduate education.
Interfaces of the Agriculture 4.0
The introduction of information technologies in the environmental field is impacting and changing even a traditional sector like agriculture. Nevertheless, Agriculture 4.0 and data-driven decisions should meet user needs and expectations. The paper presents a broad theoretical overview, discussing both the strategic role of design applied to Agri-tech and the issue of User Interface and Interaction as enabling tools in the field. In particular, the paper suggests rethinking the HCD approach, moving toward a Human-Decentered Design approach that brings together user, technology, and environment, and highlights the role of calm technologies as a way to place the farmer not as a final target and passive spectator, but as an active part of the process, supporting mitigation and appropriation in the transition from traditional cultivation methods to Agriculture 4.0.
Artificial Intelligence and Patient-Centered Decision-Making
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim that black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.