
    Digital Supply Chains and the Human Factor—A Structured Synopsis

    Digital developments and changes in the production, supply chain and logistics sector, as well as specific concepts such as automation, Industry 4.0 and the Internet of Things, are omnipresent. The human role in such settings, in particular, is undergoing important changes that have not yet been adequately addressed in research. This introductory chapter provides an overview of the elements encountered in digitalization processes that help ensure sustainable work environments and efficient Human-Computer Interaction settings for the benefit of workers and organizations. The aim of this chapter is thus to provide a structured synopsis for considering the human factor when analyzing digital work processes. This synopsis is aligned with typical workflow developments in digitalization projects and can be transferred to different work settings in supply chains. Finally, we outline the chapter structure of this book across four thematic sections in order to provide a joint storyline on investigating the human factor in digital supply chains.

    Interacting with Presence. HCI and the Sense of Presence in Computer-mediated Environments

    The experience of using and interacting with the newest Virtual Reality and computing technologies is profoundly affected by the extent to which we feel ourselves to be really ‘present’ in computer-generated and -mediated augmented worlds. This feeling of ‘Presence’, of “being inside the mediated world”, is key to understanding developments in applications such as interactive entertainment, gaming, psychotherapy, education, scientific visualisation, sports training and rehabilitation, and many more. This edited volume, featuring contributions from internationally renowned scholars, provides a comprehensive introduction to and overview of the topic of mediated presence - or ‘tele-presence’ - and of the emerging field of presence research. It is intended for researchers and graduate students in human-computer interaction, cognitive science, psychology, cyberpsychology and computer science, as well as for experienced professionals from the ICT industry. The editors are all well-known professional researchers in the field: Professor Giuseppe Riva from the Catholic University of Milan, Italy; Professor John Waterworth from Umeå University, Sweden; and Dianne Murray, an HCI Consultant and editor of the journal “Interacting with Computers”.

    Economics models of interaction: a tutorial on modeling interaction using economics

    This chapter provides a tutorial on how economics can be used to model the interaction between users and systems. Economic theory provides an intuitive and natural way to model Human-Computer Interaction which enables the prediction and explanation of user behaviour. A central tenet of the approach is the utility maximisation paradigm, where it is assumed that users seek to maximise their profit/benefit subject to budget and other constraints when interacting with a system. By using such models it is possible to reason about user behaviour and make predictions about how changes to the interface or the user's interactions will affect performance and behaviour. In this chapter, we describe and develop several economic models relating to how users search for information. While the examples are specific to Information Seeking and Retrieval, the techniques employed can be applied more generally to other human-computer interaction scenarios. Therefore, the goal of this chapter is to provide an introduction and overview of how to build economic models of human-computer interaction that generate testable hypotheses regarding user behaviour, which can be used to guide design and inform experimentation.
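
    To make the utility-maximisation idea concrete, the sketch below sets up a toy search-economics model: a hypothetical Cobb-Douglas gain function over the number of queries and the number of documents assessed per query, a linear interaction cost, and a grid search for the strategy that maximises net benefit. The gain and cost functions, parameter values and search ranges are illustrative assumptions, not the models actually developed in the chapter.

        # Toy economic model of search interaction (illustrative sketch only).
        # Assumes a Cobb-Douglas gain g(q, a) = k * q**alpha * a**beta and a
        # linear cost c(q, a) = q * c_q + q * a * c_a; both are hypothetical
        # stand-ins for the models developed in the chapter.

        def gain(q, a, k=10.0, alpha=0.4, beta=0.3):
            """Benefit from issuing q queries and assessing a documents per query."""
            return k * (q ** alpha) * (a ** beta)

        def cost(q, a, c_q=2.0, c_a=0.5):
            """Effort (e.g. time) spent formulating queries and assessing documents."""
            return q * c_q + q * a * c_a

        def optimal_strategy(max_q=20, max_a=50):
            """Grid-search the (queries, assessments) pair that maximises net utility."""
            candidates = ((q, a) for q in range(1, max_q + 1) for a in range(1, max_a + 1))
            best = max(candidates, key=lambda qa: gain(*qa) - cost(*qa))
            return best, gain(*best) - cost(*best)

        if __name__ == "__main__":
            (q, a), utility = optimal_strategy()
            print(f"Predicted behaviour: {q} queries, {a} assessments per query "
                  f"(net utility {utility:.2f})")

    Changing a cost parameter (for instance, lowering c_q to make querying cheaper) shifts the predicted optimum, which is the kind of testable, directional hypothesis such models are intended to generate.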

    Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it sometimes is striking to see that some of these have been inspired by things that can be observed in our environment, yet at the same do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds, yet the wings of an airplane do not make those typical movements a bird would produce to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, an airplane has made it possible for a humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on this practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all have the equivalent underlying question of how parts of human behavior can be captured, such that computers can use it to become more human-like. Each study differs in method, perspective and specific questions, but they are all aimed to gain insights and directions that would help further push the computer developments of human-like behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies

    Software psychology and the computerisation of the weighted application blank : a thesis presented in partial fulfillment of the requirements for the degree of Master of Arts in Psychology at Massey University

    This study investigated the use of a Weighted Application Blank (WAB) for selecting candidates likely to pass the first year of a comprehensive nursing course. A subject pool of 415 comprehensive nursing course applicants was drawn from the 1980 to 1985 first-year Polytechnic classes. A discriminant analysis was performed on the application form responses made by these subjects. Computer software incorporating results from Human Factors research was then developed. The software aimed to computerise the WAB method of classifying applicants following principles of software psychology. A group of 50 computer-naive subjects participated in an experimental evaluation of the software. Five subjects took part in initial pilot trials of the software. The remaining 45 subjects were divided into three equally sized groups. The subjects' task was to enter eight sets of nursing course application form data. The "computerised" group received instructions on how to do this from the screen, the "written" group from a manual and the "verbal" group verbally from the experimenter. Time taken to complete the task and the number of errors made were recorded. Three ANOVAs were performed to establish whether group exerted an influence on trial times or error rates. In addition, subjects were required to complete two questionnaires, the first prior to the experimental trials and the second following them. Results indicated that group influenced time taken on the task (F(1,294) = 7.43, p<.001). Group did not exert an influence on errors made on each question (F(32,672) = 1.022, p>.05). The interaction between errors made on each application form and group was significant (F(14,294) = 2.809, p<.05). Responses to the questionnaires were evaluated and assessed. It was concluded that the fields of human-computer interface design and personnel selection had been successfully combined, leading to the expectation that an area of great research potential had been opened up.
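
    As an illustration of the general approach, the sketch below shows how a discriminant analysis over coded application-form responses could be used to weight items and classify applicants as likely passes or failures, in the spirit of a Weighted Application Blank. The form items, coding scheme and synthetic data are hypothetical placeholders; the thesis's actual items, weights and software are not reproduced here.

        # Illustrative Weighted Application Blank via discriminant analysis.
        # The coded form items and pass/fail outcomes below are synthetic
        # placeholders, not the thesis data.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)

        # Hypothetical coded application-form items for 415 applicants:
        # age band, prior qualifications (0-3), years of relevant experience,
        # and a rated interest statement.
        n = 415
        X = np.column_stack([
            rng.integers(1, 6, n),
            rng.integers(0, 4, n),
            rng.poisson(2.0, n),
            rng.normal(5.0, 1.5, n),
        ])
        # Synthetic first-year outcome, loosely tied to qualifications and experience.
        logits = 0.8 * X[:, 1] + 0.4 * X[:, 2] - 1.5
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

        wab = LinearDiscriminantAnalysis().fit(X, y)
        print("Item weights (discriminant coefficients):", wab.coef_.round(2))

        # Classify a new applicant's coded responses.
        applicant = np.array([[3, 2, 1, 6.0]])
        print("Predicted outcome (1 = likely pass):", wab.predict(applicant)[0])
        print("Estimated pass probability:", wab.predict_proba(applicant)[0, 1].round(2))

    A computerised WAB of this kind would wrap such a classifier in the data-entry interface evaluated in the experiment, so that form responses keyed by an operator are scored immediately.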

    Reviews

    Integrating Information Technology into Education, edited by Deryn Watson and David Tinsley, London: Chapman & Hall, 1995, ISBN 0-412-62250-5, 316 pages.
