165 research outputs found

    The effects of user assistance systems on user perception and behavior

    The rapid development of information technology (IT) is changing how people approach and interact with IT systems (Maedche et al. 2016). IT systems can increasingly support people in performing ever more complex tasks (Vtyurina and Fourney 2018). However, people's cognitive abilities have not evolved as quickly as technology (Maedche et al. 2016). Thus, different external factors (e.g., complexity or uncertainty) and internal conditions (e.g., cognitive load or stress) reduce decision quality (Acciarini et al. 2021; Caputo 2013; Hilbert 2012). User-assistance systems (UASs) can help to compensate for human weaknesses and cope with these new challenges. UASs aim to improve the user's cognition and capabilities, benefiting individuals, organizations, and society. To achieve this goal, UASs collect, prepare, aggregate, and analyze information, and communicate the results according to user preferences (Maedche et al. 2019). This support can relieve users and improve the quality of decision-making. Using UASs offers many benefits but requires successful interaction between the user and the UAS. However, this interaction introduces social and technical challenges, such as loss of control or reduced explainability, which can affect users' trust and willingness to use the UAS (Maedche et al. 2019). To realize the benefits, UASs must be developed based on an understanding and incorporation of users' needs. Users and UASs are part of a socio-technical system that completes a specific task (Maedche et al. 2019). To create a benefit from the interaction, it is necessary to understand the interaction within the socio-technical system, i.e., the interaction between the user, the UAS, and the task, and to align these different components. For this reason, this dissertation aims to extend the existing knowledge on UAS design by better understanding the effects and mechanisms at play during the interaction between UASs and users in different application contexts. 
Therefore, theory and findings from different disciplines are combined, and new theoretical knowledge is derived. In addition, data is collected and analyzed to validate the new theoretical knowledge empirically. The findings can be used to reduce adaptation barriers and realize a positive outcome. Overall, this dissertation addresses the four classes of UASs presented by Maedche et al. (2016): basic UASs, interactive UASs, intelligent UASs, and anticipating UASs. First, this dissertation contributes to understanding how users interact with basic UASs. Basic UASs do not process contextual information and interact little with the user (Maedche et al. 2016). This behavior makes basic UASs suitable for application contexts, such as social media, where little interaction is desired. Social media is primarily used for entertainment and focuses on content consumption (Moravec et al. 2018). As a result, social media has become an essential source of news but also a target for fake news, with negative consequences for individuals and society (Clarke et al. 2021; Laato et al. 2020). Thus, this thesis presents two approaches to how basic UASs can be used to reduce the negative influence of fake news. Firstly, basic UASs can provide interventions by warning users of questionable content and providing verified information; however, the order in which the intervention elements are displayed influences the perception of fake news. The intervention elements should be displayed after the fake news story to achieve an effective intervention. Secondly, basic UASs can provide social norms to motivate users to report fake news and thereby stop its spread. However, social norms should be used carefully, as they can backfire and reduce the willingness to report fake news. Second, this dissertation contributes to understanding how users interact with interactive UASs. 
Interactive UASs incorporate limited information from the application context but focus on close interaction with the user to achieve a specific goal or behavior (Maedche et al. 2016). Typical goals include more physical activity, a healthier diet, and less tobacco and alcohol consumption to prevent disease and premature death (World Health Organization 2020). To increase goal achievement, previous researchers often utilize digital human representations (DHRs) such as avatars and embodied agents to form a socio-technical relationship between the user and the interactive UAS (Kim and Sundar 2012a; Pfeuffer et al. 2019). However, understanding how the design features of an interactive UAS affect the interaction with the user is crucial, as each design feature has a distinct impact on the user's perception. Based on existing knowledge, this thesis highlights the most widely used design features and analyzes their effects on behavior. The findings reveal important implications for future interactive UAS design. Third, this dissertation contributes to understanding how users interact with intelligent UASs. Intelligent UASs prioritize processing user and contextual information to adapt to the user's needs rather than focusing on an intensive interaction with the user (Maedche et al. 2016). Thus, intelligent UASs with emotional intelligence can provide people with task-oriented and emotional support, making them ideal for situations where interpersonal relationships are neglected, such as crowd working. Crowd workers frequently work independently without any significant interactions with other people (Jäger et al. 2019). In crowd work environments, traditional leader-employee relationships are usually not established, which can have a negative impact on employee motivation and performance (Cavazotte et al. 2012). Thus, this thesis examines the impact of an intelligent UAS with leadership and emotional capabilities on employee performance and enjoyment. 
The leadership capabilities of the intelligent UAS lead to an increase in enjoyment but a decrease in performance. The emotional capabilities of the intelligent UAS reduce the stimulating effect of the leadership characteristics. Fourth, this dissertation contributes to understanding how users interact with anticipating UASs. Anticipating UASs are intelligent and interactive, providing users with task-related and emotional stimuli (Maedche et al. 2016). They also have advanced communication interfaces and can adapt to current situations and predict future events (Knote et al. 2018). Because of these advanced capabilities, anticipating UASs enable collaborative work settings and often use anthropomorphic design cues to make the interaction more intuitive and comfortable (André et al. 2019). However, these anthropomorphic design cues can also raise expectations too high, leading to disappointment and rejection if they are not met (Bartneck et al. 2009; Mori 1970). To create a successful collaborative relationship between anticipating UASs and users, it is important to understand the impact of anthropomorphic design cues on interaction and decision-making processes. This dissertation presents a theoretical model that explains the interaction between anthropomorphic anticipating UASs and users, together with an experimental procedure for its empirical evaluation. The experiment design lays the groundwork for empirically testing the theoretical model in future research. To sum up, this dissertation contributes to information systems knowledge by improving the understanding of the interaction between UASs and users in different application contexts. It develops new theoretical knowledge based on previous research and empirically evaluates user behavior to explain and predict it. In addition, this dissertation generates new knowledge by prototypically developing UASs and provides new insights for the different classes of UASs. 
These insights can be used by researchers and practitioners to design more user-centric UASs and realize their potential benefits.

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, through entertainment robots, to social robotics in fields like healthcare and education. An important aspect of social robotics is the human counterpart, and therefore the interaction between humans and robots. Interactions among humans are often taken for granted since, as children, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for the successful incorporation of robots into society. Human-robot interaction (HRI) is the domain that works on improving these interactions. HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots can take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, a misalignment between the trust projected onto an entity and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen, or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial to them. This thesis sheds light on how trust towards robots develops, how this trust can become overtrust, and how it can be exploited by social engineering techniques. More precisely, the following experiments were carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, built trust and rapport with them, and finally exploited that trust to manipulate participants into performing a risky action. 
(ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong but eventually succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when its human partner was lying. Deception detection is an essential skill for social engineers and for professionals in domains such as education, healthcare, and security. The robot achieved 75% accuracy in lie detection. Slight differences were also found in the behaviour participants exhibited when interacting with a human versus a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and this information may be transmitted via the internet without the user's knowledge. This is an important aspect to consider, since a violation of privacy can heavily impact trust. Summarizing, this thesis shows that robots are able to establish and improve trust during an interaction, and to take advantage of overtrust and misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and the other sectors involved aware that social robots can be highly beneficial for humans but can also be exploited for malicious purposes.

    Designing for behavioural change: reducing the social impacts of product use through design

    This thesis investigates the feasibility of applying design-led approaches to influence user behaviour in order to reduce the negative social impacts of products during use. A review of the literature revealed a distinct lack of design-led research in this area. Three promising approaches from other disciplines, however, were found: eco-feedback, behaviour steering, and intelligence. The majority of product examples identified did not use a single approach but combined two or more. Most of the examples were concepts and focused on the end result; few commented on the research and development processes undertaken to generate the final design. These limitations reinforced the need for case studies detailing these processes. To this end, two design studies were carried out: a preliminary study using a range of products and a further, more in-depth study on the use of mobile phones. The results of these studies led to the development of a framework of attributes for 'behaviour changing' devices. In response to these findings, two design resources were developed: a detailed design project to reduce the social impacts of mobile phone use in public and a short film on texting whilst on the move. Evaluation by design professionals provided analysis of the effectiveness of these resources and wider reflections on designers' perceived responsibilities for use and the ethics of designing for behavioural change. Collectively, the findings indicated that resources for designing for behavioural change should be explorative rather than prescriptive, focus on problem solving, be tailored to the needs of the intended recipient, and ideally be applied in the early 'ideation' stages of the design process. Additionally, the findings indicated that designers' involvement in, and responsibility for, lifecycle impacts must extend beyond the point of purchase. Designers, however, are reportedly often unable to influence product development at a strategic level. 
Prior work, therefore, is needed to engage those at a senior level. Furthermore, the findings strongly indicate that 'behaviour changing' devices must be prototyped and subjected to rigorous consumer testing, not only to establish their effectiveness but also to determine their acceptability.

    Modeling the Consumer Acceptance of Retail Service Robots

    This study uses the Computers Are Social Actors (CASA) and domestication theories as the underlying framework of an acceptance model of retail service robots (RSRs). The model illustrates the relationships among facilitators, attitudes toward human-robot interaction (HRI), anxiety toward robots, anticipated service quality, and the acceptance of RSRs. Specifically, the researcher investigates the extent to which the facilitators of usefulness, social capability, and the appearance of RSRs, together with attitudes toward HRI, affect acceptance and increase the anticipation of service quality. The researcher also tests the inhibiting role of pre-existing anxiety toward robots on the relationship between these facilitators and attitudes toward HRI. The study uses four methodological strategies: (1) incorporating a focus group and personal interviews, (2) using video clip stimuli as the presentation method, (3) empirical data collection and multigroup SEM analyses, and (4) applying three key product categories for the model's generalization: fashion, technology (mobile phone), and food service (restaurant). The researcher conducts two pretests to check the survey items and to select the video clips. The main test uses an online survey of US consumer panelists (n = 1424) at a marketing agency. The results show that usefulness, social capability, and the appearance of an RSR positively influence attitudes toward HRI. Attitudes toward HRI predict greater anticipation of service quality and acceptance of RSRs. The expected quality of service tends to enhance acceptance. The relationship between social capability and attitudes toward HRI is weaker when anxiety toward robots is higher. However, when anxiety is higher, the relationship between appearance and attitudes toward HRI is stronger than when anxiety is low. 
This study contributes to the literature on the CASA and domestication theories and to research on human-computer interaction involving robots or artificial intelligence. By considering the social capability, humanness, intelligence, and appearance of robots, this model of RSR acceptance can provide new insights into the psychological, social, and behavioral principles that guide the commercialization of robots. Further, this acceptance model could help retailers and marketers formulate strategies for effective HRI and RSR adoption in their businesses.
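The moderation findings reported in this abstract (anxiety weakening the social-capability link to attitudes, while strengthening the appearance link) correspond to interaction terms in the structural model. A minimal sketch of the first effect, with all coefficient values invented purely to illustrate the direction of the reported relationship, not taken from the study:

```python
# Hypothetical illustration of the moderation effect described in the abstract:
# anxiety toward robots weakens the link between a robot's social capability
# and attitudes toward HRI. All coefficients below are invented for illustration.

def attitude_toward_hri(social_capability, anxiety,
                        b0=2.0, b_social=0.6, b_anxiety=-0.3,
                        b_interaction=-0.4):
    """Linear model with a social-capability x anxiety interaction term."""
    return (b0
            + b_social * social_capability
            + b_anxiety * anxiety
            + b_interaction * social_capability * anxiety)

def social_capability_slope(anxiety, b_social=0.6, b_interaction=-0.4):
    """Marginal effect of social capability at a given anxiety level."""
    return b_social + b_interaction * anxiety

# The negative interaction coefficient makes the slope smaller (a weaker
# relationship) when anxiety is high:
slope_low_anxiety = social_capability_slope(anxiety=0.2)   # 0.6 - 0.08 = 0.52
slope_high_anxiety = social_capability_slope(anxiety=0.8)  # 0.6 - 0.32 = 0.28
```

In a multigroup SEM analysis such as the one described, the same pattern would surface as a smaller path coefficient from social capability to attitudes in the high-anxiety group than in the low-anxiety group; the reported appearance effect would carry an interaction term of the opposite sign.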

    Relational agents : effecting change through human-computer relationships

    Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003. Includes bibliographical references (p. 205-219). What kinds of social relationships can people have with computers? Are there activities that computers can engage in that actively draw people into relationships with them? What are the potential benefits to the people who participate in these human-computer relationships? To address these questions, this work introduces a theory of Relational Agents, which are computational artifacts designed to build and maintain long-term, social-emotional relationships with their users. These can be purely software humanoid animated agents--as developed in this work--but they can also be non-humanoid or embodied in various physical forms, from robots, to pets, to jewelry, clothing, hand-helds, and other interactive devices. Central to the notion of relationship is that it is a persistent construct spanning multiple interactions; thus, Relational Agents are explicitly designed to remember past history and manage future expectations in their interactions with users. Finally, relationships are fundamentally social and emotional, and detailed knowledge of human social psychology--with a particular emphasis on the role of affect--must be incorporated into these agents if they are to effectively leverage the mechanisms of human social cognition in order to build relationships in the most natural manner possible. People build relationships primarily through the use of language, and primarily within the context of face-to-face conversation. Embodied Conversational Agents--anthropomorphic computer characters that emulate the experience of face-to-face conversation--thus provide the substrate for this work, and so the relational activities provided by the theory will primarily be specific types of verbal and nonverbal conversational behaviors used by people to negotiate and maintain relationships. 
This work also provides an analysis of the types of applications in which having a human-computer relationship is advantageous to the human participant. In addition to applications in which the relationship is an end in itself (e.g., in entertainment systems), human-computer relationships are important in tasks in which the human is attempting to undergo some change in behavior or cognitive or emotional state. One such application is explored here: a system for assisting the user through a month-long health behavior change program in the area of exercise adoption. This application involves the research, design, and implementation of relational agents as well as empirical evaluation of their ability to build relationships and effect change over a series of interactions with users. By Timothy Wallace Bickmore, Ph.D.