24 research outputs found

    Supporting remote therapeutic interventions with voice assistant technology

    Get PDF
    Nowadays, digital personal assistants are incorporated into many devices. Smart TVs, smartphones and stand-alone voice assistants like Amazon Alexa allow owners to control their smart home systems, play music on command or look up information on the internet via voice queries. Using custom skills from third-party vendors, almost any company can offer a skill that supports the needs of its customers or controls its devices. But voice assistants can do more than improve everyday life; they can also help people during medical therapies. Therapeutic interventions are a vital part of most therapies, yet therapists often face difficulties when monitoring a patient's progress and results. To overcome this problem, an existing system called Albatros allows a therapist to review the patient's status. As an extension of Albatros, its features have been incorporated into a custom Alexa skill called remote interventions. Aiming to contribute to the proof of concept, the objective of this thesis is to demonstrate the development process of a custom Alexa skill that lets patients retrieve exercises and record feedback via a smart speaker, which the therapist can then access. With the addition of a notification feature, the system also helps patients remember how and when to do their exercises properly. Given the proof-of-concept nature of the project, the thesis complements the development process with an analysis of whether the ideas and features translate well to a voice-driven platform.
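
    To make the skill idea above concrete, the following is a minimal, hypothetical sketch of how such intent handlers could look with the Alexa Skills Kit SDK for Python (ask-sdk-core). The intent names, slot names, and placeholder exercise data are assumptions for illustration, not taken from the thesis or the Albatros system.

```python
# Hypothetical sketch of a "remote interventions"-style Alexa skill using
# the Alexa Skills Kit SDK for Python (ask-sdk-core). Intent names, slot
# names, and the placeholder data are illustrative assumptions only.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name, is_request_type
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response

EXERCISES = ["shoulder stretch", "breathing exercise"]  # placeholder data


class LaunchHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "Welcome back. You can ask for today's exercises or record feedback."
        return handler_input.response_builder.speak(speech).ask(speech).response


class GetExerciseHandler(AbstractRequestHandler):
    """Reads out the exercises assigned by the therapist."""
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("GetExerciseIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "Today's exercises are: " + ", ".join(EXERCISES) + "."
        return handler_input.response_builder.speak(speech).response


class RecordFeedbackHandler(AbstractRequestHandler):
    """Captures spoken feedback so the therapist can review it later."""
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("RecordFeedbackIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        slots = handler_input.request_envelope.request.intent.slots or {}
        feedback = slots["feedback"].value if "feedback" in slots else None
        # In a real skill this would be persisted, e.g. to a therapy backend.
        speech = ("Thanks, your feedback has been recorded." if feedback
                  else "Sorry, I did not catch your feedback.")
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
sb.add_request_handler(GetExerciseHandler())
sb.add_request_handler(RecordFeedbackHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```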

    A Systematic Review of Ethical Concerns with Voice Assistants

    Full text link
    Siri's introduction in 2011 marked the beginning of a wave of domestic voice assistant releases, and this technology has since become commonplace in consumer devices such as smartphones and TVs. But as their presence expands, there have also been a range of ethical concerns identified around the use of voice assistants, such as the privacy implications of having devices that are always recording and the ways that these devices are integrated into the existing social order of the home. This has created a burgeoning area of research across a range of fields including computer science, social science, and psychology. This paper takes stock of the foundations and frontiers of this work through a systematic literature review of 117 papers on ethical concerns with voice assistants. In addition to analysis of nine specific areas of concern, the review measures the distribution of methods and participant demographics across the literature. We show how some concerns, such as privacy, are operationalized to a much greater extent than others like accessibility, and how study participants are overwhelmingly drawn from a small handful of Western nations. In so doing we hope to provide an outline of the rich tapestry of work around these concerns and highlight areas where current research efforts are lacking.

    Experiencing Voice-Activated Artificial Intelligence Assistants in the Home: A Phenomenological Approach

    Get PDF
    Voice-controlled artificial intelligence (AI) assistants, such as Amazon’s Alexa or Google’s Assistant, serve as the gateway to the Internet of Things and connected home, executing the commands of their users, providing information, entertainment, utility, and convenience while enabling consumers to bypass the advertising they would typically see on a screen. This “screen-less” communication presents significant challenges for brands used to “pushing” messages to audiences in exchange for the content they seek. It also raises questions about data collection, usage, and privacy. Brands need to understand how and why audiences engage with AI assistants, as well as the risks with these devices, in order to determine how to be relevant in a voice-powered world. Because there’s little published research, a phenomenological approach was used to explore the lived meaning and shared experience of having an AI assistant in the home. Three overarching types of experiences with Alexa were revealed: removing friction, enabling personalization, and extending self and enriching life. These experiences encapsulated two types of explicit and implicit goals satisfied through interaction with Alexa, those that related to “Helping do,” focused on functional elements or tasks that Alexa performed, and those related to “Helping become,” encapsulating the transformative results of experiences with Alexa enabling users to become better versions of themselves. This is the first qualitative study to explore the meaning of interacting with AI assistants, and establishes a much-needed foundation of consumer understanding, rooted in the words and perspectives of the audience themselves, on which to build future research. Advisor: Aleidine Moelle

    PrivExtractor: Towards Redressing the Imbalance of Understanding Between Virtual Assistant Users and Vendors

    Get PDF
    The use of voice-controlled virtual assistants (VAs) is significant, and user numbers increase every year. Extensive use of VAs has provided the large, cash-rich technology companies who sell them with another way of consuming users' data, providing a lucrative revenue stream. Whilst these companies are legally obliged to treat users' information "fairly and responsibly," artificial intelligence techniques used to process data have become incredibly sophisticated, leading to users' concerns that a lack of clarity is making it hard to understand the nature and scope of data collection and use. There has been little work undertaken on a self-contained user awareness tool targeting VAs. PrivExtractor, a novel web-based awareness dashboard for VA users, intends to redress this imbalance of understanding between the data "processors" and the user. It aims to achieve this using the four largest VA vendors as a case study and providing a comparison function that examines the four companies' privacy practices and their compliance with data protection law. As a result of this research, we conclude that the companies studied are largely compliant with the law, as expected. However, the user remains disadvantaged due to the ineffectiveness of current data regulation that does not oblige the companies to fully and transparently disclose how and when they use, share, or profit from the data. Furthermore, the software tool developed during the research is, we believe, the first that is capable of a comparative analysis of VA privacy with a visual demonstration to increase ease of understanding for the user.
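
    As a loose illustration of the comparison idea described above (not PrivExtractor's actual implementation, which the abstract does not show), a vendor-versus-criteria privacy matrix might be modelled as follows; the vendors, criteria, and scores are placeholders.

```python
# Hypothetical sketch of a vendor privacy-practice comparison, loosely
# inspired by the dashboard idea above. Vendors, criteria and answers are
# illustrative placeholders, not PrivExtractor's real data or logic.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PrivacyAssessment:
    vendor: str
    criteria: Dict[str, bool]  # criterion name -> appears to meet requirement?


CRITERIA = [
    "states retention period",
    "allows voice-recording deletion",
    "discloses third-party sharing",
    "explains profiling/advertising use",
]

assessments = [
    PrivacyAssessment("Vendor A", {c: True for c in CRITERIA}),
    PrivacyAssessment("Vendor B", {c: c != "explains profiling/advertising use"
                                   for c in CRITERIA}),
]


def compare(items: List[PrivacyAssessment]) -> None:
    """Print a simple compliance matrix, one row per criterion."""
    for criterion in CRITERIA:
        cells = "  ".join(
            f"{a.vendor}: {'yes' if a.criteria.get(criterion, False) else 'no'}"
            for a in items
        )
        print(f"{criterion:40s} {cells}")


compare(assessments)
```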

    A sociophonetic analysis of female-sounding virtual assistants

    Get PDF
    As conversational machines (e.g., Apple's Siri and Amazon's Alexa) are increasingly anthropomorphized by humans and viewed as active interlocutors, questions arise about the social information indexed by machine voices. This thesis provides a preliminary exploration of the relationship between human sociophonetics, social expectations, and conversational machine voices. An in-depth literature review (a) explores human relationships with and expectations for real and movie robots, (b) discusses the rise of conversational machines, (c) assesses the history of how female human voices have been perceived, and (d) details social-indexical properties associated with F0, vowel formants (F1 and F2), -ING pronunciation, and /s/ center of gravity in human speech. With background context in place, Siri and Alexa's voices were recorded reciting various sentences and passages and analyzed for each of the aforementioned vocal features. Results suggest that sociolinguistic data from studies on human voices could inform hypotheses about how users might characterize conversational machine voices and encourage further consideration of how human and machine sociophonetics might influence each other.
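
    For readers curious how acoustic measures like these are commonly extracted, below is a minimal sketch using parselmouth, a Python interface to Praat. The file name, time points, and segment boundaries are placeholder assumptions; this is not the thesis's actual analysis pipeline.

```python
# Minimal sketch of extracting the acoustic measures named above
# (F0, F1/F2 formants, /s/ spectral centre of gravity) with parselmouth.
# File name and time points are placeholder assumptions.
import numpy as np
import parselmouth

snd = parselmouth.Sound("alexa_passage.wav")  # hypothetical recording

# Fundamental frequency (F0): mean over voiced frames only.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
mean_f0 = np.mean(f0[f0 > 0])

# Vowel formants (F1, F2) sampled at an assumed vowel midpoint of 0.5 s.
formants = snd.to_formant_burg()
t = 0.5
f1 = formants.get_value_at_time(1, t)
f2 = formants.get_value_at_time(2, t)

# /s/ centre of gravity: spectral mean of an assumed fricative interval.
s_segment = snd.extract_part(from_time=1.20, to_time=1.35)
cog = s_segment.to_spectrum().get_center_of_gravity(power=2.0)

print(f"mean F0 = {mean_f0:.1f} Hz, F1 = {f1:.0f} Hz, "
      f"F2 = {f2:.0f} Hz, /s/ CoG = {cog:.0f} Hz")
```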

    Investigating Personal Intelligent Agents in Everyday Life through a Behavioral Lens

    Full text link
    Personal intelligent agents (PIA), such as Apple’s Siri, Google Now, Facebook’s M, and Microsoft’s Cortana, are pervading our lives. These systems are taking the shape of a companion, and acting on our behalf to help us manage our everyday activities. The proliferation of these PIAs is largely due to their wide availability on mobile devices which themselves have become commonly available for billions of people. Our continuous interaction with these PIAs is impacting our sense of self, sense of being human, perception of technology, and relationships with others. The Information Systems (IS) literature on PIAs has been scarce. In this dissertation, we investigate the users’ relationship with PIAs in pre- and post-adoption contexts. We create and develop scales for two new constructs, perceived intelligence and perceived anthropomorphism, which are essential to investigate users’ holistic experience with PIAs and similar systems. We also investigate perceptions of self-extension and possible antecedents of self-extension for the first time in IS. Additionally, we explore design issues with PIAs and examine voice and humor, which are independently present in currently available PIAs. Humor is a pervasive social phenomenon that shapes the dynamics of human interactions and is investigated for the first time in an IS experiment. We find that the current adoption and continuance of use models may not be sufficient to investigate the adoption and continuance of use of PIAs and similar systems since they do not capture the whole interaction between the user and the PIA. Our results underline the important role of the new perceptions, the utilitarian and hedonic aspects of use, and the cognitive and emotional trust in these social actors. Our findings highlight an astonishing change in the users’ perception of technology from being a tool distant from the self to a tool that they develop emotional connections with and consider part of their self-identity. This dissertation’s findings provide interesting theoretical and practical implications and stress a changing relationship between the user and the technology with this new wave of systems. Our research answers important questions in the context of PIAs’ adoption and continued use, contributes to various streams in the IS literature (adoption, continuance of use, trust, intelligence, anthropomorphism, dual-purpose IS, and self-extension) and creates new opportunities for future research.