1,367 research outputs found

    Calendar.help: Designing a Workflow-Based Scheduling Agent with Humans in the Loop

    Although information workers may complain about meetings, they are an essential part of their work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant who executes them as unstructured macrotasks. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity.
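The microtask decomposition with human fallback that the abstract describes can be sketched as follows. This is a minimal illustration of the pattern, not Calendar.help's implementation; all names (`Microtask`, `run_workflow`, the toy handlers) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Microtask:
    name: str
    automated: bool  # whether software can complete this step

def run_workflow(tasks, automate, ask_human):
    """Execute each microtask, preferring automation over human effort."""
    log = []
    for task in tasks:
        if task.automated:
            log.append((task.name, automate(task)))
        else:
            # Unusual or ambiguous steps fall back to a human worker.
            log.append((task.name, ask_human(task)))
    return log

# Toy handlers standing in for real automation and a human assistant.
workflow = [
    Microtask("extract attendees", automated=True),
    Microtask("find free slots", automated=True),
    Microtask("resolve ambiguous request", automated=False),
]
result = run_workflow(
    workflow,
    automate=lambda t: f"auto:{t.name}",
    ask_human=lambda t: f"human:{t.name}",
)
```

The key design point the abstract reports is exactly this split: repeatable steps are cheap to automate, while a human fallback keeps unusual requests from failing.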

    Role of individual differences in dialogue engineering for automated telephone services


    TASK HANDOFF BETWEEN HUMANS AND AUTOMATION

    The Department of Defense (DOD) seeks to incorporate human-automation teaming to decrease human operators' cognitive workload, especially in the context of future vertical lift (FVL). Researchers created a Wizard of Oz study to observe how human behavior changed as task difficulty and levels of automation increased. The platform used for the study was a firefighting strategy software game called C3Fire. Participants were paired with a confederate acting as an automated agent to observe the participant's behavior in a human-automation team. The independent variables were automation level (within; low, medium, high) and cueing (between; uncued, cued). The dependent variables were the number of messages transmitted to the confederate, the number of tasks embedded in those messages (tasks handed off), and the participant's self-reported cognitive workload score. The results indicated that as the confederate increased its scripted level of automation, the number of tasks handed off to automation increased. However, the number of messages transmitted to automation and the subjective cognitive workload remained the same. The findings suggest that while human operators were able to bundle tasks, cognitive workload remained relatively unchanged. The results imply that the automation level may have less impact on cognitive workload than anticipated. Approved for public release. Distribution is unlimited.

    It's Good to Talk: A Comparison of Using Voice Versus Screen-Based Interactions for Agent-Assisted Tasks

    Voice assistants have become hugely popular in the home as domestic and entertainment devices. Recently, there has been a move towards developing them for work settings. For example, Alexa for Business and IBM Watson for Business were designed to improve productivity by assisting with various tasks, such as scheduling meetings and taking minutes. However, this kind of assistance is largely limited to planning and managing users' work. How might they be developed to do more by way of empowering people at work? Our research is concerned with achieving this by developing an agent with the role of a facilitator that assists users during an ongoing task. Specifically, we were interested in whether the modality in which the agent interacts with users makes a difference: how does a voice versus screen-based agent interaction affect user behavior? We hypothesized that voice would be more immediate and emotive, resulting in more fluid conversations and interactions. Here, we describe a user study comparing voice and screen-based interactions with a system incorporating an agent, in which pairs of participants carried out an exploratory data analysis task that required them to make sense of a series of data visualizations. The findings from the study show marked differences between the two conditions, with voice resulting in more turn-taking in discussions, more questions asked, more interactions with the system, and a tendency towards more immediate, faster-paced discussions following agent prompts. We discuss possible reasons why talking and being prompted by a voice assistant may be preferable and more effective at mediating human-human conversations, and we translate some of the key insights of this research into design implications.

    Five-Factor Model as a Predictor for Spoken Dialog Systems

    Human behavior varies widely, as does the design of spoken dialog systems (SDS). The focus of this research was the search for predictors matching a user's preference and efficiency to a specific dialog interface type in an SDS. Using personality as described by the Five-Factor Model (FFM) and the Wizard of Oz technique for delivering three system initiatives, participants interacted with each of the SDS initiatives while scheduling an airline flight. The three system initiatives were constructed as: strict system, which did not allow the user control of the interaction; mixed system, which allowed the user some control of the interaction but with a system override; and user system, which allowed the user control of the interaction. To eliminate gender bias in using the FFM as the instrument, participants were matched by gender and age. Participants were 18 to 70 years old, passed a hearing test, had no disability that prohibited use of the SDS, and were native English speakers. Participants completed an adult consent form, a 50-question personality assessment as described by the FFM, and the interaction with the SDS. Participants also completed a system preference indication form at the end of the interaction. Observations for efficiency were recorded on paper by the researcher. Although the findings did not show a definitive predictor for an SDS due to the small population sample, a multinomial regression analysis produced odds ratios supporting certain personality factors as playing important roles in a user's preference and efficiency in choosing and using an SDS, which points to an area for future research. Also, the presumption that preference and efficiency always match was not supported by the results from two of the three systems. An additional area for future research was discovered in the gender data: although not an initial part of the research, the data show promise in predicting preference and efficiency for certain SDS, and future research is indicated.
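The odds-ratio reasoning behind the abstract's multinomial regression analysis can be illustrated with a minimal sketch. The counts below are invented for illustration and are not the study's data:

```python
# Minimal sketch: an odds ratio from a 2x2 contingency table, the kind of
# per-predictor quantity a multinomial regression analysis reports.

def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Hypothetical counts: users scoring high vs. low on a personality factor,
# preferring the user-initiative system vs. the strict system.
or_high_vs_low = odds_ratio(30, 10, 15, 25)  # (30*25)/(10*15) = 5.0
```

An odds ratio above 1 would suggest the personality factor raises the odds of preferring the user-initiative system, which is the kind of conclusion the abstract says the odds ratios helped support.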

    The State of Speech in HCI: Trends, Themes and Challenges
