103 research outputs found

    Facilitating Reliable Autonomy with Human-Robot Interaction

    Autonomous robots are increasingly deployed to complex environments in which we cannot predict all possible failure cases a priori. Robustness to failures can be provided by humans enacting the roles of: (1) developers who can iteratively incorporate robustness into the robot system, (2) collocated bystanders who can be approached for aid, and (3) remote teleoperators who can be contacted for guidance. However, assisting the robot in any of these roles can place demands on the time or effort of the human. This dissertation develops modules to reduce the frequency and duration of failure interventions in order to increase the reliability of autonomous robots, while also reducing the demand on humans. In pursuit of that goal, the dissertation makes the following contributions: (1) A development paradigm for autonomous robots that separates task specification from error recovery. The paradigm reduces burden on developers while making the robot robust to failures. (2) A model for gauging the interruptibility of collocated humans. A human-subjects study shows that using the model can reduce the time expended by the robot during failure recovery. (3) A human-subjects experiment on the effects of decision support provided to remote operators during failures. The results show that humans need both diagnosis and action recommendations as decision support during an intervention. (4) An evaluation of model features and unstructured Machine Learning (ML) techniques in pursuit of learning robust suggestion models from intervention data, in order to reduce developer effort. The results indicate that careful crafting of features can lead to improved performance, but that without such feature selection, current ML algorithms lack robustness in addressing a domain where the robot's observations are heavily influenced by the user's actions.
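    The first contribution centers on keeping the task specification separate from error recovery. The dissertation's actual interface is not reproduced here; the following is a minimal Python sketch of one way such a separation could be structured, with invented names (RecoveryRegistry, run_task, GraspFailure) used purely for illustration.

    # Minimal sketch (not the dissertation's actual API): recovery policies are
    # registered independently of the task, so developers can add robustness
    # without editing the task specification itself. All names are invented.

    class RecoveryRegistry:
        """Maps failure types to recovery handlers, defined independently of tasks."""
        def __init__(self):
            self._handlers = {}

        def register(self, failure_name, handler):
            self._handlers[failure_name] = handler

        def recover(self, failure_name, context):
            handler = self._handlers.get(failure_name)
            return handler(context) if handler else False


    def run_task(steps, recovery):
        """Execute the task specification; consult the registry only on failure."""
        for step in steps:
            try:
                step()
            except Exception as failure:
                if not recovery.recover(type(failure).__name__, {"step": step}):
                    raise  # unrecoverable: escalate to a bystander or teleoperator


    class GraspFailure(Exception):
        pass

    def approach_table():   # placeholder task step
        pass

    def pick_up_mug():      # placeholder step that fails
        raise GraspFailure("gripper closed on nothing")

    registry = RecoveryRegistry()
    registry.register("GraspFailure", lambda ctx: True)  # e.g. retry with a new grasp pose
    run_task([approach_table, pick_up_mug], registry)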

    Engagement-aware computing: Modelling user engagement from mobile contexts

    In this paper, we examine the potential of using mobile context to model user engagement. Taking an experimental approach, we systematically explore the dynamics of user engagement with a smartphone through three different studies. Specifically, to understand the feasibility of detecting user engagement from mobile context, we first conduct an EEG study with 10 users and observe a strong correlation between automatically detected engagement scores and users' subjective perception of engagement. Grounded in this result, we model a set of application-level features derived from the smartphone usage of 10 users to detect the engagement of a usage session using a Random Forest classifier. Finally, we apply this model to smartphone usage logs of 130 users and train an SVM classifier on a variety of contextual factors to predict user engagement with an F1-score of 0.82. Our experimental results highlight the potential of mobile contexts in designing engagement-aware applications and provide guidance for future explorations.
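    The two-stage pipeline described above (a Random Forest over application-level usage features, then an SVM over broader contextual factors) can be sketched with scikit-learn as below. This is an illustration only: the feature sets, data shapes, and labels are synthetic stand-ins, not the paper's data or exact setup.

    # Hedged sketch of a two-stage engagement pipeline; features and labels are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Stage 1: application-level usage features -> per-session engagement label
    # (labels would come from the EEG-grounded study in the paper).
    app_features = rng.random((200, 8))        # e.g. session length, app switches, ...
    engagement = rng.integers(0, 2, 200)
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("RF  F1:", cross_val_score(rf, app_features, engagement, scoring="f1").mean())

    # Stage 2: broader contextual factors (time of day, location cluster, ...),
    # labelled via the stage-1 model, used to train an SVM on the larger user pool.
    context_features = rng.random((1000, 12))
    context_labels = rng.integers(0, 2, 1000)  # would be produced by the stage-1 model
    svm = SVC(kernel="rbf", C=1.0)
    print("SVM F1:", cross_val_score(svm, context_features, context_labels, scoring="f1").mean())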

    The Immune System: the ultimate fractionated cyber-physical system

    In this little vision paper we analyze the human immune system from a computer science point of view with the aim of understanding the architecture and features that allow robust, effective behavior to emerge from local sensing and actions. We then recall the notion of fractionated cyber-physical systems, and compare and contrast this to the immune system. We conclude with some challenges.
    Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455

    On the Sociability of a Game-Playing Agent: A Software Framework and Empirical Study

    The social element of playing games is what makes us play together and enjoy more than just what the game itself has to offer. There are millions of games with different rules and goals; they are played by people of many cultures and various ages. However, this social element remains crucial. Nowadays, the role of social robots and virtual agents is rapidly expanding in daily activities and entertainment, and one of these areas is games. Therefore, it seems desirable for an agent to be able to play games socially, as opposed to simply having the computer make the moves in a game application. To achieve this goal, verbal and non-verbal communication should be inspired by the game events and human input, to create a human-like social experience. Moreover, a better social interaction can be created if the agent can change its game strategies in accordance with social criteria. To bring sociability to the gaming experience with many different robots, virtual agents and games, we have developed a generic software framework which generates social comments based on the gameplay semantics. We also conducted a user study, with this framework as a core component, involving the rummy card game and the checkers board game. In our analysis, we observed both subjective and objective measures of the effects of social gaze and comments in the gaming interactions. Participants' gaming experience proved to be significantly more social, human-like, enjoyable and adoptable when social behaviors were employed. Moreover, since facial expressions can be a strong indication of internal state, we measured the number of participants' smiles during the gameplay and observed that they smiled significantly more when social behaviors were involved than when they were not.
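    The abstract describes a generic framework that generates social comments from gameplay semantics. As a rough illustration of that idea only (not the authors' implementation), the sketch below maps semantic game events to comment templates; the event names and phrases are invented.

    # Illustrative sketch of a game-agnostic social-comment layer; all event
    # names and comment templates are invented for the example.
    import random

    COMMENT_TEMPLATES = {
        "player_good_move": ["Nice move!", "I did not see that coming."],
        "agent_winning":    ["I like my position right now."],
        "game_over_loss":   ["Well played, you got me this time."],
    }

    def social_comment(event):
        """Return a comment for a semantic gameplay event, or None to stay silent."""
        options = COMMENT_TEMPLATES.get(event)
        return random.choice(options) if options else None

    # The game engine reports semantic events; the robot or virtual agent voices them.
    print(social_comment("player_good_move"))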