
    Planning Dialog Actions

    The problem of planning dialog moves can be viewed as an instance of the more general AI problem of planning with incomplete information and sensing. Sensing actions complicate the planning process, since such actions engender potentially infinite state spaces. We adapt the Linear Dynamic Event Calculus (LDEC) to the representation of dialog acts using insights from the PKS planner, and show how this formalism can be applied to the problem of planning mixed-initiative collaborative discourse.
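The key idea above, planning at the knowledge level so that sensing acts add "know-whether" facts instead of forcing the planner to enumerate every concrete world, can be sketched as follows. This is an illustrative toy in the spirit of PKS-style planners, not the paper's LDEC formalism; the domain, action names, and `Kw(...)` notation are invented for the example.

```python
# Toy knowledge-level planner: states are sets of known facts, and a
# sensing action (here, asking the user a question) adds a
# "know-whether" fact rather than branching over concrete worlds.
from collections import deque

def plan(init, goal, actions):
    """BFS over frozensets of known facts; returns a list of action names."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        know, steps = frontier.popleft()
        if goal <= know:
            return steps
        for name, pre, add in actions:
            if pre <= know:
                nxt = know | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Hypothetical dialog domain: the system must know the user's
# destination before it can book a ticket.
actions = [
    ("ask-destination", frozenset(), frozenset({"Kw(dest)"})),   # sensing act
    ("book-ticket", frozenset({"Kw(dest)"}), frozenset({"booked"})),
]
print(plan(frozenset(), frozenset({"booked"}), actions))
# → ['ask-destination', 'book-ticket']
```

The knowledge-state space here stays finite even though the set of possible destinations is unbounded, which is the point of planning over what is known rather than over concrete worlds.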

    A Cognitive Architecture for the Coordination of Utterances

    Dialog partners coordinate with each other to reach a common goal. The analogy with other joint activities has sparked interesting observations (e.g., about the norms governing turn-taking) and has informed studies of linguistic alignment in dialog. However, the parallels between language and action have not been fully explored, especially with regard to the mechanisms that support moment-by-moment coordination during language use in conversation. We review the literature on joint actions to show (i) what sorts of mechanisms allow coordination and (ii) which types of experimental paradigms can be informative about the nature of such mechanisms. Regarding (i), there is converging evidence that the actions of others can be represented in the same format as one’s own actions. Furthermore, the predicted actions of others are taken into account in the planning of one’s own actions. Similarly, we propose that interlocutors are able to coordinate their acts of production because they can represent their partner’s utterances. They can then use these representations to build predictions, which they take into account when planning self-generated utterances. Regarding (ii), we propose a new methodology to study interactive language. Psycholinguistic tasks that have traditionally been used to study individual language production are distributed across two participants, who either produce two utterances simultaneously or complete each other’s utterances.

    Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior

    We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on the fly by teams of human planners. A human operator then translates the agreed-upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our approach combines probabilistic generative modeling with logical plan validation, which is used to compute a highly structured prior over possible plans. This hybrid approach enables us to overcome the challenge of performing inference over the large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experiments and show that we are able to infer a human team's final plan with 83% accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work that integrates a logical planning technique within a generative model to perform plan inference. (Appears in Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI-13.)
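The hybrid idea above, a logical validity check concentrating prior mass on structurally valid plans while noisy conversation data supplies the likelihood, can be illustrated with a minimal sketch. This is not the paper's actual model; the tasks, constraints, and scoring weights are hypothetical.

```python
# Illustrative sketch: a hard logical prior (only valid plans get mass)
# combined with a noisy-channel likelihood from overheard mentions.
from itertools import permutations

def is_valid(plan, constraints):
    """Logical prior: nonzero mass only if every ordering constraint holds."""
    return all(plan.index(a) < plan.index(b) for a, b in constraints)

def likelihood(plan, mentions):
    """Each mention of a pair in the conversation is weak evidence
    that the first task precedes the second."""
    score = 1.0
    for a, b in mentions:
        score *= 2.0 if plan.index(a) < plan.index(b) else 0.5
    return score

def infer(tasks, constraints, mentions):
    """MAP inference restricted to the logically valid candidates."""
    cands = [p for p in permutations(tasks) if is_valid(p, constraints)]
    return max(cands, key=lambda p: likelihood(p, mentions))

tasks = ["triage", "search", "extract"]
constraints = [("search", "extract")]   # domain logic: search before extracting
mentions = [("triage", "search")]       # noisy evidence from the meeting
print(infer(tasks, constraints, mentions))
# → ('triage', 'search', 'extract')
```

Restricting inference to the valid candidates is what makes the small, noisy data usable: the logical prior has already eliminated most of the solution space.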

    iCORPP: Interleaved Commonsense Reasoning and Probabilistic Planning on Robots

    Robot sequential decision-making in the real world is challenging because it requires robots to simultaneously reason about the current world state and dynamics while planning actions to accomplish complex tasks. On the one hand, declarative languages and reasoning algorithms are well suited to representing and reasoning with commonsense knowledge, but they are not good at planning actions toward maximizing cumulative reward over a long, unspecified horizon. On the other hand, probabilistic planning frameworks, such as Markov decision processes (MDPs) and partially observable MDPs (POMDPs), are well suited to planning toward long-term goals under uncertainty, but they are ill-equipped to represent or reason about knowledge that is not directly related to actions. In this article, we present a novel algorithm, called iCORPP, to simultaneously estimate the current world state, reason about world dynamics, and construct task-oriented controllers. In this process, robot decision-making problems are decomposed into two interdependent (smaller) subproblems that focus on reasoning to "understand the world" and planning to "achieve the goal," respectively. Contextual knowledge is represented in the reasoning component, which makes the planning component epistemic and enables active information gathering. The developed algorithm has been implemented and evaluated both in simulation and on real robots using everyday service tasks, such as indoor navigation, dialog management, and object delivery. Results show significant improvements in scalability, efficiency, and adaptiveness compared to competitive baselines, including handcrafted action policies.
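The reason-then-plan decomposition described above can be sketched in miniature: a reasoning step maps declarative context to the dynamics of a small MDP, and a planner then runs value iteration on it. The domain, probabilities, and rewards below are invented for illustration and are not from iCORPP.

```python
# Toy reason-then-plan decomposition: contextual knowledge fixes the
# MDP's transition model, then value iteration plans against it.

def reason(context):
    """'Understand the world': map declarative context to the success
    probability of a navigate action (e.g., a crowded corridor)."""
    return 0.4 if context.get("crowded") else 0.9

def value_iterate(p_succ, gamma=0.95, iters=200):
    """Two states: 0 = start, 1 = goal (absorbing, value 0).
    One action: navigate (reward 10 on success, cost 1 on failure)."""
    v = [0.0, 0.0]
    for _ in range(iters):
        v[0] = p_succ * (10 + gamma * v[1]) + (1 - p_succ) * (-1 + gamma * v[0])
    return v[0]

quiet = value_iterate(reason({"crowded": False}))
busy = value_iterate(reason({"crowded": True}))
print(round(quiet, 2), round(busy, 2))
assert quiet > busy  # the reasoned context changes the plan's value
```

Because the reasoning component owns the contextual knowledge, the planner's model stays small: only the dynamics relevant to the current context ever reach the MDP.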

    A pollen identification expert system; an application of expert system techniques to biological identification: a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science, Massey University

    The application of expert systems techniques to biological identification has been investigated, and a system developed that assists a user to identify and count air-borne pollen grains. The present system uses a modified taxonomic data matrix as the structure for the knowledge base. This allows domain experts to easily assess and modify the knowledge using a familiar data structure. The data structure can be easily converted to rules or a simple frame-based structure if required for other applications. A method of ranking the importance of characters for identifying each taxon has been developed, which assists the system to quickly narrow an identification by rejecting or accepting candidate taxa. This method is very similar to that used by domain experts.
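The narrowing strategy described above, ranking characters and filtering candidate taxa against a data matrix, can be sketched as follows. The taxa, characters, and ranking rule here are invented for illustration; the thesis's actual matrix and ranking method may differ.

```python
# Minimal sketch of identification against a taxonomic data matrix:
# at each step, ask about the character that best splits the
# remaining candidates, then reject taxa that disagree.

def best_character(candidates, matrix, asked):
    """Rank unasked characters by how evenly they split the candidates
    (lower imbalance = more discriminative question)."""
    def imbalance(ch):
        yes = sum(1 for t in candidates if matrix[t][ch])
        return abs(yes - (len(candidates) - yes))
    chars = next(iter(matrix.values()))
    return min((c for c in chars if c not in asked), key=imbalance)

def identify(matrix, observe):
    """observe(character) returns the specimen's value for that character."""
    chars = list(next(iter(matrix.values())))
    candidates = set(matrix)
    asked = set()
    while len(candidates) > 1 and len(asked) < len(chars):
        ch = best_character(candidates, matrix, asked)
        asked.add(ch)
        answer = observe(ch)
        candidates = {t for t in candidates if matrix[t][ch] == answer}
    return candidates

matrix = {  # hypothetical taxa and characters, not real palynology
    "pine":  {"spherical": False, "air-sacs": True},
    "birch": {"spherical": True,  "air-sacs": False},
    "oak":   {"spherical": True,  "air-sacs": False},
}
specimen = {"spherical": False, "air-sacs": True}
print(identify(matrix, lambda ch: specimen[ch]))
# → {'pine'}
```

Because the matrix is just a table of taxa against characters, a domain expert can review or edit it directly, which is the maintainability argument the abstract makes.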

    Action Selection for Interaction Management: Opportunities and Lessons for Automated Planning

    The central problem in automated planning---action selection---is also a primary topic in the dialogue systems research community; however, the nature of research in that community differs significantly from that in planning, with a focus on end-to-end systems and user evaluations. In particular, numerous toolkits are available for developing speech-based dialogue systems that include not only a method for representing states and actions, but also a mechanism for reasoning over and selecting actions, often combined with a technical framework designed to simplify the task of creating end-to-end systems. We contrast this situation with that of automated planning, and argue that the dialogue systems community could benefit from some of the directions adopted by the planning community, and that there also exist opportunities and lessons for automated planning.

    Learning and Reasoning for Robot Sequential Decision Making under Uncertainty

    Robots frequently face complex tasks that require more than one action, making sequential decision-making (SDM) capabilities necessary. The key contribution of this work is a robot SDM framework, called LCORPP, that supports the simultaneous capabilities of supervised learning for passive state estimation, automated reasoning with declarative human knowledge, and planning under uncertainty toward achieving long-term goals. In particular, we use a hybrid reasoning paradigm to refine the state estimator and to provide informative priors for the probabilistic planner. In experiments, a mobile robot is tasked with estimating human intentions from their motion trajectories, declarative contextual knowledge, and human-robot interaction (dialog-based and motion-based). Results suggest that, in both efficiency and accuracy, our framework outperforms its no-learning and no-reasoning counterparts in an office environment. (In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 202.)
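The fusion described above, a learned state estimator refined by priors from declarative reasoning, reduces in its simplest form to a Bayes update. The intents, rules, and probabilities below are invented for illustration and are not LCORPP's actual model.

```python
# Sketch of fusing a learned likelihood over human intents with a
# prior produced by declarative reasoning about context.

def reasoned_prior(context):
    """Declarative knowledge: near closing time, people tend to leave."""
    if context.get("near_closing"):
        return {"approach": 0.2, "leave": 0.8}
    return {"approach": 0.5, "leave": 0.5}

def fuse(likelihood, prior):
    """Bayes rule: posterior proportional to likelihood * prior."""
    post = {i: likelihood[i] * prior[i] for i in prior}
    z = sum(post.values())
    return {i: p / z for i, p in post.items()}

# A stand-in for the supervised trajectory classifier: on its own,
# the motion evidence is nearly ambiguous between the two intents.
likelihood = {"approach": 0.55, "leave": 0.45}
post = fuse(likelihood, reasoned_prior({"near_closing": True}))
print(max(post, key=post.get))
# → leave (the reasoned prior resolves the ambiguity)
```

This is why the no-reasoning baseline suffers: without the contextual prior, the near-tied likelihood alone would pick the other intent.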