    Hierarchical Task Recognition and Planning in Smart Homes with Partial Observability

    Older adults with cognitive impairment place a significant burden on their families and on society because of the cost of care and the labor it requires. Developing intelligent assistant agents (IAAs) in smart homes that help such people accomplish activities of daily living (ADLs) independently has therefore attracted substantial attention from both academia and industry. Ideally, an IAA should recognize an older adult’s goals and reason about the further steps needed to achieve them. This paper proposes a goal recognition and planning algorithm to support an IAA in a smart home. The algorithm addresses several important issues. First, it handles partial observability through Bayesian inference for step recognition. Even advanced sensors are not guaranteed to be fully reliable, and, because of limited accessibility or privacy concerns, not all attributes of physical objects can be measured by sensors. The proposed algorithm can reason about ongoing goals even when some sensors are missing or unreliable. Second, the algorithm reasons about concurrent goals. In everyday life, a person typically pursues multiple tasks, switching back and forth among them. Based on context, the proposed algorithm assigns each step to the correct goal and keeps track of that goal’s ongoing status. The context comprises the status of ongoing goals, inferred by a recognition procedure, and the desired next steps and tasks, obtained through a planning procedure. Last but not least, the algorithm handles incorrectly executed steps. Older adults with cognitive impairment commonly execute unrelated or wrong steps while pursuing a goal, so a dedicated module detects such wrong steps and prompts the person with the correct ones. The algorithm is based on a Hierarchical Task Network (HTN), whose knowledge base is composed of methods (for tasks) and operators (for steps). This hierarchical modeling of tasks and steps enables the algorithm to handle partially ordered subtasks and alternative plans. Furthermore, the preconditions of methods and operators make it possible to generate feasible hints about next steps and tasks while accounting for uncertainty in the belief state. In the experiments, a simulator provides virtual sensors and a virtual human who executes a sequence of steps predefined in a test case. The algorithm is evaluated on many simulated cases of varying difficulty: a single goal with correct steps is an easy case, multiple goals with wrong steps make the problem harder, and cases with missing sensors are also examined. The results show that the algorithm works very well on simple cases, achieving nearly 100% accuracy, and even for the hardest cases its performance remains acceptable when sensor reliability is above 0.95. The test cases with missing sensors also provide meaningful guidelines for configuring sensors for an intelligent assistant agent.
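
    To make the abstract's description of step recognition under partial observability more concrete, the following is a minimal sketch, not the authors' implementation: candidate steps are scored by Bayes' rule from noisy binary sensor readings, and attributes without a sensor simply contribute no evidence. All names (Step, sensor_reliability, the kettle example) are illustrative assumptions.

# Minimal sketch (assumed, not from the paper) of Bayesian step recognition
# with unreliable and missing binary sensors.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    effects: dict  # attribute -> expected boolean value after the step

def observation_likelihood(reading, true_value, reliability):
    """P(reading | true_value) for a sensor that reports the true value
    with probability `reliability` and flips it otherwise."""
    return reliability if reading == true_value else 1.0 - reliability

def step_posterior(steps, priors, readings, reliability):
    """Posterior P(step | readings) by Bayes' rule.

    `readings` maps attribute -> observed value; attributes with no sensor
    are simply absent and contribute no evidence.
    """
    posterior = {}
    for step in steps:
        likelihood = 1.0
        for attr, expected in step.effects.items():
            if attr in readings:  # skip unobserved attributes
                likelihood *= observation_likelihood(readings[attr], expected, reliability)
        posterior[step.name] = priors[step.name] * likelihood
    total = sum(posterior.values()) or 1.0
    return {name: p / total for name, p in posterior.items()}

# Example: two candidate steps; the stove sensor is missing.
steps = [
    Step("fill_kettle", {"tap_on": True, "kettle_in_sink": True}),
    Step("boil_water",  {"stove_on": True, "kettle_on_stove": True}),
]
priors = {"fill_kettle": 0.5, "boil_water": 0.5}
readings = {"tap_on": True, "kettle_in_sink": True, "kettle_on_stove": False}
print(step_posterior(steps, priors, readings, reliability=0.95))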
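
    The abstract also describes an HTN knowledge base of methods (for tasks) and operators (for steps) whose preconditions are checked against an uncertain belief state to produce hints about feasible next steps. The sketch below illustrates that idea under invented names and a hypothetical probability threshold; it is not the paper's algorithm.

# Minimal sketch (assumed) of an HTN-style knowledge base and precondition
# checking against a probabilistic belief state.
from dataclasses import dataclass, field

@dataclass
class Operator:            # primitive step
    name: str
    preconditions: dict    # attribute -> required boolean value
    effects: dict          # attribute -> resulting boolean value

@dataclass
class Method:              # one way of decomposing a task into subtasks
    task: str
    subtasks: list         # ordered here; partial orders are also possible
    preconditions: dict = field(default_factory=dict)

def feasible_steps(operators, belief, threshold=0.8):
    """Return operators whose preconditions hold with high enough probability.

    `belief` maps attribute -> P(attribute is True), reflecting the
    uncertainty left after the recognition step.
    """
    hints = []
    for op in operators:
        prob = 1.0
        for attr, required in op.preconditions.items():
            p_true = belief.get(attr, 0.5)  # unknown attribute: uninformative prior
            prob *= p_true if required else 1.0 - p_true
        if prob >= threshold:
            hints.append((op.name, prob))
    return sorted(hints, key=lambda x: -x[1])

# Example: suggest the next step of a "make_tea" task given the current belief.
operators = [
    Operator("boil_water",  {"kettle_filled": True},  {"water_boiled": True}),
    Operator("fill_kettle", {"kettle_filled": False}, {"kettle_filled": True}),
]
make_tea = Method("make_tea", ["fill_kettle", "boil_water", "steep_tea"])
belief = {"kettle_filled": 0.9}
print(feasible_steps(operators, belief))  # -> [("boil_water", 0.9)]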