17 research outputs found

    Planning and acting with an integrated sense of space

    The paper describes PECAS, an architecture for intelligent systems, and its application in the Explorer, an interactive mobile robot. PECAS is a new architectural combination of information fusion and continual planning. PECAS plans, integrates, and monitors the asynchronous flow of information between multiple concurrent systems. Information fusion provides a suitable intermediary to robustly couple the various reactive and deliberative forms of processing used concurrently in the Explorer. The Explorer instantiates PECAS around a hybrid spatial model combining SLAM, visual search, and conceptual inference. This paper describes the elements of this model and demonstrates, in an implemented scenario, how PECAS provides the means for flexible control.
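    The control flow the abstract describes can be illustrated with a short Python sketch. This is not the authors' code: the names (BeliefStore, continual_planner) are hypothetical, and it only shows the general pattern of fusing asynchronous observations from concurrent subsystems and replanning whenever the fused beliefs change.

        import threading

        class BeliefStore:
            """Fuses asynchronous observations from concurrent subsystems (hypothetical)."""
            def __init__(self):
                self._beliefs = {}
                self._lock = threading.Lock()
                self.changed = threading.Event()

            def fuse(self, key, value):
                """Called concurrently by e.g. SLAM, visual search, or dialogue components."""
                with self._lock:
                    if self._beliefs.get(key) != value:
                        self._beliefs[key] = value
                        self.changed.set()          # wake the continual planner

            def snapshot(self):
                with self._lock:
                    return dict(self._beliefs)

        def continual_planner(beliefs, make_plan, execute_step):
            """Plan, execute, and monitor; replan when monitoring detects new information."""
            plan = make_plan(beliefs.snapshot())
            while plan:
                if beliefs.changed.is_set():
                    beliefs.changed.clear()
                    plan = make_plan(beliefs.snapshot())   # continual (re)planning
                    continue
                execute_step(plan.pop(0))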

    Oat β-glucan containing bread increases the glycaemic profile

    A net postprandial glucose increment beyond 2 h has been shown to improve glucose and appetite regulation at a subsequent meal. Such an improved glycaemic profile (GP) has been reported for bread containing guar gum. In the present study, three commercially available β-glucans from barley and oat were baked into yeast-leavened bread products. Only the oat β-glucan-containing bread met the criterion for β-glucan molecular weight (MW) and was included in a meal study. The three levels of oat β-glucan reduced the glycaemic index (GI) and the incremental glucose peak (iPeak) by 32–37% compared with a white wheat reference bread. Furthermore, the highest oat β-glucan level increased the GP by 66% compared with the reference bread. It is concluded that the oat β-glucans were suitable for use in baking, since the MW remained relatively high. Thus, the oat ingredient shows interesting potential for tailoring the glycaemic profile of bread products.
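    As a purely illustrative aside (not taken from the study), the glycaemic index is conventionally the incremental area under the postprandial glucose curve (iAUC) of the test food relative to that of the reference, multiplied by 100. The sketch below uses made-up iAUC values that merely land in the reported 32–37% range.

        def glycaemic_index(iauc_test, iauc_reference):
            """GI = 100 * iAUC(test) / iAUC(reference); the reference bread has GI 100 by construction."""
            return 100.0 * iauc_test / iauc_reference

        gi = glycaemic_index(iauc_test=95.0, iauc_reference=150.0)  # hypothetical values
        reduction = 100.0 - gi                                      # % reduction vs. the reference bread
        print(f"GI = {gi:.0f}, i.e. a {reduction:.0f}% reduction vs. the reference")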

    Robot task planning and explanation in open and uncertain worlds

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above it, and diagnostic knowledge on top. Knowledge in a higher layer can be used to modify knowledge in the layer(s) below. The second is that the robot should represent not just how its actions change the world, but also what it knows or believes. Its actions can have two types of knowledge effects: epistemic effects (I believe X because I saw it) and assumptions (I will assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how these ideas are implemented in a three-layer architecture on a mobile robot platform. The implementation was evaluated in five experiments on object search, mapping, and room categorization.
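    A minimal Python sketch of the two ideas, with the caveat that the names used here (KnowledgeBase, Belief, record_action_effect) are invented and do not come from the paper: a three-layer knowledge base whose higher layers may revise lower ones, and action effects recorded as either epistemic beliefs or assumptions.

        from dataclasses import dataclass, field

        @dataclass
        class KnowledgeBase:
            instance: dict = field(default_factory=dict)     # bottom: facts about this particular world
            commonsense: dict = field(default_factory=dict)  # middle: defaults such as "cups are usually in kitchens"
            diagnostic: dict = field(default_factory=dict)   # top: knowledge for explaining task failures

            def revise_instance(self, key, value):
                """A higher layer may modify a lower one, e.g. after diagnosing a failure."""
                self.instance[key] = value

        @dataclass
        class Belief:
            fact: str
            kind: str    # "epistemic" (I believe X because I saw it) or "assumption" (I'll assume X is true)

        def record_action_effect(kb, fact, observed):
            """Record what the robot knows or believes after acting, not just how the world changed."""
            kb.instance[fact] = Belief(fact, "epistemic" if observed else "assumption")

        kb = KnowledgeBase()
        record_action_effect(kb, "cup_on_table", observed=True)    # epistemic effect of a look action
        record_action_effect(kb, "door_unlocked", observed=False)  # assumption; a candidate explanation if the plan fails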