
    Logical Learning Through a Hybrid Neural Network with Auxiliary Inputs

    The human reasoning process is seldom a one-way path from an input to an output. Instead, it often involves systematic deduction that rules out other possible outcomes as a self-checking mechanism. In this paper, we describe the design of a hybrid neural network for logical learning that mimics this aspect of human reasoning through the introduction of auxiliary inputs, namely the indicators, which act as hints suggesting logical outcomes. We generate these indicators by mining the hidden information buried in the original training data for direct or indirect suggestions. We use the MNIST data to demonstrate the design and use of these indicators in a convolutional neural network, and we train a series of such hybrid neural networks with variations of the indicators. Our results show that these hybrid neural networks are very robust in generating logical outcomes, with inherently higher prediction accuracy than the direct use of the original input and output in apparent models. This improved predictability, with reassured logical confidence, is obtained by exhausting all possible indicators to rule out all illogical outcomes, a step that is not available in the apparent models. Our logical learning process can effectively cope with unknown unknowns by fully exploiting all existing knowledge available for learning. The design and implementation of the hints, namely the indicators, become an essential part of artificial intelligence for logical learning. We also introduce an ongoing application of this hybrid neural network in an autonomous grasping robot, as_DeepClaw, which aims at learning an optimized grasping pose through logical learning. Comment: 11 pages, 9 figures, 4 tables
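
    The abstract does not specify the network architecture, so the following is a minimal sketch, assuming a PyTorch implementation, of how an auxiliary indicator vector might be fused with CNN features before classification. All names here (HybridNet, n_indicators, the layer sizes) are hypothetical illustrations, not details from the paper.

```python
import torch
import torch.nn as nn

class HybridNet(nn.Module):
    """Minimal sketch of a hybrid CNN: image features are concatenated with
    an auxiliary indicator (hint) vector before the classification head.
    Architecture details are assumptions, not taken from the paper."""

    def __init__(self, n_indicators: int = 10, n_classes: int = 10):
        super().__init__()
        # Standard convolutional feature extractor for 28x28 MNIST images.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 32 x 14 x 14
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 64 x 7 x 7
            nn.Flatten(),
        )
        # The head sees CNN features plus the indicator hints side by side.
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7 + n_indicators, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image: torch.Tensor, indicators: torch.Tensor) -> torch.Tensor:
        features = self.conv(image)
        # Fuse the auxiliary hint vector with the learned image features.
        fused = torch.cat([features, indicators], dim=1)
        return self.head(fused)

# Usage: a batch of 8 MNIST-shaped images plus one-hot indicator hints.
model = HybridNet()
images = torch.randn(8, 1, 28, 28)
hints = torch.eye(10)[torch.randint(0, 10, (8,))]
logits = model(images, hints)
print(logits.shape)  # torch.Size([8, 10])
```

    At inference time, the paper's exhaustion of indicators could plausibly be approximated by evaluating the model once per candidate indicator and keeping only the outcomes consistent with the hint supplied; that self-checking loop is omitted from this sketch.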

    Intention and motor representation in purposive action

    Are there distinct roles for intention and motor representation in explaining the purposiveness of action? Standard accounts of action assign a role to intention but are silent on motor representation. The temptation is to suppose that nothing need be said here because motor representation is either only an enabling condition for purposive action or else merely a variety of intention. This paper provides reasons for resisting that temptation. Some motor representations, like intentions, coordinate actions in virtue of representing outcomes; but, unlike intentions, motor representations cannot feature as premises or conclusions in practical reasoning. This implies that motor representation has a distinctive role in explaining the purposiveness of action. It also gives rise to a problem: were the roles of intention and motor representation entirely independent, this would impair effective action. It is therefore necessary to explain how intentions interlock with motor representations. The solution, we argue, is to recognise that the contents of intentions can be partially determined by the contents of motor representations. Understanding this content-determining relation enables a better understanding of how intentions relate to actions.