
    MODULAR NETWORK SOM (MNSOM): A NEW POWERFUL TOOL IN NEURAL NETWORKS

    In this paper, a new powerful method in artificial neural networks, called the modular network SOM (mnSOM), is introduced. The mnSOM is a generalization of the Self-Organizing Map (SOM), formed by replacing each vector unit of the SOM with a function module. The function module can be a multilayer perceptron, a recurrent neural network, or even a SOM itself. This flexibility makes the mnSOM a powerful new tool in artificial neural networks.
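    The training scheme the abstract implies (a module-wise error, a winner module per input class, neighborhood-weighted updates) can be illustrated with a short sketch. The following is a minimal NumPy sketch under stated assumptions: linear function modules stand in for the MLPs or RNNs the paper allows, the neighborhood is Gaussian, and all names (LinearModule, train_mnsom) are illustrative, not from the paper.

        # Minimal mnSOM sketch: a grid of function modules trained SOM-style.
        import numpy as np

        class LinearModule:
            """One function module on the map: y = W x + b (stand-in for an MLP)."""
            def __init__(self, n_in, n_out, rng):
                self.W = rng.normal(scale=0.1, size=(n_out, n_in))
                self.b = np.zeros(n_out)

            def error(self, X, Y):
                return np.mean((X @ self.W.T + self.b - Y) ** 2)

            def update(self, X, Y, strength, lr=0.05):
                grad = X @ self.W.T + self.b - Y            # prediction error
                self.W -= lr * strength * (grad.T @ X) / len(X)
                self.b -= lr * strength * grad.mean(axis=0)

        def train_mnsom(datasets, grid=(4, 4), epochs=50, sigma=1.0, seed=0):
            """datasets: one (X, Y) input-output pair per class/sequence."""
            rng = np.random.default_rng(seed)
            coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
            n_in, n_out = datasets[0][0].shape[1], datasets[0][1].shape[1]
            modules = [LinearModule(n_in, n_out, rng) for _ in coords]
            for _ in range(epochs):
                for X, Y in datasets:
                    errs = [m.error(X, Y) for m in modules]
                    winner = coords[int(np.argmin(errs))]      # best-matching module
                    d2 = ((coords - winner) ** 2).sum(axis=1)  # squared grid distances
                    for m, h in zip(modules, np.exp(-d2 / (2 * sigma**2))):
                        m.update(X, Y, h)                      # winner and neighbors learn
            return modules, coords

    As in an ordinary SOM, nearby modules end up learning similar functions, so the map self-organizes into regions of related dynamics.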

    Task Segmentation in a Mobile Robot by mnSOM and Hierarchical Clustering

    Our previous studies assigned labels to mnSOM modules based on the assumption that winner modules corresponding to subsequences in the same class share the same label. We propose segmentation using hierarchical clustering on the trained mnSOM. Because this approach does not need that unrealistic assumption, it gains practical importance, at the cost of a 1.2% drop in segmentation performance. We also compare task-segmentation performance for two kinds of module architecture in the mnSOM: the architecture that uses sensory-motor signals as target outputs outperforms the one that uses only sensory signals as target outputs.
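    The clustering step can be sketched briefly. The abstract does not give the distance measure or linkage, so the following assumes Ward linkage over each module's flattened parameters (continuing the LinearModule sketch above); subsequences whose winner modules fall in the same cluster receive the same segment label.

        # Hierarchical clustering over trained mnSOM modules (SciPy).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def cluster_modules(module_feats, n_clusters):
            """module_feats: one flattened parameter vector per module."""
            Z = linkage(module_feats, method="ward")           # assumed linkage
            return fcluster(Z, t=n_clusters, criterion="maxclust")

        def segment(winner_indices, module_labels):
            """Map each subsequence's winner-module index to a cluster label."""
            return [int(module_labels[w]) for w in winner_indices]

        # Usage with placeholder features for a 4x4 map of modules:
        feats = np.random.default_rng(0).normal(size=(16, 6))
        labels = cluster_modules(feats, n_clusters=3)
        print(segment([0, 5, 5, 12], labels))   # segment labels for four subsequences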

    A New Approach to Task Segmentation in Mobile Robots by mnSOM

    No full text

    Human-Inspired Robot Task Teaching and Learning

    Current methods of robot task teaching and learning have several limitations: highly trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks utilizing the same system; and the teacher's expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will then be able to adapt its learned tasks to new task setups. The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot's attention to perceive task-related key state changes, and give timely feedback when the robot is practicing the task, while the robot can reveal its learning progress and refine its knowledge based on the user's feedback.

    The human-inspired method consists of six teaching and learning stages:
    1) checking and teaching the background knowledge the robot needs;
    2) introducing the overall task to be taught: the hierarchical task structure, the involved objects, and the robot hand actions;
    3) teaching the task step by step and directing the robot to perceive important state changes;
    4) demonstrating the task as a whole, offering vocal subtask-segmentation cues at subtask transitions;
    5) robot learning of the taught task, using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign the obtained trajectory episodes (segments) to the introduced subtasks, and generalization of the taught trajectories in different reference frames; and
    6) robot practicing of the learned task and refinement of its task knowledge according to the teacher's timely feedback, where adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from the pertinent frames (sketched below).

    An agent-based architecture was designed and developed to implement this teaching and learning method. Its interactive human-robot teaching interface subsystem is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector, allowing the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system used for the main human-robot interaction.

    A user study involving ten human subjects was performed on two tasks, evaluating the system by the time the subjects spent on each teaching stage, efficiency measures of the robot's understanding of the users' vocal requests, responses, and feedback, and the subjects' subjective evaluations. Another set of experiments analyzed the robot's ability to adapt previously learned tasks to new task setups, using measures such as object, target, and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets.

    The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and to give timely feedback on the robot's practice performance. The robot learned the tasks as expected, properly refined its task knowledge based on the teacher's feedback, and successfully applied the refined knowledge in subsequent practices. It also adapted its learned tasks to new setups that differed considerably from those in the demonstration: the alignments of objects on targets were quite close to those taught, and the executed grasp and release poses relative to objects and targets were almost identical to the taught poses. The task-learning ability was limited by the vision-based human-robot teleoperation interface used in hand-to-hand teaching and by the robot's capacity to sense its workspace. Future work will investigate robot learning of a wider variety of tasks and the use of more of the robot's built-in primitive skills.
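    The frame-blending idea in stage 6 can be made concrete with a small sketch. This is an illustrative reading, not the paper's implementation: the taught trajectory is represented relative to two pertinent frames (the object and the target), each version is mapped into the new setup's world coordinates, and the two are blended with a weight that shifts from the object frame at the start of the motion to the target frame at the end.

        # Sketch: adapting a taught trajectory to a new setup by blending
        # the trajectories generated from the object and target frames.
        # All names and the linear blending schedule are assumptions.
        import numpy as np

        def to_world(traj_local, origin, rot):
            """Map a trajectory from a local reference frame into world coordinates."""
            return traj_local @ rot.T + origin

        def blend(traj_in_obj, traj_in_tgt, obj_pose, tgt_pose):
            """obj_pose/tgt_pose: (origin, rotation) of each frame in the new setup."""
            T = len(traj_in_obj)
            w = np.linspace(1.0, 0.0, T)[:, None]   # weight on the object frame
            return (w * to_world(traj_in_obj, *obj_pose)
                    + (1 - w) * to_world(traj_in_tgt, *tgt_pose))

        # Usage: replay a 2-D reach taught relative to both frames in a setup
        # where the object and target have moved (poses are placeholders).
        demo = np.linspace([0.0, 0.0], [0.2, 0.1], 50)
        path = blend(demo, demo,
                     obj_pose=(np.array([0.5, 0.3]), np.eye(2)),
                     tgt_pose=(np.array([0.9, 0.6]), np.eye(2)))

    The blend keeps the grasp-phase motion anchored to the object and the release-phase motion anchored to the target, which is what lets the same taught task generalize when either pose changes.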