Acquisition of Viewpoint Transformation and Action Mappings via Sequence to Sequence Imitative Learning by Deep Neural Networks
We propose an imitative learning model that allows a robot to acquire positional relations between the demonstrator and the robot, and to transform observed actions into robotic actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robotic learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks: a convolutional autoencoder (CAE) and a multiple timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera. The MTRNN is trained to integrate sensory-motor information and to predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations are formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
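The abstract describes a two-stage architecture: a convolutional autoencoder extracts visual features, which a multiple-timescale RNN then integrates with motor information to predict the next state. The following is a minimal sketch of that kind of pipeline, assuming PyTorch; the layer sizes, time constants, and the 20-dimensional feature / 8-joint split are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """CAE: compresses 64x64 camera frames into a low-dimensional visual feature."""
    def __init__(self, feat_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class MTRNNCell(nn.Module):
    """One step of a multiple-timescale RNN: fast and slow leaky-integrator units."""
    def __init__(self, in_dim, fast_dim=60, slow_dim=20, tau_fast=2.0, tau_slow=30.0):
        super().__init__()
        self.fast_dim, self.slow_dim = fast_dim, slow_dim
        self.w_in = nn.Linear(in_dim, fast_dim)            # inputs drive the fast units
        self.w_rec = nn.Linear(fast_dim + slow_dim, fast_dim + slow_dim)
        tau = torch.cat([torch.full((fast_dim,), tau_fast),
                         torch.full((slow_dim,), tau_slow)])
        self.register_buffer("tau", tau)

    def forward(self, x, u):
        # Leaky integration: slow units retain context, fast units track the input.
        drive = self.w_rec(torch.tanh(u))
        ext = torch.cat([self.w_in(x), torch.zeros(x.size(0), self.slow_dim)], dim=1)
        return (1.0 - 1.0 / self.tau) * u + (1.0 / self.tau) * (drive + ext)


# One prediction step: visual feature + joint angles in, next internal state out.
cae = ConvAutoencoder()
cell = MTRNNCell(in_dim=20 + 8)                # 20 visual dims + 8 joint angles (assumed)
_, vis = cae(torch.rand(1, 3, 64, 64))
u = torch.zeros(1, 80)
u = cell(torch.cat([vis, torch.zeros(1, 8)], dim=1), u)
```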
Brain Tumor Cells in Circulation Are Enriched for Mesenchymal Gene Expression
Glioblastoma (GBM) is a highly aggressive brain cancer characterized by local invasion and angiogenic recruitment, yet metastatic dissemination is extremely rare. Here, we adapted a microfluidic device to deplete hematopoietic cells from blood specimens of patients with GBM, uncovering evidence of circulating brain tumor cells (CTCs). Staining and scoring criteria for GBM CTCs were first established using orthotopic patient-derived xenografts (PDX), and then applied clinically: CTCs were identified in at least one blood specimen from 13/33 patients (39%; 26/87 samples). Single GBM CTCs isolated from both patients and mouse PDX models demonstrated enrichment for mesenchymal over neural differentiation markers, compared with primary GBMs. Within primary GBMs, RNA in situ hybridization identified a subpopulation of highly migratory mesenchymal tumor cells, and in a rare patient with disseminated GBM, systemic lesions were exclusively mesenchymal. Thus, a mesenchymal subset of GBM cells invades the vasculature and may proliferate outside the brain.
COVID-19 symptoms at hospital admission vary with age and sex: results from the ISARIC prospective multinational observational study
Background:
The ISARIC prospective multinational observational study is the largest cohort of hospitalized patients with COVID-19. We present relationships of age, sex, and nationality to presenting symptoms.
Methods:
International, prospective observational study of 60,109 hospitalized symptomatic patients with laboratory-confirmed COVID-19 recruited from 43 countries between 30 January and 3 August 2020. Logistic regression was performed to evaluate relationships of age and sex to published COVID-19 case definitions and the most commonly reported symptoms (see the sketch after this abstract).
Results:
"Typical" symptoms of fever (69%), cough (68%) and shortness of breath (66%) were the most commonly reported; 92% of patients experienced at least one of these. Prevalence of typical symptoms was greatest in 30- to 60-year-olds (respectively 80%, 79%, 69%; at least one 95%). They were reported less frequently in children (≤ 18 years: 69%, 48%, 23%; at least one 85%), older adults (≥ 70 years: 61%, 62%, 65%; at least one 90%), and women (66%, 66%, 64%; at least one 90%; vs. men 71%, 70%, 67%; at least one 93%; each P < 0.001). The most common atypical presentations under 60 years of age were nausea and vomiting and abdominal pain, and over 60 years the most common was confusion. Regression models showed significant differences in symptoms with sex, age and country.
Interpretation:
This international collaboration has allowed us to report reliable symptom data from the largest cohort of patients admitted to hospital with COVID-19. Adults over 60 and children admitted to hospital with COVID-19 are less likely to present with typical symptoms. Nausea and vomiting are common atypical presentations under 30 years. Confusion is a frequent atypical presentation of COVID-19 in adults over 60 years. Women are less likely to experience typical symptoms than men
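As noted in the Methods, the analysis relies on logistic regression of symptom presence on age and sex. The following is a minimal sketch of that kind of model, assuming Python with pandas and statsmodels; the synthetic data frame, column names, and age-group coding are illustrative assumptions only, not the ISARIC dataset or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per patient, symptom coded as 0/1.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "fever": rng.integers(0, 2, n),                       # 1 = symptom reported
    "age_group": rng.choice(["<18", "18-59", "60+"], n),  # categorical predictor
    "sex": rng.choice(["female", "male"], n),
})

# Logit model: probability of the symptom as a function of age group and sex.
model = smf.logit("fever ~ C(age_group) + C(sex)", data=df).fit(disp=False)
print(model.summary())
print(np.exp(model.params))  # odds ratios for each term
```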
Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language–behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, language–behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
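The key idea above is that a single recurrent network receives the interleaved language/behavior stream as one sequence and learns next-step prediction, so its context state simultaneously encodes the mapping and the current phase of the interaction. Below is a minimal sketch of that setup, assuming PyTorch; the dimensions and the simple Elman RNN are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class InteractionRNN(nn.Module):
    def __init__(self, lang_dim=10, motor_dim=6, ctx_dim=50):
        super().__init__()
        self.rnn = nn.RNN(lang_dim + motor_dim, ctx_dim, batch_first=True)
        self.readout = nn.Linear(ctx_dim, lang_dim + motor_dim)

    def forward(self, seq, h0=None):
        # seq: (batch, time, lang_dim + motor_dim) -- the actual temporal flow
        # of the task, not separated into language and behavior sets.
        ctx, h = self.rnn(seq, h0)
        return self.readout(ctx), h   # next-step prediction at every time step

net = InteractionRNN()
seq = torch.rand(1, 40, 16)                        # one interaction episode (toy data)
pred, h = net(seq[:, :-1], None)
loss = nn.functional.mse_loss(pred, seq[:, 1:])    # learn to predict the next state
loss.backward()
```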
Primacy in Stock Market Participation: The Effect of Initial Returns on Market Re-Entry Decisions
We examine whether initial returns influence investors' decisions to return to the stock market following withdrawal. Using a survival analysis technique to estimate Finnish retail investors' likelihood of stock market re-entry reveals that investors who experience lower initial returns are less likely to return, even after controlling for returns in the last month and average monthly returns for the duration of investing. This primacy effect is robust to accounting for endogeneity in investors' exit decisions, and other behavioural biases such as recency and saliency of investment experience. Individual investors appear to be subject to primacy bias and tend to put a significant weight on initial experiences in re-entry decisions.
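The abstract refers to a survival analysis of re-entry timing with initial returns and later-return controls as covariates. As one common formulation (the paper does not specify this exact estimator here), the sketch below fits a Cox proportional-hazards model with the lifelines library on synthetic data; every column name and value is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data: one row per investor who exited the market.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "months_out": rng.integers(1, 60, n),           # months since exiting the market
    "re_entered": rng.integers(0, 2, n),            # 1 = investor returned (event observed)
    "initial_return": rng.normal(0.0, 0.10, n),     # return on the first investment
    "last_month_return": rng.normal(0.0, 0.05, n),  # control: most recent return
    "avg_monthly_return": rng.normal(0.0, 0.03, n), # control: average monthly return
})

# Cox proportional-hazards model of the re-entry hazard; a positive coefficient
# on initial_return would mean higher initial returns speed up re-entry.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_out", event_col="re_entered")
cph.print_summary()
```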
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions
An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as "not," "and," and "or," simultaneously. These words do not directly refer to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logic words are represented by the model in accordance with their functions as logical operators. Words such as "true," "false," and "not" work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word "and," which required a robot to lift up both its hands, worked as if it were a universal quantifier. The word "or," which required action generation that appeared random, was represented as an unstable region of the network's dynamical system.
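The model described above is an LSTM-based network that maps a word sequence, including logic words, together with the robot's current visual/proprioceptive state, onto a motor sequence. The following is a minimal sketch of that kind of sentence-to-action model, assuming PyTorch; the vocabulary, dimensions, and encoder-decoder fusion scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SentenceToAction(nn.Module):
    def __init__(self, vocab=30, emb=16, state_dim=10, hid=64, motor_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb + state_dim, hid, batch_first=True)
        self.decoder = nn.LSTM(motor_dim, hid, batch_first=True)
        self.readout = nn.Linear(hid, motor_dim)

    def forward(self, words, state, motor_in):
        # words: (B, Tw) token ids; state: (B, state_dim); motor_in: (B, Ta, motor_dim)
        w = self.embed(words)                                 # (B, Tw, emb)
        s = state.unsqueeze(1).expand(-1, w.size(1), -1)      # repeat robot state per word
        _, (h, c) = self.encoder(torch.cat([w, s], dim=-1))   # sentence + context code
        out, _ = self.decoder(motor_in, (h, c))               # unroll the motor prediction
        return self.readout(out)

net = SentenceToAction()
words = torch.randint(0, 30, (1, 5))       # toy token ids for a short instruction
state = torch.rand(1, 10)                  # current visual / proprioceptive input
motor = torch.rand(1, 20, 8)               # target joint trajectory (toy data)
pred = net(words, state, motor[:, :-1])
loss = nn.functional.mse_loss(pred, motor[:, 1:])
loss.backward()
```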
Multimodal integration learning of robot behavior using deep neural networks
For humans to accurately understand the world around them, multimodal integration is essential because it enhances perceptual precision and reduces ambiguity. Computational models replicating such human ability may contribute to the practical use of robots in daily human living environments; however, primarily because of scalability problems that conventional machine learning algorithms suffer from, sensory-motor information processing in robotic applications has typically been achieved via modal-dependent processes. In this paper, we propose a novel computational framework enabling the integration of sensory-motor time-series data and the self-organization of multimodal fused representations based on a deep learning approach. To evaluate our proposed model, we conducted two behavior-learning experiments utilizing a humanoid robot; the experiments consisted of object manipulation and bell-ringing tasks. From our experimental results, we show that large amounts of sensory-motor information, including raw RGB images, sound spectrums, and joint angles, are directly fused to generate higher-level multimodal representations. Further, we demonstrated that our proposed framework realizes the following three functions: (1) cross-modal memory retrieval utilizing the information complementation capability of the deep autoencoder; (2) noise-robust behavior recognition utilizing the generalization capability of multimodal features; and (3) multimodal causality acquisition and sensory-motor prediction based on the acquired causality.
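The core mechanism above is a deep autoencoder that compresses concatenated modality features into a fused representation, with cross-modal retrieval performed by reconstructing from an input in which one modality is missing. The following is a minimal sketch of that idea, assuming PyTorch; the per-modality feature dimensions and the two-layer encoder/decoder are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class MultimodalAE(nn.Module):
    def __init__(self, img_dim=30, snd_dim=10, jnt_dim=8, fused_dim=20):
        super().__init__()
        in_dim = img_dim + snd_dim + jnt_dim
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, fused_dim))
        self.dec = nn.Sequential(nn.Linear(fused_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, img, snd, jnt):
        x = torch.cat([img, snd, jnt], dim=-1)
        z = self.enc(x)                  # fused multimodal representation
        return self.dec(z), z

net = MultimodalAE()
img, snd, jnt = torch.rand(1, 30), torch.rand(1, 10), torch.rand(1, 8)
recon, fused = net(img, snd, jnt)

# Cross-modal retrieval: present image features only and let the decoder
# complete the missing sound and joint parts from the fused representation.
recon_from_img, _ = net(img, torch.zeros(1, 10), torch.zeros(1, 8))
sound_estimate = recon_from_img[:, 30:40]   # reconstructed sound-feature slice
```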
CREATING NOVEL GOAL-DIRECTED ACTIONS AT CRITICALITY: A NEURO-ROBOTIC EXPERIMENT
The present study examines the possible roles of cortical chaos in generating novel actions for achieving specified goals. The proposed neural network model consists of a sensory-forward model responsible for parietal lobe functions, a chaotic network model responsible for premotor functions, and a prefrontal cortex model responsible for manipulating the initial state of the chaotic network. Experiments using a humanoid robot were performed with the model and showed that action plans for satisfying specific novel goals can be generated by diversely modulating and combining prior-learned behavioral patterns at critical dynamical states. Although this criticality resulted in fragile goal achievements in the physical environment of the robot, reinforcement of the successful trials was able to provide a substantial gain in robustness. The discussion leads to the hypothesis that the consolidation of numerous sensory-motor experiences into memory, the mediation of diverse imagery in that memory by cortical chaos, and the repeated enaction and reinforcement of newly generated effective trials are indispensable for realizing an open-ended development of cognitive behaviors.
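The mechanism sketched above is a chaotic recurrent network whose trajectories are diversified near criticality, with a prefrontal component selecting the initial state that steers the generated action toward a goal. The NumPy sketch below illustrates only that core idea with a naive initial-state search; the gain value, network size, readout, and goal criterion are illustrative assumptions and have no relation to the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
gain = 1.5                                    # >1 pushes the random network toward chaos
W = gain * rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
readout = rng.normal(0.0, 1.0, (2, n))        # maps network state to a 2-D "hand" position

def rollout(x0, steps=50):
    """Run the chaotic network forward and return the final readout position."""
    x = x0.copy()
    for _ in range(steps):
        x = np.tanh(W @ x)
    return readout @ x

goal = np.array([0.8, -0.5])

# Naive search over initial states: keep the one whose trajectory ends closest to
# the goal (a stand-in for prefrontal modulation of the chaotic network's initial state).
best_x0, best_err = None, np.inf
for _ in range(2000):
    x0 = rng.normal(0.0, 0.5, n)
    err = np.linalg.norm(rollout(x0) - goal)
    if err < best_err:
        best_x0, best_err = x0, err

print(f"best distance to goal: {best_err:.3f}")
```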