
    AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video

    Developing useful interfaces between brains and machines is a grand challenge of neuroengineering. An effective interface has the capacity not only to interpret neural signals, but to predict the intentions of the human to perform an action in the near future; prediction is made even more challenging outside well-controlled laboratory experiments. This paper describes our approach to detecting natural human arm movements and to predicting them before they occur, a key challenge in brain-computer interfacing that has never before been attempted. We introduce the novel Annotated Joints in Long-term ECoG (AJILE) dataset; AJILE includes automatically annotated poses of 7 upper-body joints for four human subjects over 670 total hours (more than 72 million frames), along with the corresponding, simultaneously acquired intracranial neural recordings. The size and scope of AJILE greatly exceed all previous datasets combining movements and electrocorticography (ECoG), making it possible to take a deep learning approach to movement prediction. We propose a multimodal model that combines deep convolutional neural networks (CNNs) with long short-term memory (LSTM) blocks, leveraging both the ECoG and video modalities. We demonstrate that our models are able to detect movements and predict future movements up to 800 msec before movement initiation. Further, our multimodal movement prediction models exhibit resilience to simulated ablation of input neural signals. We believe a multimodal approach to natural neural decoding that takes context into account is critical in advancing bioelectronic technologies and human neuroscience.
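The paper's model combines CNNs and LSTMs, but the detection half of the task can be illustrated with a much simpler hypothetical baseline: threshold a band-power feature of the neural signal and flag upward crossings as candidate movement onsets. The feature and threshold here are illustrative assumptions, not the authors' method.

```python
def detect_movement_onsets(band_power, threshold):
    """Return sample indices where the power feature crosses the threshold upward."""
    onsets = []
    for i in range(1, len(band_power)):
        if band_power[i] >= threshold and band_power[i - 1] < threshold:
            onsets.append(i)
    return onsets
```

In a prediction setting, signal windows ending shortly before each detected onset (e.g. within the 800 msec horizon reported above) would be labelled as pre-movement training examples.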

    A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while temperature, vibration, and skin deformation measurements are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.
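As a rough sketch of how measured tendon tensions relate to joint torques in a tendon-driven hand such as the BairClaw, each joint torque is the sum of tendon tensions weighted by their moment arms at that joint. The matrix and values below are hypothetical for illustration, not the BairClaw's actual geometry.

```python
def joint_torques(moment_arms, tensions):
    """tau[j] = sum_i moment_arms[j][i] * tensions[i].

    moment_arms: per-joint list of moment arms (metres, signed by wrap direction).
    tensions:    tendon tension vector (newtons). Returns torques in N*m.
    """
    return [sum(r * f for r, f in zip(row, tensions)) for row in moment_arms]
```

For example, an antagonistic tendon pair with equal moment arms on one joint produces zero net torque there when both tendons carry the same tension.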

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing robots that can communicate smoothly with human users over the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. (Comment: submitted to Advanced Robotics.)

    Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

    We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and depositing the towel to its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and training on multiple tasks simultaneously increases the success rate on all tasks.
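The combination of behavior cloning with reconstruction-based regularization can be sketched as a weighted multi-task objective; the weight below is a hypothetical hyperparameter for illustration, not a value reported in the paper.

```python
def multitask_loss(bc_losses, recon_loss, recon_weight=0.1):
    """Mean behavior-cloning loss over tasks plus a weighted reconstruction term.

    bc_losses:    per-task action-prediction losses.
    recon_loss:   image-reconstruction loss acting as a regularizer.
    recon_weight: trade-off between imitation accuracy and regularization.
    """
    return sum(bc_losses) / len(bc_losses) + recon_weight * recon_loss
```

Averaging over tasks is what lets the shared parameters benefit from every demonstration, while the reconstruction term discourages features that ignore the visual input.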

    Miniaturized modular manipulator design for high precision assembly and manipulation tasks

    In this paper, design and control issues for the development of miniaturized manipulators intended for use in high precision assembly and manipulation tasks are presented. The developed manipulators are size-adapted devices, miniaturized versions of conventional robots based on well-known kinematic structures. A 3-degrees-of-freedom (DOF) delta robot and a 2-DOF pantograph mechanism, enhanced with a rotational axis at the tip and a Z axis actuating the whole mechanism, are presented as case studies. These parallel mechanisms are designed and developed to be used in modular assembly systems for the realization of high precision assembly and manipulation tasks. In that sense, modularity is addressed as an important design consideration. The design procedures are given in detail in order to provide solutions for miniaturization, and experimental results are given to show the achieved performances.
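For the 2-DOF pantograph (a five-bar linkage), forward kinematics can be sketched by placing the two driven joints on a base line and intersecting circles swept by the distal links. The link lengths, base spacing, and upper-solution choice below are illustrative assumptions, not the developed device's parameters.

```python
import math

def pantograph_fk(theta1, theta2, l1, l2, d):
    """End-effector position of a five-bar pantograph.

    Driven joints sit at (0, 0) and (d, 0); proximal links of length l1 are
    driven to angles theta1, theta2; distal links of length l2 meet at the
    end-effector (the upper of the two circle-intersection solutions).
    """
    ax, ay = l1 * math.cos(theta1), l1 * math.sin(theta1)      # elbow A
    bx, by = d + l1 * math.cos(theta2), l1 * math.sin(theta2)  # elbow B
    dx, dy = bx - ax, by - ay
    r = math.hypot(dx, dy)                                     # elbow separation
    if r == 0 or r > 2 * l2:
        raise ValueError("pose unreachable")
    h = math.sqrt(l2 * l2 - (r / 2) ** 2)                      # apex height above the chord
    mx, my = ax + dx / 2, ay + dy / 2                          # midpoint between elbows
    return (mx - h * dy / r, my + h * dx / r)                  # upper intersection
```

With both proximal links pointing straight up and unit lengths, the end-effector sits symmetrically above the base, which makes the geometry easy to check by hand.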

    Can exercise suppress tumour growth in advanced prostate cancer patients with sclerotic bone metastases? A randomised, controlled study protocol examining feasibility, safety and efficacy

    Introduction: Exercise may positively alter tumour biology through numerous modulatory and regulatory mechanisms in response to a variety of modes and dosages, evidenced in preclinical models to date. Specifically, localised and systemic biochemical alterations produced during and following exercise may suppress tumour formation, growth and distribution by virtue of altered epigenetics and endocrine–paracrine activity. Given the impressive ability of targeted mechanical loading to interfere with metastasis-driven tumour formation in human osteolytic tumour cells, it is of equal interest to determine whether a similar effect is observed in sclerotic tumour cells. The study aims to (1) establish the feasibility and safety of a combined modular multimodal exercise programme with spinal isometric training in advanced prostate cancer patients with sclerotic bone metastases and (2) examine whether targeted and supervised exercise can suppress sclerotic tumour growth and activity in spinal metastases in humans. Methods and analysis: A single-blinded, two-armed, randomised, controlled and explorative phase I clinical trial combining spinal isometric training with a modular multimodal exercise programme in 40 men with advanced prostate cancer and stable sclerotic spinal metastases. Participants will be randomly assigned to (1) the exercise intervention or (2) usual medical care. The intervention arm will receive a 3-month, supervised and individually tailored modular multimodal exercise programme with spinal isometric training. Primary endpoints (feasibility and safety) and secondary endpoints (tumour morphology; biomarker activity; anthropometry; musculoskeletal health; adiposity; physical function; quality of life; anxiety; distress; fatigue; insomnia; physical activity levels) will be measured at baseline and following the intervention. 
Statistical analyses will include descriptive characteristics, t-tests, effect sizes and two-way (group × time) repeated-measures analysis of variance (or analysis of covariance) to examine differences between groups over time. The dataset will be primarily examined using an intention-to-treat approach with multiple imputation, followed by a secondary sensitivity analysis to ensure data robustness using a complete cases approach. Ethics and dissemination: Ethics approval was obtained from the Human Research Ethics Committee (HREC) of Edith Cowan University and the Sir Charles Gairdner and Osborne Park Health Care Group. If proven to be feasible and safe, this study will form the basis of future phase II and III trials in human patients with advanced cancer. To reach a maximum number of clinicians, practitioners, patients and scientists, outcomes will be disseminated through national and international clinical, conference and patient presentations, as well as publication in high-impact, peer-reviewed academic journals.
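Among the planned analyses, the effect-size calculation is the simplest to make concrete. A minimal sketch of Cohen's d with a pooled standard deviation follows; it is illustrative only, not the trial's analysis code, and assumes two independent groups.

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = math.sqrt(((na - 1) * statistics.variance(group_a)
                           + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

By convention, |d| around 0.2, 0.5 and 0.8 are often read as small, medium and large effects, which is how a feasibility trial of this size would typically frame its preliminary efficacy signals.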

    Body models in humans, animals, and robots: mechanisms and plasticity

    Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent - yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, over body image, body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman - "the man who lost his body" - in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I will then detail how robots can inform the biological sciences dealing with body representations and finally, I will study which of the features of the "body in the brain" should be transferred to robots, giving rise to more adaptive and resilient, self-calibrating machines. (Comment: 27 pages, 8 figures.)

    Mechanical suppression of osteolytic bone metastases in advanced breast cancer patients: A randomised controlled study protocol evaluating safety, feasibility and preliminary efficacy of exercise as a targeted medicine

    Background: Skeletal metastases present a major challenge for clinicians, representing an advanced and typically incurable stage of cancer. Bone is also the most common location for metastatic breast carcinoma, with skeletal lesions identified in over 80% of patients with advanced breast cancer. Preclinical models have demonstrated the ability of mechanical stimulation to suppress tumour formation and promote skeletal preservation at bone sites with osteolytic lesions, generating modulatory interference of tumour-driven bone remodelling. Preclinical studies have also demonstrated anti-cancer effects through exercise by minimising tumour hypoxia, normalising tumour vasculature and increasing tumoural blood perfusion. This study proposes to explore the promising role of targeted exercise to suppress tumour growth while concomitantly delivering broader health benefits in patients with advanced breast cancer with osteolytic bone metastases. Methods: This single-blinded, two-armed, randomised and controlled pilot study aims to establish the safety, feasibility and efficacy of an individually tailored, modular multi-modal exercise programme incorporating spinal isometric training (targeted muscle contraction) in 40 women with advanced breast cancer and stable osteolytic spinal metastases. Participants will be randomly assigned to exercise or usual medical care. The intervention arm will receive a 3-month clinically supervised exercise programme, which if proven to be safe and efficacious will be offered to the control-arm patients following study completion. Primary endpoints (programme feasibility, safety, tolerance and adherence) and secondary endpoints (tumour morphology, serum tumour biomarkers, bone metabolism, inflammation, anthropometry, body composition, bone pain, physical function and patient-reported outcomes) will be measured at baseline and following the intervention. 
Discussion: Exercise medicine may positively alter tumour biology through numerous mechanical and nonmechanical mechanisms. This randomised controlled pilot trial will explore the preliminary effects of targeted exercise on tumour morphology and circulating metastatic tumour biomarkers using an osteolytic skeletal metastases model in patients with breast cancer. The study is principally aimed at establishing feasibility and safety. If proven to be safe and feasible, results from this study could have important implications for the delivery of this exercise programme to patients with advanced cancer and sclerotic skeletal metastases or with skeletal lesions present in haematological cancers (such as osteolytic lesions in multiple myeloma), for which future research is recommended. Trial registration: anzctr.org.au, ACTRN-12616001368426. Registered on 4 October 2016.

    Stable Electromyographic Sequence Prediction During Movement Transitions using Temporal Convolutional Networks

    Transient muscle movements influence the temporal structure of myoelectric signal patterns, often leading to unstable prediction behavior from movement-pattern classification methods. We show that temporal convolutional network sequential models leverage the myoelectric signal's history to discover contextual temporal features that aid in correctly predicting movement intentions, especially during interclass transitions. We demonstrate myoelectric classification using temporal convolutional networks to effect 3 simultaneous hand and wrist degrees of freedom in an experiment involving nine human subjects. Temporal convolutional networks yield significant (p < 0.001) performance improvements over other state-of-the-art methods in terms of both classification accuracy and stability. (Comment: 4 pages, 5 figures; accepted for the Neural Engineering (NER) 2019 Conference.)
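The building block of such temporal convolutional networks is the causal dilated convolution, which makes the output at time t depend only on present and past samples so predictions never peek ahead of the signal. A minimal single-channel sketch (the filter weights and dilation here are illustrative, not the paper's trained parameters):

```python
def causal_dilated_conv(x, kernel, dilation):
    """y[t] = sum_k kernel[k] * x[t - k*dilation], zero-padded on the left."""
    out = []
    for t in range(len(x)):
        s = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation
            if idx >= 0:           # samples before the start count as zero
                s += w * x[idx]
        out.append(s)
    return out
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN can exploit long myoelectric history at a fixed per-sample cost.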