
    Modelling and correcting for the impact of the gait cycle on touch screen typing accuracy

    Walking and typing on a smartphone is an extremely common interaction. Previous research has shown that error rates are higher when walking than when stationary. In this paper we analyse the acceleration data logged in an experiment in which users typed whilst walking, and extract the gait phase angle. We find statistically significant relationships between tapping time, error rate, and gait phase angle. We then use the gait phase as an additional input to an offset model, and show that this allows more accurate touch interaction for walking users than a model which considers only the recorded tap position.
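The offset-model idea from this abstract can be sketched numerically: fit a linear model that predicts the tap-to-target offset from the tap position and the gait phase angle (encoded as sine and cosine so the model is periodic in phase), then subtract the predicted offset. The synthetic data and feature choice below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: normalised tap positions, gait phase angles,
# and true tap-to-target offsets that drift sinusoidally with gait phase.
n = 500
tap = rng.uniform(0, 1, size=(n, 2))
phase = rng.uniform(0, 2 * np.pi, size=n)
offset = np.column_stack([0.02 * np.sin(phase), 0.01 * np.cos(phase)])
offset += 0.005 * rng.standard_normal((n, 2))

# Features: tap position plus (sin, cos) of gait phase, so the model is
# continuous across the 0 / 2*pi boundary.
X = np.column_stack([np.ones(n), tap, np.sin(phase), np.cos(phase)])

# Fit the offset model by least squares and correct the recorded taps.
W, *_ = np.linalg.lstsq(X, offset, rcond=None)
pred = X @ W
corrected = tap - pred
```

On this synthetic data the fitted model removes most of the phase-dependent offset, which is the sense in which a gait-aware offset model improves on the raw tap position.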

    Sub-Optimal Allocation of Time in Sequential Movements

    The allocation of limited resources such as time or energy is a core problem that organisms face when planning complex actions. Most previous research on movement planning has focused on single, isolated movements. Here we investigated the allocation of time in a pointing task in which human subjects attempted to touch two targets in a specified order to earn monetary rewards. Subjects were required to complete both movements within a limited time but could freely allocate the available time between the movements. The time constraint presents an allocation problem: the more time spent on one movement, the less time is available for the other. In different conditions we assigned different rewards to the two targets. How subjects allocated time between the movements affected their expected gain on each trial. We also varied the angle between the first and second movements and the length of the second movement. Based on our results, we developed and tested a model of the speed-accuracy tradeoff for sequential movements. Using this model we could predict the time allocation that would maximize the expected gain of each subject in each experimental condition. We compared human performance with predicted optimal performance and found that all subjects allocated time sub-optimally, spending more time than they should on the first movement even when the reward of the second target was five times larger than that of the first. We conclude that the movement planning system fails to maximize expected reward in planning sequences of as few as two movements, and we discuss possible interpretations drawn from economic theory.
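The gain-maximisation analysis described above can be sketched with a toy model. The speed-accuracy tradeoff here (1-D Gaussian endpoint scatter whose spread grows with movement speed), the treatment of the two rewards as independent, and every constant are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from math import erf, sqrt

def hit_prob(t, dist, radius, k=0.04):
    """P(endpoint lands on target) for 1-D Gaussian scatter whose
    standard deviation grows with movement speed (dist / t)."""
    if t <= 0:
        return 0.0
    sigma = k * dist / t
    return erf(radius / (sigma * sqrt(2.0)))

def expected_gain(t1, T, d1, d2, r1, r2, radius=0.02):
    """Expected reward when t1 goes to movement 1 and T - t1 to movement 2."""
    return r1 * hit_prob(t1, d1, radius) + r2 * hit_prob(T - t1, d2, radius)

# Second target worth five times the first, as in the extreme condition.
T, d1, d2, r1, r2 = 0.8, 0.2, 0.2, 1.0, 5.0
ts = np.linspace(0.05, T - 0.05, 200)
gains = [expected_gain(t, T, d1, d2, r1, r2) for t in ts]
t_opt = float(ts[int(np.argmax(gains))])
```

Under these assumptions the optimal allocation gives well under half of the available time to the first movement, which is the kind of prediction the subjects in the study failed to match.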

    Algorithms for Neural Prosthetic Applications

    In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectories) form the core of the neural decoding system. The choice of algorithm is thus imposed mainly by the neural signal of interest and the output parameter being decoded. The main elements of a neural decoding pipeline are the neural data, feature extraction, feature selection, and the machine learning algorithm. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed. This work addresses three challenges in developing robust neural decoding systems: exploring neural variability in the peripheral nervous system for dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel method for decoding dexterous finger movements based on ensemble methods.
    Dissertation/Thesis: Doctoral Dissertation, Bioengineering, 201
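As a toy illustration of the decoding component (the mapping from neural features to kinematics), the sketch below fits a ridge-regression decoder on simulated firing rates. The data, dimensions, and linear model are assumptions for illustration only; the dissertation's actual decoder is ensemble-based.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for binned neural features: n_samples x n_channels
# firing rates, and a 1-D finger trajectory linearly encoded plus noise.
n, ch = 1000, 32
rates = rng.poisson(5.0, size=(n, ch)).astype(float)
true_w = rng.standard_normal(ch) * 0.1
finger = rates @ true_w + 0.1 * rng.standard_normal(n)

# Train/test split, then fit a ridge decoder: w = (X'X + lam*I)^-1 X'y.
Xtr, Xte = rates[:800], rates[800:]
ytr, yte = finger[:800], finger[800:]
lam = 1.0
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(ch), Xtr.T @ ytr)

pred = Xte @ w
r = float(np.corrcoef(pred, yte)[0, 1])  # decoding accuracy, held-out data
```

Evaluating on held-out samples, as here, mirrors the clinical concern in the text: a decoder must generalise beyond the session it was fitted on.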

    Push to know! -- Visuo-Tactile based Active Object Parameter Inference with Dual Differentiable Filtering

    For robotic systems to interact with objects in dynamic environments, it is essential to perceive physical properties of the objects such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases the selection of manipulation actions but also ensures the task is performed as desired. However, estimating the physical properties of novel objects in particular is a challenging problem, whether using vision or tactile sensing. In this work, we propose a novel framework to estimate key object parameters through non-prehensile manipulation, combining vision and tactile sensing. Our active dual differentiable filtering (ADDF) approach learns the object-robot interaction during non-prehensile object pushes to infer the object's parameters. The proposed method enables the robotic system to employ vision and tactile information to interactively explore a novel object via non-prehensile pushing. A novel N-step active formulation within the differentiable filtering facilitates efficient learning of the object-robot interaction model and, during inference, selects the next best exploratory push actions (where to push, and how to push). We extensively evaluated our framework in simulation and real-robot scenarios, yielding superior performance to the state-of-the-art baseline.
    Comment: 8 pages. Accepted at IROS 202
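The "where to push, how to push" selection can be illustrated with a much simpler stand-in for the paper's N-step differentiable-filtering formulation: keep a Gaussian belief over two object parameters and pick the candidate push whose observation model most shrinks the posterior covariance. The candidate pushes and their linearised Jacobians below are hypothetical.

```python
import numpy as np

# Gaussian belief over two object parameters (e.g. mass, friction coeff.).
Sigma = np.diag([0.5, 0.5])   # prior covariance
R = np.diag([0.05, 0.05])     # measurement noise covariance

# Hypothetical candidate pushes, each with a linearised observation
# Jacobian H describing how informative that push is about each parameter.
candidates = {
    "push_side": np.array([[1.0, 0.2], [0.0, 0.1]]),
    "push_top":  np.array([[0.3, 0.0], [0.1, 0.9]]),
    "push_weak": np.array([[0.1, 0.1], [0.1, 0.1]]),
}

def posterior_trace(Sigma, H, R):
    """Trace of the posterior covariance after fusing one measurement
    (information-form Kalman update)."""
    info = np.linalg.inv(Sigma) + H.T @ np.linalg.inv(R) @ H
    return float(np.trace(np.linalg.inv(info)))

# Active selection: pick the push with the lowest expected uncertainty.
best = min(candidates, key=lambda a: posterior_trace(Sigma, candidates[a], R))
```

The paper's formulation additionally looks N steps ahead and differentiates through the filter; this one-step greedy criterion only conveys the underlying active-inference-of-parameters idea.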

    Human Inspired Multi-Modal Robot Touch


    Neural representation in active inference: using generative models to interact with -- and understand -- the lived world

    This paper considers neural representation through the lens of active inference, a normative framework for understanding brain function. It delves into how living organisms employ generative models to minimize the discrepancy between predictions and observations (as scored with variational free energy). The ensuing analysis suggests that the brain learns generative models to navigate the world adaptively, not (or not solely) to understand it. Different living organisms may possess an array of generative models, spanning from those that support action-perception cycles to those that underwrite planning and imagination; namely, from "explicit" models that entail variables for predicting concurrent sensations, like objects, faces, or people, to "action-oriented" models that predict action outcomes. The paper then elucidates how generative models and belief dynamics might link to neural representation, and the implications of different types of generative models for understanding an agent's cognitive capabilities in relation to its ecological niche. It concludes with open questions regarding the evolution of generative models and the development of advanced cognitive abilities, and the gradual transition from "pragmatic" to "detached" neural representations. The analysis on offer foregrounds the diverse roles that generative models play in cognitive processes and the evolution of neural representation.

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Bayesian decoding of tactile afferents responsible for sensorimotor control

    In daily activities, humans manipulate objects with great precision. Empirical studies have demonstrated that signals encoded by mechanoreceptors facilitate precise object manipulation in humans; however, little is known about the underlying mechanisms. Models used in the literature to analyze tactile afferent data range from advanced models, some of which account for skin tissue properties, to simple regression fits. These models, however, do not systematically account for factors that influence tactile afferent activity. For instance, it is not yet clear whether the first derivative of force influences the observed tactile afferent spike train patterns. In this study, I use the technique of microneurography, with the help of Dr. Birznieks, to record tactile afferent data from humans. I then implement spike sorting algorithms to identify spike occurrences that pertain to a single cell. For further analysis of the resulting spike trains, I use a Bayesian decoding framework to investigate the tactile afferent mechanisms responsible for sensorimotor control in humans. The Bayesian decoding framework is a two-stage process: in the first stage (the encoding model), the relationships between the administered stimuli and the recorded tactile afferent signals are established; the second stage uses results from the first to make predictions. The goal of the encoding model is to increase our understanding of the mechanisms that underlie dexterous object manipulation and, from an engineering perspective, to guide the design of algorithms for inferring the stimulus from previously unseen tactile afferent data, a process referred to as decoding. Specifically, the objective of the study was to devise quantitative methods that provide insight into some of the mechanisms that underlie touch, as well as strategies through which real-time biomedical devices can be realized.
Tactile afferent data from eight subjects (18 - 30 years) with no known neurological disorders were recorded by inserting a needle electrode into the median nerve at the wrist. I was involved in designing the experimental protocols, designing the safety mechanisms, designing and building electronic components as needed, the experimental setup, subject recruitment, and data acquisition. Dr. Ingvars Birznieks performed the actual microneurography procedure (inserting a needle electrode into the nerve and identifying afferent types), and he and Dr. Heba Khamis provided assistance with data acquisition and experimental design. The study took place at Neuroscience Research Australia (NeuRA). Once the data were acquired, I analyzed the recordings from slowly adapting type I (SA-I) tactile afferents. The initial stages of data analysis involved writing software routines to spike sort the data (identify action potential waveforms that pertain to individual cells). I analyzed SA-I tactile afferents because they were more numerous (it was difficult to target other afferent types during experiments). In addition, SA-I tactile afferents respond during both the dynamic and the static phase of a force stimulus; it therefore seemed reasonable to hypothesize that SA-Is alone could provide sufficient information for predicting the force profile, given spike data. In the first stage, I used an inhomogeneous Poisson process encoding model through which I assessed the relative importance of aspects of the stimuli to the observed spike data. In addition, I estimated the likelihood of the SA-I data given the inhomogeneous Poisson model, which was used during the second stage. The likelihood is formulated by deriving the joint distribution of the data as a function of the model parameters with the data fixed.
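The first-stage encoding model can be sketched under an assumed log-linear form, lambda(t) = exp(b0 + b1*F(t) + b2*dF/dt): the code below simulates spikes from a ramp-and-hold force and evaluates the Poisson log-likelihood (up to a term independent of the parameters). The stimulus, parameter values, and simulated spikes are illustrative, not the recorded data.

```python
import numpy as np

dt = 0.001                     # 1 ms bins
t = np.arange(0, 2.0, dt)

# Illustrative ramp-and-hold force profile and its derivative, standing
# in for the recorded fingertip force stimulus.
force = np.clip(t / 0.5, 0, 1.0)
dforce = np.gradient(force, dt)

def rate(beta, force, dforce):
    """Inhomogeneous Poisson intensity driven by force and its derivative."""
    return np.exp(beta[0] + beta[1] * force + beta[2] * dforce)

def log_likelihood(beta, spikes, force, dforce, dt):
    """Poisson log-likelihood of binned spike counts (up to a constant)."""
    lam = rate(beta, force, dforce) * dt
    return float(np.sum(spikes * np.log(lam) - lam))

# Simulate spikes from known parameters, then compare a model that uses
# the force derivative against one that ignores it.
rng = np.random.default_rng(2)
beta_true = np.array([3.0, 1.2, 1.0])
spikes = rng.poisson(rate(beta_true, force, dforce) * dt)
ll_full = log_likelihood(beta_true, spikes, force, dforce, dt)
ll_no_deriv = log_likelihood(np.array([3.0, 1.2, 0.0]), spikes, force, dforce, dt)
```

Comparing likelihoods with and without the derivative term, as here, is the kind of quantitative check that bears on whether the force derivative shapes SA-I spiking.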
In the second stage, I used a recursive nonlinear Bayesian filter to reconstruct the force profile, given the SA-I spike patterns. The decoding method implemented in this thesis is feasible for real-time applications such as interfacing with prostheses because it can be realized with readily available electronic components. I also implemented a renewal point process encoding model, a generalization of the Poisson process encoding model, which can account for some history-dependence properties of neural data. I found that under my encoding model, the relative contributions of the force and its derivative are 1.26 and 1.02, respectively. This suggests that the force derivative contributes significantly to the spiking behavior of SA-I tactile afferents. This is a novel contribution because it provides a quantitative answer to the long-standing question of whether the force derivative contributes to SA-I tactile afferent spiking behavior. As a result, I incorporated the first derivative of force, along with the force itself, in the encoding models implemented in this thesis. The decoding model shows that SA-I fibers provide sufficient information for an approximation of the force profile. Furthermore, including fast-adapting tactile afferents would provide better information about the first and last moments of contact, and thus improved decoding results. Finally, I show that a renewal point process encoding model captures interspike-time and stimulus features better than an inhomogeneous Poisson point process encoding model. This is useful because it is now possible to generate synthetic data with statistical structure similar to real SA-I data; this enables further investigation of the mechanisms that underlie SA-I tactile afferents.
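The second-stage decoder can be illustrated with a recursive Bayesian filter on a discretised force grid, a simple stand-in for the thesis's nonlinear filter. The encoding model, random-walk prior, and all constants below are assumptions for illustration, not the fitted models from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.001
grid = np.linspace(0.0, 1.0, 51)   # candidate force values

def rate(f):
    """Assumed encoding model: firing rate increases with force."""
    return np.exp(3.0 + 2.0 * f)

# Simulate a force ramp and the resulting SA-I-like binned spike counts.
t = np.arange(0, 1.0, dt)
force = np.clip(t / 0.5, 0, 1.0)
spikes = rng.poisson(rate(force) * dt)

# Random-walk transition kernel over the grid; columns are p(f_t | f_{t-1}).
sigma_rw = 0.02
trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / sigma_rw) ** 2)
trans /= trans.sum(axis=0, keepdims=True)

# Recursive Bayesian (grid) filter: predict, then update with the
# Poisson likelihood of each observed spike-count bin.
belief = np.full(len(grid), 1.0 / len(grid))
estimate = np.empty_like(t)
for i, n in enumerate(spikes):
    belief = trans @ belief              # predict
    lam = rate(grid) * dt
    belief *= np.exp(-lam) * lam ** n    # Poisson update
    belief /= belief.sum()
    estimate[i] = grid @ belief          # posterior mean force

rmse = float(np.sqrt(np.mean((estimate - force) ** 2)))
```

Because each step needs only a matrix-vector product and an elementwise update, a filter of this shape is the kind of computation that can run in real time on modest hardware, which is the feasibility point made above.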