    The touch and zap method for in vivo whole-cell patch recording of intrinsic and visual responses of cortical neurons and glial cells

    Whole-cell patch recording is an essential tool for quantitatively establishing the biophysics of brain function, particularly in vivo. This method is of particular interest for studying the functional roles of cortical glial cells in the intact brain, which cannot be assessed with extracellular recordings. Nevertheless, a reasonable success rate remains a challenge because of stability, recording duration and electrical quality constraints, particularly for voltage clamp, dynamic clamp or conductance measurements. To address this, we describe "Touch and Zap", an alternative method for whole-cell patch clamp recordings, designed to be simpler, quicker and gentler on brain tissue than previous approaches. Under current clamp mode with a continuous train of hyperpolarizing current pulses, seal formation is initiated immediately upon cell contact: the "Touch". By maintaining the current injection, whole-cell access is achieved spontaneously within seconds from the cell-attached configuration by a self-limited membrane electroporation, the "Zap", as seal resistance increases. We present examples of intrinsic and visual responses of neurons and putative glial cells obtained with the revised method from cat and rat cortices in vivo. Recording parameters and biophysical properties obtained with the Touch and Zap method compare favourably with those obtained with the traditional blind patch approach, demonstrating that the revised approach does not compromise the recorded cell. We find that the method is particularly well suited to whole-cell patch recordings of cortical glial cells in vivo, targeting a wider population of this cell type than the standard method, with better access resistance. Overall, the gentler Touch and Zap method is promising for studying quantitative functional properties in the intact brain with minimal perturbation of the cell's intrinsic properties and local network. Because the Touch and Zap method is performed semi-automatically, the approach is more reproducible and less dependent on experimenter technique.
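
    The semi-automated sequence lends itself to a simple control loop: pulse continuously from contact, watch seal resistance rise, and detect the spontaneous break-in as a sharp drop. The sketch below is a hypothetical illustration of that loop; the amplifier interface (amp.pulse, amp.read_resistance), thresholds and timings are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the semi-automated "Touch and Zap" sequence.
# The amplifier interface (amp.pulse, amp.read_resistance), thresholds
# and timings are stand-ins for illustration, not the authors' code.
import time

PULSE_PA = -100.0       # hyperpolarizing pulse amplitude (pA), assumed
GIGASEAL_OHM = 1e9      # treat the seal as formed above ~1 GOhm
BREAKIN_FRACTION = 0.5  # resistance drop taken to signal break-in

def touch_and_zap(amp, timeout_s=30.0):
    """Pulse continuously from cell contact ("Touch") until whole-cell
    access is reached by self-limited electroporation ("Zap")."""
    sealed, peak_r = False, 0.0
    t0 = time.time()
    while time.time() - t0 < timeout_s:
        amp.pulse(PULSE_PA, duration_ms=100)  # continuous pulse train
        r = amp.read_resistance()             # monitor seal resistance
        peak_r = max(peak_r, r)
        if not sealed and r > GIGASEAL_OHM:
            sealed = True                     # cell-attached gigaseal
        elif sealed and r < BREAKIN_FRACTION * peak_r:
            return True                       # whole-cell access achieved
    return False                              # timed out without break-in
```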

    Distinct forms of synaptic inhibition and neuromodulation regulate calretinin positive neuron excitability in the spinal cord dorsal horn

    The dorsal horn (DH) of the spinal cord contains a heterogeneous population of neurons that process incoming sensory signals before information ascends to the brain. We have recently characterized calretinin-expressing (CR+) neurons in the DH and shown that they can be divided into excitatory and inhibitory subpopulations. The excitatory population receives high-frequency excitatory synaptic input and expresses delayed-firing action potential discharge, whereas the inhibitory population receives weak excitatory drive and exhibits tonic or initial-bursting discharge. Here, we characterize inhibitory synaptic input and neuromodulation in the two CR+ populations in order to determine how each is regulated. We show that excitatory CR+ neurons receive mixed inhibition from GABAergic and glycinergic sources, whereas inhibitory CR+ neurons receive inhibition dominated by glycine. Noradrenaline and serotonin produced robust outward currents in excitatory CR+ neurons, predicting an inhibitory action on these neurons, but neither neuromodulator produced a response in inhibitory CR+ neurons. In contrast, enkephalin (along with selective mu and delta opioid receptor agonists) produced outward currents in inhibitory CR+ neurons, consistent with an inhibitory action, but did not affect the excitatory CR+ population. Our findings show that the pharmacology of inhibitory inputs and neuromodulator actions on CR+ cells, along with their excitatory inputs, can further define these two subpopulations, and this could be exploited to selectively modulate discrete aspects of sensory processing in the DH.

    Pickup usability dominates: a brief history of mobile text entry research and adoption

    Text entry on mobile devices (e.g. phones and PDAs) has been a research challenge since devices shrank below laptop size: mobile devices are simply too small to have a traditional full-size keyboard. There has been a profusion of research into text entry techniques for smaller keyboards and touch screens, some of which have become mainstream, while others have not lived up to early expectations. As the mobile phone industry moves to mainstream touch screen interaction, we review the range of input techniques for mobiles, together with the evaluations that have taken place to assess their validity, from theoretical modelling through to formal usability experiments. We also report initial results on iPhone text entry speed.
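
    For context on how entry-speed results such as the iPhone figures are conventionally reported: the standard metric in this literature is words per minute, with a "word" normalized to five characters. A minimal sketch of that convention (standard in text entry research generally, not a detail taken from this particular paper):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Conventional text-entry speed metric: one 'word' = 5 characters,
    including spaces. A field-wide convention, not this paper's code."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

# e.g. a 125-character phrase transcribed in 60 s -> 25.0 WPM
print(words_per_minute("x" * 125, 60.0))
```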

    Drifting perceptual patterns suggest prediction error fusion rather than hypothesis selection: replicating the rubber-hand illusion on a robot

    Humans can experience fake body parts as their own through simple synchronous visuo-tactile stimulation. This body illusion is accompanied by a drift in the perception of the real limb towards the fake limb, suggesting an update of body estimation resulting from the stimulation. This work compares the limb-drift patterns of human participants in a rubber hand illusion experiment with the end-effector estimation displacement of a multisensory robotic arm equipped with predictive processing perception. Results show similar drift patterns in the human and robot experiments, and further suggest that the perceptual drift is due to prediction error fusion rather than hypothesis selection. We present body inference through prediction error minimization as a single process that unites predictive coding and causal inference and that is responsible for the perceptual effects observed when we are subjected to intermodal sensory perturbations.
    Comment: Proceedings of the 2018 IEEE International Conference on Development and Learning and Epigenetic Robotics
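
    The distinction the authors draw can be made concrete with a toy update rule. The sketch below contrasts precision-weighted fusion of prediction errors, which produces a gradual, partial drift of the limb estimate, with hypothesis selection, which commits wholly to one cue; the weights, learning rate and one-dimensional setup are illustrative assumptions, not the paper's model.

```python
def fusion_drift(mu, visual, tactile, k_v=0.6, k_t=0.4, steps=200, lr=0.05):
    """Fuse precision-weighted prediction errors from both modalities:
    the body estimate drifts gradually toward a weighted compromise
    between the seen (fake) and felt (real) hand positions."""
    for _ in range(steps):
        e_v = visual - mu           # visual prediction error (fake hand)
        e_t = tactile - mu          # tactile/proprioceptive error (real hand)
        mu += lr * (k_v * e_v + k_t * e_t)
    return mu

def hypothesis_selection(mu, visual, tactile):
    """Alternative account: commit entirely to the more plausible cause,
    giving an all-or-nothing jump rather than a partial drift."""
    return visual if abs(visual - mu) < abs(tactile - mu) else tactile

# Real hand at 0 cm, rubber hand at 15 cm (1-D positions, illustrative):
print(fusion_drift(0.0, visual=15.0, tactile=0.0))          # ~9.0: partial drift
print(hypothesis_selection(0.0, visual=15.0, tactile=0.0))  # 0.0: no drift at all
```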

    Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication

    We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intra-session authentication, 2%-3% for inter-session authentication and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multi-modal biometric authentication system.
    Comment: To appear in IEEE Transactions on Information Forensics & Security; download data from http://www.mariofrank.net/touchalytics
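
    The enrollment-then-verification pipeline described here is straightforward to sketch. The code below is a hedged, minimal illustration: a handful of stand-in stroke features (the paper proposes 30), a nearest-neighbour novelty score for the enrolled user, and the standard equal-error-rate computation behind results such as the 2%-3% inter-session figure. None of it reproduces the authors' exact features or classifier.

```python
# Minimal, illustrative pipeline in the spirit of the paper: per-stroke
# features -> enrollment -> novelty score -> equal error rate.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def stroke_features(xs, ys, ts):
    """A few example swipe features: duration, path length, mean speed,
    and stroke direction. (Stand-ins for the paper's 30 features.)"""
    dur = ts[-1] - ts[0]
    path = np.hypot(np.diff(xs), np.diff(ys)).sum()
    return np.array([dur, path, path / dur,
                     np.arctan2(ys[-1] - ys[0], xs[-1] - xs[0])])

class TouchVerifier:
    def __init__(self, enrollment_strokes):
        # Enrollment phase: remember the user's strokes in feature space.
        X = np.vstack([stroke_features(*s) for s in enrollment_strokes])
        self.nn = NearestNeighbors(n_neighbors=1).fit(X)

    def score(self, stroke):
        """Distance to the nearest enrolled stroke; lower = more user-like."""
        d, _ = self.nn.kneighbors(stroke_features(*stroke).reshape(1, -1))
        return float(d[0, 0])

def equal_error_rate(genuine, impostor):
    """Threshold where false-accept and false-reject rates meet
    (expects numpy arrays of scores)."""
    ts = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor <= t) for t in ts])  # impostors accepted
    frr = np.array([np.mean(genuine > t) for t in ts])    # users rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2
```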

    A supramodal representation of the body surface

    The ability to accurately localize both tactile and painful sensations on the body is one of the most important functions of the somatosensory system. Most accounts of localization refer to the systematic spatial relation between skin receptors and cortical neurons. The topographic organization of somatosensory neurons in the brain provides a map of the sensory surface. However, systematic distortions in perceptual localization tasks suggest that localizing a somatosensory stimulus involves more than simply identifying specific active neural populations within a somatotopic map. Thus, perceptual localization may depend on factors beyond the afferent input alone. In four experiments, we investigated whether localization biases vary according to the specific skin regions and subset of afferent fibers stimulated. We represented localization errors as a ‘perceptual map’ of skin locations. We compared the perceptual maps of stimuli that activate Aβ (innocuous touch), Aδ (pinprick pain), and C fibers (non-painful heat) on both the hairy and glabrous skin of the left hand. Perceptual maps exhibited systematic distortions that strongly depended on the skin region stimulated. We found systematic distal and radial (i.e., towards the thumb) biases in localization of touch, pain, and heat on the hand dorsum. A less consistent proximal bias was found on the palm. These distortions were independent of the population of afferent fibers stimulated, and also independent of the response modality used to report localization. We argue that these biases are likely to have a central origin, and result from a supramodal representation of the body surface.
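
    A ‘perceptual map’ of this kind boils down to per-site localization error vectors. The sketch below shows one way such a bias analysis could be computed; the coordinate convention (+x radial, toward the thumb; +y distal) and the trial data are hypothetical, not the study's numbers.

```python
import numpy as np

def localization_bias(actual_xy, judged_xy):
    """Mean error vector from the true stimulus site to the reported
    site: the building block of a 'perceptual map'. Coordinates are
    assumed with +x toward the thumb (radial) and +y distal."""
    errors = np.asarray(judged_xy) - np.asarray(actual_xy)
    return errors.mean(axis=0), errors.std(axis=0)

# Hypothetical trials at one site on the hand dorsum (cm):
actual = np.zeros((5, 2))
judged = np.array([[2.1, 5.0], [1.8, 4.6], [2.5, 5.3], [1.9, 4.8], [2.2, 5.1]])
bias, spread = localization_bias(actual, judged)
print(bias)  # ~[2.1, 4.96]: a systematic radial (+x) and distal (+y) bias
```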

    Web-based haptic applications for blind people to create virtual graphs

    Haptic technology has great potential in many applications. This paper introduces our work on delivering haptic information via the Web. A multimodal tool has been developed to allow blind people to create virtual graphs independently. Multimodal interaction in the process of graph creation and exploration is provided by a low-cost haptic device, the Logitech WingMan Force Feedback Mouse, and Web audio. The Web-based tool also provides blind people with the convenience of receiving information at home. In this paper, we present the development of the tool and the evaluation results. Issues related to the design of similar Web-based haptic applications are also discussed.
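
    On the audio side, tools of this kind commonly map data values to pitch so that, for instance, taller bars sound higher as the user explores them. The mapping below is a generic sonification sketch under that assumption, not the paper's implementation:

```python
def value_to_pitch(v, v_min, v_max, f_low=220.0, f_high=880.0):
    """Map a data value onto an audio frequency (A3..A5 here), using a
    geometric scale so equal data steps sound perceptually even.
    A common sonification scheme, assumed rather than taken from the
    paper's implementation."""
    span = (v - v_min) / (v_max - v_min)
    return f_low * (f_high / f_low) ** span

bars = [3, 7, 12, 5]  # hypothetical bar-chart values
print([round(value_to_pitch(b, min(bars), max(bars))) for b in bars])
# -> [220, 407, 880, 299]: each bar gets a distinct tone during exploration
```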

    Comparing two haptic interfaces for multimodal graph rendering

    This paper describes the evaluation of two multimodal interfaces designed to provide visually impaired people with access to various types of graphs. The interfaces combine audio with haptics rendered on commercially available force feedback devices. The study compares the usability of two force feedback devices, the SensAble PHANToM and the Logitech WingMan force feedback mouse, in representing graphical data. The graph type used in the experiment is the bar chart, tested under two experimental conditions: single-mode (haptic only) and multimodal. The results show that the PHANToM provides better performance in the haptic-only condition; however, no significant difference was found between the two devices in the multimodal condition. This confirms the advantages of the multimodal approach in our research and shows that low-cost haptic devices can be successful. The paper introduces our evaluation approach and discusses the findings of the experiment.
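
    A "no significant difference" finding in the multimodal condition would typically rest on a paired comparison of per-participant performance across devices. A hedged illustration with made-up task times (the paper's actual statistical analysis may differ):

```python
# Hedged illustration of the kind of paired comparison behind a "no
# significant difference" result; the task times are invented.
from scipy import stats

phantom_times = [41.2, 38.5, 44.0, 39.8, 42.3]  # hypothetical seconds,
wingman_times = [43.0, 40.1, 42.8, 41.5, 44.2]  # one pair per participant

t, p = stats.ttest_rel(phantom_times, wingman_times)  # paired t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # here p > 0.05: no significant difference
```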