
    Can't touch this: the first-person perspective provides privileged access to predictions of sensory action outcomes.

    RCUK Open Access funded (ESRC ES/J019178/1). Previous studies have shown that viewing others in pain activates cortical somatosensory processing areas and facilitates the detection of tactile targets. It has been suggested that such shared representations have evolved to enable us to better understand the actions and intentions of others. If this is the case, the effects of observing others in pain should be obtained from a range of viewing perspectives. Therefore, the current study examined the behavioral effects of observed grasps of painful and nonpainful objects from both a first- and third-person perspective. In the first-person perspective, participants were faster to detect a tactile target delivered to their own hand when viewing painful grasping actions, compared with all nonpainful actions. However, this effect was not revealed in the third-person perspective. The combination of action and object information to predict the painful consequences of another person's actions when viewed from the first-person perspective, but not the third-person perspective, argues against a mechanism ostensibly evolved to understand the actions of others.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    A Treatment Protocol Utilizing Sensory Integrative Techniques for Treating Self-Mutilation

    “It is estimated that one to two million people in the United States intentionally and repeatedly bruise, cut, burn, mark, scratch and mutilate different parts of their own bodies. This estimate represents only the adolescents and adults who actually seek help for the behavior” (Ferentz, 2002). The reasons for self-mutilation behaviors span a considerable range, from post-traumatic stress disorder to hypersensitivity. The research indicates parallels between children who have been sexually, physically, or emotionally abused and self-mutilation. Basically, it is an unhealthy coping strategy for dealing with overwhelming and intense feelings. The current treatment regimen varies and includes medication, dialectical behavioral therapy, and interpersonal group and talk therapies, with the goal of learning healthy coping strategies. There is another approach for consideration: the use of sensory integrative techniques. The research was extremely limited, but the fundamental assumptions of sensory integrative therapy lend themselves to the challenges of self-regulation and modulation of the sensory system in the individual with self-mutilative behaviors. The purpose of this scholarly project was to review the literature regarding common diagnoses that include self-injurious behaviors and to differentiate the purpose and relief these populations were receiving from self-injuring. Based upon the information elicited, a treatment protocol using sensory integrative techniques was developed that could be implemented in a facility, accommodating and recognizing the differences among those who exhibit self-mutilative behaviors. The results include a Sensory Integration Protocol for individuals who engage in self-mutilation. The Protocol includes treatment sessions using sensory integration techniques to reduce the amount of self-injury in clientele, focuses on self-regulation and modulation, and offers a different approach for consideration in coping strategies.

    Perception of patterned vibratory stimulation: An evaluation of the tactile vision substitution system

    Sensory substitution--the replacing of an impaired sensory channel by a properly functioning one--is possibly best manifested today in attempts to provide visual aids for the blind. The tactile vision substitution system (T.V.S.S.) is an example of one such visual aid. The system presents patterned tactile stimulation to the skin of the observer, provided by the output of a closed-circuit television system. Research conducted with congenitally blind Ss in evaluation of the T.V.S.S. has provided useful information concerning the potentialities and limitations of the prototype systems, similarities and differences between tactile and visual perception, and the development of visual perception in the congenitally blind. Investigation demonstrated that the congenitally blind Ss can learn to make valid judgements of three-dimensional displays with the T.V.S.S. Such judgements are made on the basis of properties contained in the proximal stimulation, properties analogous to the monocular cues of depth present in vision, such as linear perspective, apparent elevation in the visual field, size change as a function of distance, occlusion, and textural gradients. Similarities have been noted between judgements made by sighted Ss using vision and by blind Ss using the T.V.S.S. on comparable tasks. A display consisting of two slightly displaced alternating lights is perceived in both situations as a single spot of light moving back and forth between two display boundaries. A rotating drum made up of alternating black and white stripes is, when stopped, perceived as briefly moving in the opposite direction. External localization of the source of stimulation also occurs with both sensory inputs. The major differences between the visual and tactile inputs that have been noted have occurred in form recognition tasks. Although blind Ss using the patterned tactile stimulation are able to identify both geometric forms and abstract patterns, accuracy is consistently lower than that of sighted Ss using vision, and the latencies for the blind Ss are significantly longer. It is hypothesized that the longer latencies for the blind Ss using the T.V.S.S. can be accounted for primarily by the need to hand-position the television camera during scanning. A major factor in the lower accuracy for the tactile group is the noted difficulty in detecting and identifying display features located within a mass of stimulation. This difficulty with internal display detail may be a function of sensory inhibition and/or masking. The research findings support a concept of sensory substitution as well as a theory of perception which stresses the modality of many qualities contained in visible displays. Further research is needed to determine the significance of sensor movement--either eye movements or camera manipulation--in the perceptual process.
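
    As a rough illustration of the camera-to-skin mapping described above (a sketch only; the 20×20 grid size and the block-average-plus-threshold rule are assumptions for illustration, not the actual T.V.S.S. signal chain), a grayscale camera frame can be reduced to a coarse binary pattern of tactor commands as follows:

        import numpy as np

        def frame_to_tactors(frame, grid=20, threshold=0.5):
            """Downsample a grayscale frame (values in [0, 1]) to a grid x grid
            binary pattern of tactor on/off commands."""
            h, w = frame.shape
            bh, bw = h // grid, w // grid
            # Block-average the image, then threshold each block to switch a tactor on or off.
            blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
            return (blocks > threshold).astype(np.uint8)

        # Hypothetical 240 x 320 camera frame containing a single bright square "object".
        frame = np.zeros((240, 320))
        frame[80:160, 120:200] = 1.0
        pattern = frame_to_tactors(frame)
        print(pattern.sum(), "of", pattern.size, "tactors active")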

    Sensory coding in supragranular cells of the vibrissal cortex in anesthetized and awake mice

    Sensory perception entails reliable representation of external stimuli as impulse activity of individual neurons (i.e. spikes) and of neuronal populations in the sensory areas. An ongoing challenge in neuroscience is to identify and characterize the features of the stimuli which are relevant to a specific sensory modality, and the neuronal strategies that effectively and efficiently encode those features. It is widely hypothesized that neuronal populations employ “sparse coding” strategies to optimize stimulus representations at a low energetic cost (i.e. low impulse activity). In the past two decades, a wealth of experimental evidence has supported this hypothesis by showing spatiotemporally sparse activity in sensory areas. Despite numerous studies, the extent of sparse coding and its underlying mechanisms are not fully understood, especially in the primary vibrissal somatosensory cortex (vS1), a key model system in sensory neuroscience. Importantly, it is not yet clear whether sparse activation of supragranular vS1 is due to insufficient synaptic input to the majority of the cells or to the absence of effective stimulus features. In this thesis, we first asked how the choice of stimulus affects the degree of sparseness and/or the overall fraction of responsive vS1 neurons. We presented whisker deflections spanning a broad range of intensities, including “standard stimuli” and a high-velocity, “sharp” stimulus, which simulated the fast slip events that occur during whisker-mediated object palpation. We used whole-cell and cell-attached recording and calcium imaging to characterize the neuronal responses to these stimuli. Consistent with previous literature, whole-cell recording revealed a sparse response to the standard range of velocities: although all recorded cells showed tuning to velocity in their postsynaptic potentials, only a small fraction produced stimulus-evoked spikes. In contrast, the sharp stimulus evoked reliable spiking in a large fraction of regular-spiking neurons in the supragranular vS1. Spiking responses to the sharp stimulus were binary and precisely timed, with minimal trial-to-trial variability. Interestingly, we also observed that the sharp stimulus produced a consistent and significant reduction in action potential threshold. In the second step, we asked whether the stimulus-dependent sparse and dense activations found under anesthesia would generalize to the awake condition. We employed cell-attached recordings in head-fixed awake mice to explore the degree of sparseness in awake cortex. Although stimuli delivered by a piezo-electric actuator evoked significant responses in only a small fraction of regular-spiking supragranular neurons (16%-29%), a majority of neurons (84%) were driven by manual probing of the whiskers. Our results demonstrate that, despite sparse activity, the majority of neurons in the superficial layers of vS1 contribute to coding by representing a specific feature of the tactile stimulus. Thesis outline: Chapter 1 provides a review of the current knowledge on sparse coding and an overview of the whisker-sensory pathway. Chapter 2 presents our published results on sparse and dense coding in vS1 of anesthetized mice (Ranjbar-Slamloo and Arabzadeh 2017). Chapter 3 presents our pending manuscript with results obtained with piezo and manual stimulation in awake mice. Finally, in Chapter 4 we discuss and conclude our findings in the context of the literature. The appendix provides unpublished results related to Chapter 2; this section is referenced in the final chapter for further discussion.
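
    As a rough illustration of how sparseness figures like those quoted above can be quantified (a sketch using assumed data, not the thesis's analysis code), the fraction of responsive neurons and a Treves-Rolls population sparseness index can be computed from a trials × neurons matrix of evoked spike counts:

        import numpy as np

        # Hypothetical data: per-neuron mean evoked rates drawn from an exponential
        # distribution (most neurons weak, a few strong), then Poisson spike counts
        # over 200 trials.
        rng = np.random.default_rng(0)
        true_rates = rng.exponential(scale=0.3, size=100)        # spikes per trial
        spike_counts = rng.poisson(true_rates, size=(200, 100))

        # Fraction of neurons whose mean evoked count exceeds a (hypothetical) criterion.
        mean_counts = spike_counts.mean(axis=0)
        responsive_fraction = np.mean(mean_counts > 0.5)

        # Treves-Rolls population sparseness: near 1 for uniform firing across neurons,
        # near 0 when a few neurons carry most of the spikes.
        sparseness = mean_counts.mean() ** 2 / np.mean(mean_counts ** 2)

        print(f"responsive fraction: {responsive_fraction:.2f}")
        print(f"population sparseness: {sparseness:.2f}")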

    Artificial Roughness Encoding with a Bio-inspired MEMS-based Tactile Sensor Array

    A compliant 2×2 tactile sensor array was developed and investigated for roughness encoding. State-of-the-art cross-shaped 3D MEMS sensors were integrated with polymeric packaging, providing a total of 16 elements sensitive to external mechanical stimuli in an area of about 20 mm2, similar to the SA1 innervation density in humans. Experimental analysis of the bio-inspired tactile sensor array was performed using ridged surfaces, with spatial periods from 2.6 mm to 4.1 mm, which were indented with a regulated 1 N normal force and stroked at constant sliding velocities from 15 mm/s to 48 mm/s. A repeatable and expected frequency shift of the sensor outputs, depending on the applied stimulus and on its scanning velocity, was observed between 3.66 Hz and 18.46 Hz, with an overall maximum error of 1.7%. The tactile sensor could also perform contact imaging during static stimulus indentation. The experiments demonstrated the suitability of this approach for the design of a roughness-encoding tactile sensor for an artificial fingerpad.
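
    The reported 3.66-18.46 Hz range is consistent with the fundamental temporal frequency of a ridged surface swept across the sensor, f = v / λ, for the stated velocities and spatial periods. A minimal check of that relationship (not the authors' code) follows:

        # Fundamental temporal frequency of a ridged surface sliding across the sensor:
        # f = v / lambda, with v the sliding velocity and lambda the ridge spatial period.
        spatial_periods_mm = [2.6, 4.1]   # spatial periods reported in the abstract
        velocities_mm_s = [15.0, 48.0]    # sliding velocities reported in the abstract

        for lam in spatial_periods_mm:
            for v in velocities_mm_s:
                print(f"period {lam} mm, velocity {v} mm/s -> {v / lam:.2f} Hz")

        # 15 / 4.1 ≈ 3.66 Hz and 48 / 2.6 ≈ 18.46 Hz, matching the reported range.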

    Virtual reality: a tool for investigating camouflage

    Disruptive camouflage utilises high-contrast patches, typically positioned at the margins of an object, to impede detection and/or recognition by a perceiver. To date, the predominant methods for examining camouflage strategies have been computer-based (i.e., detection experiments), field-based (e.g., survival analyses) and camouflage-choice experiments using dynamically coloured organisms (e.g., cephalopods). Recent advances in virtual reality (VR) technology present the opportunity to create novel environments for testing camouflage theory. VR can combine the control of lab-based research with the ecological validity of field-based studies. Here, we develop an experimental paradigm that enables camouflage testing within a virtual reality environment. The environment comprised a spherical target that could be wrapped with different camouflage patterns and a domed background, upon which a natural image could be projected. Participants were positioned at the centre of the dome and were tasked with finding and shooting at targets randomly positioned across a bounded range within the environment. We manipulated the luminance contrast (0–2 steps of 2.5 L*) of the disruptive (DC) and edge-enhancement (EE) components of the camouflage patterning to examine their impact on participant response time. High, but not extreme, contrast resulted in increased camouflage effectiveness. The EE component had no effect independently but interacted with the DC component: when using EE alongside DC, a lower-contrast EE component was more effective than a higher-contrast EE component. Our results demonstrate that VR is a viable research tool for testing camouflage theory.
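
    As a rough sketch of the factorial contrast manipulation described above (the condition coding is an assumption for illustration, not the study's design files), the nine DC × EE luminance-contrast conditions can be enumerated as follows:

        from itertools import product

        # Each component (disruptive, DC; edge enhancement, EE) takes 0, 1 or 2 steps
        # of 2.5 L* of added luminance contrast, giving a 3 x 3 factorial design.
        STEP_L = 2.5
        levels = [0, 1, 2]

        conditions = [
            {"dc_contrast": dc * STEP_L, "ee_contrast": ee * STEP_L}
            for dc, ee in product(levels, levels)
        ]

        for i, cond in enumerate(conditions, start=1):
            print(f"condition {i}: DC +{cond['dc_contrast']} L*, EE +{cond['ee_contrast']} L*")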

    Developing a collaborative framework for naturalistic visual search

    While much research has investigated the mechanisms of visual search behaviour in laboratory-based computer tasks, there has been relatively little work on whether these results generalise to more naturalistic search tasks and thus how well existing theories explain real-world search behaviour. In addition, work relating to this question has often been carried out by researchers working in very different disciplines, including not just vision science but also fields such as consumer behaviour, sports science and medical science, making it more difficult to get an overview of the progress made and open questions remaining. We present findings from a systematic review of real-world visual search, showing that we can group the current literature into theoretical and applied approaches, and that there are certain well-studied topics (e.g., X-ray screening) but that there are relatively few links made across different search tasks and/or search contexts. We also present preliminary work detailing our development of a "naturalistic search task battery", which aims to provide a suite of open source, reproducible and standardised real-world search tasks, thus enabling the generation of comparable data across multiple studies and aiding theory and modelling in this area.