    What was Molyneux's Question A Question About?

    Molyneux asked whether a newly sighted person could distinguish a sphere from a cube by sight alone, given that she was antecedently able to do so by touch. This, we contend, is a question about general ideas. To answer it, we must ask (a) whether spatial locations identified by touch can be identified also by sight, and (b) whether the integration of spatial locations into an idea of shape persists through changes of modality. Posed this way, Molyneux’s Question goes substantially beyond question (a), about spatial locations, alone; for a positive answer to (a) leaves open whether a perceiver might cross-identify locations, but not be able to identify the shapes that collections of locations comprise. We further emphasize that MQ targets general ideas so as to distinguish it from corresponding questions about experiences of shape and about the property of tangible (vs. visual) shape. After proposing a generalized formulation of MQ, we extend earlier work (“Many Molyneux Questions,” Australasian Journal of Philosophy 2020) by showing that MQ does not admit a single answer across the board. Some integrative data-processes transfer across modalities; others do not. Seeing where and how such transfer succeeds and fails in individual cases has much to offer our understanding of perception and its modalities.

    Haptic perception in virtual reality in sighted and blind individuals

    The incorporation of the sense of touch into virtual reality is an exciting development. However, research into this topic is in its infancy. This experimental programme investigated both the perception of virtual object attributes by touch and the parameters that influence touch perception in virtual reality with a force feedback device called the PHANToM (www.sensable.com). The thesis had three main foci. Firstly, it aimed to provide an experimental account of the perception of the attributes of roughness, size and angular extent by touch via the PHANToM device. Secondly, it aimed to contribute to the resolution of a number of other issues important in developing an understanding of the parameters that influence touch in virtual reality. Finally, it aimed to compare touch in virtual reality between sighted and blind individuals. This thesis comprises six experiments. Experiment one examined the perception of the roughness of virtual textures with the PHANToM device. The effect of the following factors was addressed: the groove width of the textured stimuli; the endpoint used (stylus or thimble) with the PHANToM; the specific device used (PHANToM vs. IE3000) and the visual status (sighted or blind) of the participants. Experiment two extended the findings of experiment one by addressing the impact of an exploration-related factor on perceived roughness, namely the contact force an individual applies to a virtual texture. The interaction between this variable and the factors of groove width, endpoint and visual status was also addressed. Experiment three examined the perception of the size and angular extent of virtual 3-D objects via the PHANToM. With respect to the perception of virtual object size, the effect of the following factors was addressed: the size of the object (2.7, 3.6 and 4.5 cm); the type of virtual object (cube vs. sphere); the mode in which the virtual objects were presented; the endpoint used with the PHANToM and the visual status of the participants. With respect to the perception of virtual object angular extent, the effect of the following factors was addressed: the angular extent of the object (18, 41 and 64°); the endpoint used with the PHANToM and the visual status of the participants. Experiment four examined the perception of the size and angular extent of real counterparts to the virtual 3-D objects used in experiment three, and manipulated the conditions under which participants examined the real objects: participants were asked to give judgements of object size and angular extent via the deactivated PHANToM, a stylus probe, a bare index finger and without any constraints on their exploration. In addition to this exploration-type factor, experiment four examined the impact on perceived size and angular extent in the real world of the same factors that had been examined in virtual reality. Experiments five and six examined the consistency of the perception of linear extent across the 3-D axes in virtual space. Both experiments manipulated the following factors: line extent (2.7, 3.6 and 4.5 cm); line dimension (x, y and z axis); movement type (active vs. passive movement) and visual status. Experiment six additionally manipulated the direction of movement within the 3-D axes. Perceived roughness was assessed by the method of magnitude estimation.
The perceived size and angular extent of the various virtual stimuli and their real counterparts were assessed by the method of magnitude reproduction. This technique was also used to assess perceived extent across the 3-D axes. Touch perception via the PHANToM was found to be broadly similar for sighted and blind participants. Touch perception in virtual reality was also found to be broadly similar between two different 3-D force feedback devices (the PHANToM and the IE3000). However, the endpoint used with the PHANToM device was found to exert significant but inconsistent effects on the perception of virtual object attributes. Touch perception with the PHANToM across the 3-D axes was found to be anisotropic in a similar way to the real world, with the illusion that radial extents were perceived as longer than equivalent tangential extents. The perception of 3-D object size and angular extent was found to be comparable between virtual reality and the real world, particularly under conditions where the participants' exploration of the real objects was constrained to a single point of contact. An intriguing touch illusion, whereby virtual objects explored from the inside were perceived to be larger than the same objects explored from the outside, was found to occur widely in virtual reality as well as in the real world. This thesis contributes to knowledge of touch perception in virtual reality. The findings have interesting implications for theories of touch perception, both virtual and real.
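
    Perceived roughness data gathered by magnitude estimation are commonly summarized by fitting a Stevens-style power function to the estimates. As a minimal illustration only, with placeholder groove widths and estimates rather than data from the thesis, the exponent can be recovered from a straight-line fit in log-log coordinates:

    import numpy as np

    # Hypothetical magnitude-estimation data: groove width (mm) vs. mean roughness estimate.
    groove_width = np.array([0.25, 0.5, 1.0, 1.5, 2.0])       # placeholder stimulus values
    roughness_est = np.array([12.0, 18.0, 27.0, 33.0, 38.0])  # placeholder mean estimates

    # Stevens' power law (R = k * w**n) is linear in log-log coordinates,
    # so the exponent n is the slope of a straight-line fit to the logged data.
    n, log_k = np.polyfit(np.log(groove_width), np.log(roughness_est), 1)
    print(f"exponent n = {n:.2f}, scaling constant k = {np.exp(log_k):.2f}")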

    Assessing haptic properties for data representation

    This paper describes the results of a series of forced-choice design experiments investigating the discrimination of material properties using a PHANToM haptic device. Research has shown that the PHANToM is effective at displaying graphical information to blind people, but the techniques used so far have been very simple. Our experiments showed that subjects' discrimination of friction was significantly better than that of stiffness or the spatial period of sinusoidal textures, over the range of stimuli investigated. Thus, it is proposed that graphical data could be made more easily accessible to blind users by scaling the data values to friction rather than to shape or size, as in traditional bar charts.
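
    The closing proposal, scaling data values to friction rather than to the length of a bar, amounts to rescaling each value into the device's usable friction range. A minimal sketch of such a mapping, in which the friction limits and data are illustrative rather than values taken from the paper:

    def value_to_friction(value, v_min, v_max, f_min=0.1, f_max=0.9):
        """Linearly rescale a data value into a usable friction-coefficient range.

        f_min and f_max are illustrative device limits, not values from the paper.
        """
        if v_max == v_min:
            return (f_min + f_max) / 2.0
        t = (value - v_min) / (v_max - v_min)
        return f_min + t * (f_max - f_min)

    data = [3.0, 7.5, 12.0, 5.25]  # the values a bar chart would otherwise encode as height
    frictions = [value_to_friction(v, min(data), max(data)) for v in data]
    print(frictions)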

    The Speed, Precision and Accuracy of Human Multisensory Perception following Changes to the Visual Sense

    Human adults can combine information from multiple senses to improve their perceptual judgments. Visual and multisensory experience plays an important role in the development of multisensory integration; however, it is unclear to what extent changes in vision impact multisensory processing later in life. In particular, it is not known whether adults account for changes to the relative reliability of their senses following sensory loss, treatment or training. Using psychophysical methods, this thesis studied the multisensory processing of individuals experiencing changes to the visual sense. Chapters 2 and 3 assessed whether patients implanted with a retinal prosthesis (having been blinded by a retinal degenerative disease) could use this new visual signal together with non-visual information to improve their speed or precision on multisensory tasks. Due to large differences between the reliabilities of the visual and non-visual cues, patients were not always able to benefit from the new visual signal. Chapter 4 assessed whether patients with degenerative visual loss adjust the weight given to visual and non-visual cues during audio-visual localization as their relative reliabilities change. Although some patients adjusted their reliance on vision across the visual field in line with predictions based on cue relative reliability, others (patients with visual loss limited to their central visual field only) did not. Chapter 5 assessed whether training with either more reliable or less reliable visual feedback could enable normally sighted adults to overcome an auditory localization bias. Findings suggest that visual information, irrespective of reliability, can be used to overcome at least some non-visual biases. In summary, this thesis documents multisensory changes following changes to the visual sense. The results improve our understanding of adult multisensory plasticity and have implications for successful treatments and rehabilitation following sensory loss.
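
    The predictions "based on cue relative reliability" referred to above correspond to the standard reliability-weighted (maximum-likelihood) cue-combination model, in which each cue is weighted by its inverse variance and the combined estimate is more precise than either cue alone. A minimal sketch of that model, with made-up means and variances for illustration:

    def combine_cues(mu_v, var_v, mu_a, var_a):
        """Reliability-weighted combination of a visual and an auditory estimate.

        Each cue is weighted by its inverse variance (its reliability); the
        combined variance is lower than either single-cue variance.
        """
        w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
        w_a = 1.0 - w_v
        mu_combined = w_v * mu_v + w_a * mu_a
        var_combined = 1.0 / (1.0 / var_v + 1.0 / var_a)
        return mu_combined, var_combined

    # Illustrative values: a degraded visual cue (high variance) and a more reliable auditory cue.
    print(combine_cues(mu_v=10.0, var_v=4.0, mu_a=6.0, var_a=1.0))  # -> (6.8, 0.8)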

    Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users

    The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications that convey graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users.

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1 mm should be maintained for accurate line detection (Exp-1), (2) a minimum interline gap of 4 mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4 mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4 mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4 mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line-tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence in support of learning from vision and touch as leading to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
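
    Because the Phase I and Phase II guidelines are stated in millimetres, a touchscreen renderer has to convert them to pixels for a particular display density. A minimal sketch of that conversion, where the constants simply restate the guidelines above and the display density is an assumed example value:

    MIN_LINE_WIDTH_MM = 1.0      # minimum vibrotactile line width for reliable detection (Exp-1)
    MIN_LINE_GAP_MM = 4.0        # minimum gap between parallel vibrotactile lines (Exp-2)
    MIN_TRACING_WIDTH_MM = 4.0   # minimum width for line tracing and orientation tasks (Exp-4, Exp-5)

    def mm_to_px(mm, dpi=264):
        """Convert millimetres to pixels; the display density (dpi) is an example value."""
        return round(mm / 25.4 * dpi)

    print(mm_to_px(MIN_LINE_WIDTH_MM), mm_to_px(MIN_LINE_GAP_MM), mm_to_px(MIN_TRACING_WIDTH_MM))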

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations could be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive data exploration non-visually, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies was conducted to quantify the performance attained in the exploration of tabular data sets for overview information using TableVis. This was done by comparing HDS with the main current non-visual accessibility technique (speech synthesis) and by quantifying the effect of different sizes of data sets on user performance; results showed that HDS led to better performance than speech and that this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data in order to prevent working-memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of numerical data tables across a broad range of sizes, offering techniques to extract overview information quickly, to perform perceptual estimations of data descriptors (relative arithmetic mean) and to manage demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
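
    The central idea of High-Density Sonification, rendering many data points as a single auditory event whose parameters track an aggregate of those points, can be illustrated with a toy mapping from the mean of each table row to a single MIDI-style pitch. The pitch range, scaling and data below are assumptions for illustration, not TableVis's actual mapping:

    def row_to_pitch(row, lo, hi, pitch_lo=48, pitch_hi=84):
        """Map the mean of a row of values to one MIDI-style pitch.

        Many data points become a single auditory event, so the relative means of
        rows can be compared by ear. The pitch range is an illustrative assumption.
        """
        mean = sum(row) / len(row)
        t = (mean - lo) / (hi - lo) if hi != lo else 0.5
        return round(pitch_lo + t * (pitch_hi - pitch_lo))

    table = [[3, 5, 4, 6], [9, 11, 10, 12], [1, 2, 2, 3]]
    flat = [v for row in table for v in row]
    print([row_to_pitch(row, min(flat), max(flat)) for row in table])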

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. The traditional approaches that have addressed this issue have been plagued with various shortcomings (such as the use of unintuitive sensory translation rules, prohibitive costs and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses aspects of these shortcomings by designing and experimentally evaluating an intuitive approach, called a vibro-audio interface, for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets), where hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Exp 1, 2, and 3) assessed usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found using traditional hardcopy tactile graphics, which are the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touchscreen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming operations are traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) presents several challenges relating both to the cognitive constraints of the user and to the technological constraints of the interface. To address these issues, two human behavioral experiments were conducted that assessed the influence of panning (Exp 4) and zooming (Exp 5) operations on non-visual learning of graphical material and its related human factors. Results from experiments 4 and 5 indicated that the incorporation of panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed approach, using a vibro-audio interface, is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials.
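
    At its core, the interaction described above is a hit test: on each touch movement, the finger position is checked against the on-screen elements and, when one is hit, that element's vibration pattern and spoken label are triggered. A minimal, device-agnostic sketch, in which the element structure and the vibrate/speak callbacks are invented for illustration:

    class Element:
        """A rectangular on-screen element with associated non-visual feedback (illustrative)."""
        def __init__(self, x, y, w, h, vibration_pattern, label):
            self.x, self.y, self.w, self.h = x, y, w, h
            self.vibration_pattern = vibration_pattern
            self.label = label

        def contains(self, px, py):
            return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def on_touch_move(px, py, elements, vibrate, speak):
        """Trigger vibration and speech for whichever element the finger is currently on."""
        for el in elements:
            if el.contains(px, py):
                vibrate(el.vibration_pattern)
                speak(el.label)
                return el
        return None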

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The thesis's main aim is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in the case of visual impairment. Chapter 1 summarizes the main research findings on the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from that of our own body. Therefore, it might be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective. Chapter 2 presents original studies carried out during my Ph.D. to investigate the mechanisms underpinning spatial development and to compare the spatial performance of individuals with impaired and typical visual experience (visually impaired and sighted, respectively). Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in the case of congenitally blind or blindfolded individuals, respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms for assessing the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of these new devices provides the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation.
Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train the residual sensory modalities within a multisensory rehabilitation programme.