6,061 research outputs found

    An environment for studying the impact of spatialising sonified graphs on data comprehension

    We describe AudioCave, an environment for exploring the impact of spatialising sonified graphs on a set of numerical data comprehension tasks. Its design builds on findings regarding the effectiveness of sonified graphs for numerical data overview and discovery by visually impaired and blind students. We demonstrate its use as a test bed for comparing the approach of accessing a single sonified numerical datum at a time with one where multiple sonified numerical data can be accessed concurrently. Results from this experiment show that concurrent access facilitates tackling our set of multivariate data comprehension tasks. AudioCave also demonstrates how the spatialisation of the sonified graphs provides opportunities for sharing the representation. We present two experiments investigating users solving the set of data comprehension tasks collaboratively by sharing the data representation.
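
    As an illustration of the kind of mapping such an environment relies on, here is a minimal sketch, assuming a simple linear value-to-pitch mapping, fixed note lengths, and stereo panning to stand in for spatialisation; the paper's actual synthesis and spatialisation methods are not specified here. Each data series becomes a sequence of tones, and concurrent series are separated across the stereo field so they can be attended to at the same time.

    import math, struct, wave

    RATE = 44100
    NOTE_SEC = 0.25              # duration of one sonified datum (assumed)
    F_MIN, F_MAX = 220.0, 880.0  # assumed pitch range (A3..A5)

    def value_to_freq(v, lo, hi):
        """Linearly map a data value onto the assumed pitch range."""
        return F_MIN + (F_MAX - F_MIN) * (v - lo) / (hi - lo)

    def sonify(series_list, path="sonified.wav"):
        """Render several series concurrently; series k gets its own pan position."""
        lo = min(min(s) for s in series_list)
        hi = max(max(s) for s in series_list)
        n_notes = max(len(s) for s in series_list)
        frames = []
        for i in range(n_notes):
            for n in range(int(RATE * NOTE_SEC)):
                t = n / RATE
                left = right = 0.0
                for k, series in enumerate(series_list):
                    if i >= len(series):
                        continue
                    pan = k / max(1, len(series_list) - 1)   # 0 = hard left, 1 = hard right
                    sample = math.sin(2 * math.pi * value_to_freq(series[i], lo, hi) * t)
                    left += (1.0 - pan) * sample
                    right += pan * sample
                scale = 32767 / max(1, len(series_list))
                frames.append(struct.pack("<hh", int(left * scale), int(right * scale)))
        with wave.open(path, "wb") as w:
            w.setnchannels(2)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(b"".join(frames))

    # Two example series rendered at once: one rising, one falling.
    sonify([[1, 3, 2, 5, 4], [5, 4, 4, 2, 1]])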

    Taxonomising the senses

    I argue that we should reject the sparse view that there are, or could be, only a small number of rather distinct senses. Once one appreciates this, one can see that there is no need to choose between the standard criteria that have been proposed as ways of individuating the senses – representation, phenomenal character, proximal stimulus and sense organ – or any other criteria one may deem important. Rather, one can use these criteria in conjunction to form a fine-grained taxonomy of the senses. We can think of these criteria as defining a multidimensional space within which we can locate each of the senses we are familiar with, and which also defines the space of possible senses there could be.
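
    Purely as an illustration of the multidimensional-space idea, the following sketch (not from the paper) represents each sense as a point whose coordinates are the four criteria named above; the example entries and the helper shared_dimensions are assumptions introduced only for demonstration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sense:
        name: str
        representation: str        # what the sense represents
        phenomenal_character: str  # what the experience is like
        proximal_stimulus: str     # the physical stimulus at the receptor
        sense_organ: str           # the organ that transduces the stimulus

    SENSES = [
        Sense("vision", "spatial layout", "visual", "light", "eye"),
        Sense("human echolocation", "spatial layout", "auditory", "sound waves", "ear"),
    ]

    def shared_dimensions(a: Sense, b: Sense) -> list[str]:
        """List the criteria on which two senses coincide; a fine-grained taxonomy
        keeps all four dimensions instead of forcing one criterion to decide alone."""
        dims = ["representation", "phenomenal_character", "proximal_stimulus", "sense_organ"]
        return [d for d in dims if getattr(a, d) == getattr(b, d)]

    # Echolocation sits near vision on one axis and near hearing on the others.
    print(shared_dimensions(*SENSES))  # -> ['representation']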

    The phonetics of second language learning and bilingualism

    This chapter provides an overview of major theories and findings in the field of second language (L2) phonetics and phonology. Four main conceptual frameworks are discussed and compared: the Perceptual Assimilation Model-L2, the Native Language Magnet Theory, the Automatic Selection Perception Model, and the Speech Learning Model. These frameworks differ in terms of their empirical focus, including the type of learner (e.g., beginner vs. advanced) and target modality (e.g., perception vs. production), and in terms of their theoretical assumptions, such as the basic unit or window of analysis that is relevant (e.g., articulatory gestures, position-specific allophones). Despite the divergences among these theories, three recurring themes emerge from the literature reviewed. First, the learning of a target L2 structure (segment, prosodic pattern, etc.) is influenced by phonetic and/or phonological similarity to structures in the native language (L1). In particular, L1-L2 similarity exists at multiple levels and does not necessarily benefit L2 outcomes. Second, the role played by certain factors, such as acoustic phonetic similarity between close L1 and L2 sounds, changes over the course of learning, such that advanced learners may differ from novice learners with respect to the effect of a specific variable on observed L2 behavior. Third, the connection between L2 perception and production (insofar as the two are hypothesized to be linked) differs significantly from the perception-production links observed in L1 acquisition. In service of elucidating the predictive differences among these theories, this contribution discusses studies that have investigated L2 perception and/or production primarily at a segmental level. In addition to summarizing the areas in which there is broad consensus, the chapter points out a number of questions which remain a source of debate in the field today.
    Accepted manuscript: https://drive.google.com/open?id=1uHX9K99Bl31vMZNRWL-YmU7O2p1tG2wH
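
    To make the notion of acoustic phonetic similarity concrete, here is a minimal sketch assuming the common practice of comparing vowels by distance in F1/F2 formant space; the inventory, the rough formant values, and the helper closest_l1_vowel are illustrative assumptions, not the chapter's own method (real studies typically add scaling such as Bark or mel and speaker normalisation).

    import math

    L1_VOWELS = {            # hypothetical learner's native vowel inventory (F1, F2 in Hz)
        "i": (270, 2290),
        "e": (390, 2300),
        "a": (730, 1090),
        "u": (300, 870),
    }

    def closest_l1_vowel(f1: float, f2: float) -> tuple[str, float]:
        """Return the L1 vowel whose (F1, F2) is nearest to the L2 token."""
        best, best_d = None, float("inf")
        for vowel, (v1, v2) in L1_VOWELS.items():
            d = math.hypot(f1 - v1, f2 - v2)
            if d < best_d:
                best, best_d = vowel, d
        return best, best_d

    # An L2 vowel token (e.g. English /ɪ/, roughly F1 = 400 Hz, F2 = 1900 Hz)
    print(closest_l1_vowel(400, 1900))   # nearest L1 category here is /e/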

    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009.
    Tangible Auditory Interfaces (TAIs) investigate the capabilities of interconnecting Tangible User Interfaces and Auditory Displays. TAIs utilise artificial physical objects as well as soundscapes to represent digital information. Interconnecting the two fields establishes a tight coupling between information and operation that builds on people's familiarity with the physical interrelations involved. This work gives a formal introduction to TAIs and demonstrates their key features through seven proof-of-concept applications.
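
    As a rough illustration of the tight coupling between physical state and auditory display that TAIs rely on, this sketch (not from the thesis) assumes a tabletop object whose normalised position drives pitch, loudness, and panning; the concrete mapping is an assumption made for demonstration purposes.

    def object_to_sound(x: float, y: float) -> dict:
        """Map a normalised tabletop position (0..1, 0..1) to sound parameters."""
        assert 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
        return {
            "frequency_hz": 110.0 * (2.0 ** (x * 3)),  # left-to-right: three octaves up
            "amplitude": 0.1 + 0.9 * y,                # front-to-back: louder
            "pan": 2.0 * x - 1.0,                      # sound follows the object spatially
        }

    # Moving a token across the table continuously re-parameterises the sound.
    print(object_to_sound(0.25, 0.5))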

    Cognitive landmark research beyond visual cues using GIScience
