65 research outputs found

    Size and shape constancy in consumer virtual reality

    With the increase in popularity of consumer virtual reality headsets, for research and other applications, it is important to understand the accuracy of 3D perception in VR. We investigated the perceptual accuracy of near-field virtual distances using a size and shape constancy task in two commercially available devices. Participants wore either the HTC Vive or the Oculus Rift and adjusted the size of a virtual stimulus to match the geometric qualities (size and depth) of a physical stimulus they were able to refer to haptically. The judgments participants made allowed for an indirect measure of their perception of the egocentric, virtual distance to the stimuli. The data show under-constancy and are consistent with research from carefully calibrated psychophysical techniques. There was no difference in the degree of constancy found in the two headsets. We conclude that consumer virtual reality headsets provide a sufficiently high degree of accuracy in distance perception to allow them to be used confidently in future experimental vision science and other research applications in psychology.

    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: that is, whether, once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., they do not retain modality-specific features).

    Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications

    LĂŒcking A, Bergmann K, Hahn F, Kopp S, Rieser H. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. Journal on Multimodal User Interfaces. 2013;7(1-2):5-18.
    Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA, demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production, and gestures' functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.

    Coordination of Syntax and Semantics in Discourse

    Kindt W, Rieser H. Syntax- und Semantikkoordination im Dialog. Kognitionswissenschaft. 1999;8(3):123-128.
    Coordination in dialogue means that emerging problems are solved under the mutual control of the agents according to established procedures. In task-oriented dialogues, the social frame for coordination is fixed. Coordination becomes necessary because of the information gap between instructor and constructor, the dominant dialogue pattern "give directive" / "follow directive", and differences in speaker ontology, language variation, and focus control. We give three dialogue examples of coordination in which the coordination problem is solved via a side sequence. Side sequences, so-called "clarification sequences", can be realized as a clearly delimited, autonomous subdialogue or be embedded into the original utterance. A syntax for dialogue, in particular for side sequences and their embedding, must capture special constructions such as extraposition and afterthought, operate incrementally, and be able to switch back and forth between production and reception. A Situated Artificial Communicator should possess all of these capabilities. It is proposed that the grammar model required for this be conceived within the framework of a theory of cooperative multi-person games.
    • 

    corecore