    MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

    Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only. A multi-purpose headset to exclusively augment peripheral vision did not exist yet. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset to conduct peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks. Comment: Accepted IEEE VR 2023 conference paper.

    The effect of social context on the use of visual information

    Social context modulates action kinematics. Less is known about whether social context also affects the use of task-relevant visual information. We tested this hypothesis by examining whether the instruction to play table tennis competitively or cooperatively affected the kind of visual cues necessary for successful table tennis performance. In two experiments, participants played table tennis in a dark room with only the ball, net, and table visible. Visual information about both players’ actions was manipulated by means of self-glowing markers. We recorded the number of successful passes for each player individually. The results showed that participants’ performance increased when their own body was rendered visible in both the cooperative and the competitive condition. However, social context modulated the importance of different sources of visual information about the other player. In the cooperative condition, seeing the other player’s racket had the largest effects on performance increase, whereas in the competitive condition, seeing the other player’s body resulted in the largest performance increase. These results suggest that social context selectively modulates the use of visual information about others’ actions in social interactions.

    The influence of different sources of visual information on joint action performance

    Humans are social beings and they often act jointly with other humans (joint actions) rather than alone. Prominent theories of joint action agree on visual information being critical for successful joint action coordination but are vague about the exact source of visual information being used during a joint action. Knowing which sources of visual information are used, however, is important for a more detailed characterization of the functioning of action coordination in joint actions. The current Ph.D. research examines the importance of different sources of visual information on joint action coordination under realistic settings. In three studies I examined the influence of different sources of visual information (Study 1), the functional role of different sources of visual information (Study 2), and the effect of social context on the use of visual information (Study 3) in a table tennis game. The results of these studies revealed that (1) visual anticipation of the interaction partner and the interaction object is critical in natural joint actions, (2) different sources of visual information are critical at different temporal phases during the joint action, and (3) the social context modulates the importance of different sources of visual information. In sum, this work provides important and new empirical evidence about the importance of different sources of visual information in close-to-natural joint actions.

    Virtual reality as a tool for balance research: Eyes open body sway is reproduced in photo-realistic, but not in abstract virtual scenes

    Virtual reality (VR) technology is commonly used in balance research due to its ability to simulate real world experiences under controlled experimental conditions. However, several studies reported considerable differences in balance behavior in real world environments as compared to virtual environments presented in a head mounted display. Most of these studies were conducted more than a decade ago, at a time when VR was still struggling with major technical limitations (delays, limited field-of-view, etc.). In the meantime, VR technology has progressed considerably, enhancing its capacity to induce the feeling of presence and behavioural realism. In this study, we addressed two questions: Has VR technology now reached a point where balance is similar in real and virtual environments? And does the integration of visual cues for balance depend on the subjective experience of presence? We used a state-of-the-art head mounted VR system and a custom-made balance platform to compare balance when viewing (1) a real-world environment, (2) a photo-realistic virtual copy of the real-world environment, (3) an abstract virtual environment consisting of only spheres and bars ('low presence' VR condition), and, as reference, (4) a condition with eyes closed. Body sway of ten participants was measured in three different support surface conditions: (A) quiet stance, (B) stance on a sway referenced surface, and (C) surface tilting following a pseudo-random sequence. A 2-level repeated measures ANOVA and post hoc analyses revealed no significant differences in body sway between viewing the real world environment and the photo-realistic virtual copy. In contrast, body sway was increased in the 'low presence' abstract scene and further increased with eyes closed. Results were consistent across platform conditions.
Our results support the hypothesis that state-of-the-art VR has reached a point of behavioural realism in which balance in photo-realistic VR is similar to balance in a real environment. Presence was lower in the abstract virtual condition as compared to the photo-realistic condition as measured by the IPQ presence questionnaire. Thus, our results indicate that spatial presence may be a moderating factor, but further research is required to confirm this notion. We conceive that virtual reality is a valid tool for balance research, but that the properties of the virtual environment affect results.
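The abstract does not specify which body-sway summary measure was used; a minimal sketch of one common choice, the total path length of the centre-of-pressure (COP) trajectory, might look like this (the function name and the example trajectory are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def sway_path_length(cop_xy):
    """Total path length of the centre-of-pressure trajectory.

    A common body-sway summary measure: the sum of the Euclidean
    distances between consecutive COP samples. cop_xy is a sequence
    of (x, y) positions.
    """
    cop_xy = np.asarray(cop_xy, dtype=float)
    steps = np.diff(cop_xy, axis=0)                # per-sample displacement
    return float(np.sum(np.linalg.norm(steps, axis=1)))

# Hypothetical example: a closed square trajectory of side 1 has path length 4.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(sway_path_length(square))  # 4.0
```

Larger path length under the abstract scene or with eyes closed would correspond to the increased body sway reported above.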

    The Influence of Visual Information on the Motor Control of Table Tennis Strokes

    Theories of social interaction (i.e., common coding theory) suggest that visual information about the interaction partner is critical for successful interpersonal action coordination. Seeing the interaction partner allows an observer to understand and predict the interaction partner's behavior. However, it is unknown which of the many sources of visual information about an interaction partner (e.g., body, end effectors, and/or interaction objects) are used for action understanding and thus for the control of movements in response to observed actions. We used a novel immersive virtual environment to investigate this further. Specifically, we asked participants to perform table tennis strokes in response to table tennis balls stroked by a virtual table tennis player. We tested the effect of the visibility of the ball, the paddle, and the body of the virtual player on task performance and movement kinematics. Task performance was measured as the minimum distance between the center of the paddle and the center of the ball (radial error). Movement kinematics was measured as variability in the paddle speed of repeatedly executed table tennis strokes (stroke speed variability). We found that radial error was reduced when the ball was visible compared to invisible. However, seeing the body and/or the racket of the virtual player only reduced radial error when the ball was invisible. There was no influence of seeing the ball on stroke speed variability. However, we found that stroke speed variability was reduced when either the body or the paddle of the virtual player was visible. Importantly, the differences in stroke speed variability were largest in the moment when the virtual player hit the ball. This suggests that seeing the virtual player's body or paddle was important for preparing the stroke response. These results demonstrate for the first time that the online control of arm movements is coupled with visual body information about an opponent.
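The two dependent measures defined above (radial error and stroke speed variability) can be sketched directly from their descriptions; a minimal illustration, with hypothetical function names and toy trajectories, might be:

```python
import numpy as np

def radial_error(paddle_xyz, ball_xyz):
    """Minimum distance between paddle centre and ball centre over a stroke.

    paddle_xyz, ball_xyz: sequences of simultaneous 3D positions.
    """
    d = np.linalg.norm(np.asarray(paddle_xyz, dtype=float)
                       - np.asarray(ball_xyz, dtype=float), axis=1)
    return float(d.min())

def stroke_speed_variability(speed_profiles):
    """Across-stroke standard deviation of paddle speed at each time sample.

    speed_profiles: array of shape (n_strokes, n_samples), where each row
    is the time-normalized paddle speed profile of one stroke.
    """
    return np.std(np.asarray(speed_profiles, dtype=float), axis=0, ddof=1)

# Toy data: paddle misses the ball by 3 units at its closest point.
print(radial_error([[0, 0, 0], [1, 0, 0]], [[0, 3, 0], [1, 4, 0]]))  # 3.0
```

Examining `stroke_speed_variability` sample by sample is what allows statements like "differences were largest at the moment the virtual player hit the ball".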

    IMVEST, an immersive multimodal virtual environment stress test for humans that adjusts challenge to individual's performance

    Laboratory stressors are essential tools to study the human stress response. However, despite considerable progress in the development of stress induction procedures in recent years, the field is still missing standardization and the methods employed frequently require considerable personnel resources. Virtual reality (VR) offers flexible solutions to these problems, but available VR stress-induction tests still contain important sources of variation that challenge data interpretation. One of the major drawbacks is that tasks based on motivated performance do not adapt to individual abilities. Here, we provide open access to, and present, a novel and standardized immersive multimodal virtual environment stress test (IMVEST) in which participants are simultaneously exposed to mental (arithmetic calculations) and environmental challenges, along with intense visual and auditory stimulation. It contains critical elements of stress elicitation – perceived threat to physical self, social-evaluative threat and negative feedback, uncontrollability and unpredictability – and adjusts the mathematical challenge to the individual's ongoing performance. It is accompanied by a control VR scenario offering a comparable but not stressful situation. We validate and characterize the stress response to IMVEST in one-hundred-and-eighteen participants. Both cortisol and a wide range of autonomic nervous system (ANS) markers – extracted from the electrocardiogram, electrodermal activity and respiration – are significantly affected. We also show that ANS features can be used to train a stress prediction machine learning model that strongly discriminates between stress and control conditions, and indicates which aspects of IMVEST affect specific ANS components.
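The abstract does not state which classifier was used for stress prediction; a minimal sketch of the general idea (a linear classifier trained on ANS-style features, here with entirely synthetic stand-in data and a hand-rolled logistic regression) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ANS features (e.g. heart rate, skin conductance,
# respiration rate): stressed samples are shifted upward. The real IMVEST
# features and labels are not reproduced here.
X_ctrl = rng.normal(0.0, 1.0, size=(50, 3))   # control condition
X_strs = rng.normal(1.5, 1.0, size=(50, 3))   # stress condition
X = np.vstack([X_ctrl, X_strs])
y = np.array([0] * 50 + [1] * 50)

# Minimal logistic regression trained by batch gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted P(stress)
    g = p - y                                  # gradient of log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

In the same spirit as the paper, the learned weights `w` indicate which feature dimensions carry the discrimination between stress and control.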

    Putting Actions in Context: Visual Action Adaptation Aftereffects Are Modulated by Social Contexts

    The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants’ perceptual bias of a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by emotional content of the action alone (experiment 3) and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.

    Red shape, blue shape: political ideology influences the social perception of body shape

    Political elections have a profound impact on individuals and societies. Optimal voting is thought to be based on informed and deliberate decisions; yet it has been demonstrated that the outcomes of political elections are biased by the perception of candidates’ facial features and the stereotypical traits voters attribute to these. Interestingly, political identification changes the attribution of stereotypical traits from facial features. This study explores whether the perception of body shape elicits similar effects on political trait attribution and whether these associations can be visualized. In Experiment 1, ratings of 3D body shapes were used to model the relationship between perception of 3D body shape and the attribution of political traits such as ‘Republican’, ‘Democrat’, or ‘Leader’. This allowed analyzing and visualizing the mental representations of stereotypical 3D body shapes associated with each political trait. Experiment 2 was designed to test whether political identification of the raters affected the attribution of political traits to different types of body shapes. The results show that humans attribute political traits to the same body shapes differently depending on their own political preference. These findings show that our judgments of others are influenced by their body shape and our own political views. Such judgments have potential political and societal implications.
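Modelling the relationship between shape ratings and trait attributions, as in Experiment 1, is often done by regressing trait ratings onto coordinates in a low-dimensional body-shape space; a minimal sketch of that idea, with synthetic shape coefficients and a hypothetical trait axis (none of which come from the paper), might be:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each body is a point in a 5-dimensional shape space
# (e.g. coefficients of a statistical 3D body model); raters score each
# body on a political trait such as 'Leader'.
shapes = rng.normal(size=(40, 5))                      # 40 bodies
true_axis = np.array([1.0, -0.5, 0.0, 0.0, 0.0])       # assumed ground truth
ratings = shapes @ true_axis + rng.normal(0, 0.1, 40)  # noisy trait ratings

# Least-squares fit recovers the direction in shape space associated with
# the trait; moving a body along this axis visualizes the stereotypical
# body shape for that trait.
axis, *_ = np.linalg.lstsq(shapes, ratings, rcond=None)
print(np.round(axis, 2))
```

Fitting such an axis separately for raters of different political identification is one way the group differences reported in Experiment 2 could be visualized.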
