618 research outputs found

    Impact of model fidelity in factory layout assessment using immersive discrete event simulation

    Get PDF
    Discrete Event Simulation (DES) can help speed up the layout design process. It offers further benefits when combined with Virtual Reality (VR). The latest technology, Immersive Virtual Reality (IVR), immerses users in virtual prototypes of their manufacturing plants-to-be, potentially helping decision-making. This work seeks to evaluate the impact of visual fidelity, which refers to the degree to which objects in VR conform to the real world, using an IVR visualisation of the DES model of an actual shop floor. User studies are performed using scenarios populated with low- and high-fidelity models. Study participants carried out four tasks representative of layout decision-making. Limitations of existing IVR technology were found to cause motion sickness. The results indicate that, for the particular group of naïve modellers studied, there is no significant difference in benefits between low and high fidelity, suggesting that low-fidelity VR models may be more cost-effective for this group.
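
    The event-queue mechanics underlying any DES engine can be sketched in a few lines. This is a generic illustration of the technique the abstract names, not the study's actual model; the shop-floor event labels and times are hypothetical:

```python
import heapq

def run_des(events):
    """Advance a simulation clock through a time-ordered event queue --
    the core loop behind a discrete event simulation."""
    clock = 0.0
    log = []
    heapq.heapify(events)               # (time, label) pairs
    while events:
        clock, label = heapq.heappop(events)  # jump to the next event time
        log.append((clock, label))
    return clock, log

# Hypothetical shop-floor events: part arrivals and a machine completion.
end_time, trace = run_des([(5.0, "part arrives"),
                           (2.0, "machine done"),
                           (7.5, "part arrives")])
# trace is ordered by simulated time, regardless of insertion order
```

    A real layout study would attach state changes (queue lengths, machine utilisation) to each event; the IVR layer then visualises that evolving state.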

    On the application of extended reality technologies for the evaluation of product characteristics during the initial stages of the product development process

    Get PDF
    Fast-growing global markets are forcing companies to continuously re-assess customer needs when designing new products. Product evaluation is a critical task to ensure success, but it can require significant financial and time investments. From an end-user standpoint, consumers must also evaluate multiple design options before purchasing a product, which is often a complex process, especially in online environments where traditional formats coexist with more sophisticated media. Modern extended reality technologies have become an effective tool for product assessment in professional design environments as well as a powerful mechanism for consumers during decision-making activities. However, the modality used to view and evaluate the product may affect the perceptual response and thus the user's overall evaluation. In this paper, we examine the influence of visual media in product assessment using different designs of a particular product typology. We discuss two studies where a group of participants used the semantic differential technique to evaluate four chair designs displayed in three different media. In our first study, participants used simultaneous evaluation to assess the products as presented in photographs, a non-immersive environment, and an Augmented Reality (AR) experience. In the second study, participants evaluated the product separately as viewed in non-photorealistic rendering, AR, and virtual reality (VR). We used the Aligned Rank Transform procedure to find differences between groups for the semantic scales, the overall evaluation, the purchasing decision, and the response confidence. Our results show that visual media influences product perception. Certain characteristics in Jordan's physio-pleasure category are particularly significant, as perceptual differences are more pronounced. Immersive media can highlight some product attributes, and a joint evaluation can help minimize these differences.
    The authors would like to thank the team at Clon Digital for providing us with a software license to perform the experiment, and students Jenny Trieu, Abizer Raja, Arturo Barrera, and Carrah Kaijser from the University of Houston for the inspiration for the chair designs used in our study.
    Palacios-Ibáñez, A.; Navarro-Martínez, R.; Blasco-Esteban, J.; Contero, M.; Dorribo-Camba, J. (2023). On the application of extended reality technologies for the evaluation of product characteristics during the initial stages of the product development process. Computers in Industry, 144. https://doi.org/10.1016/j.compind.2022.103780
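
    For readers unfamiliar with the Aligned Rank Transform: in a single-factor design the alignment step is trivial, and the procedure reduces to ranking all responses together and running an ordinary ANOVA on the ranks. The sketch below illustrates that reduced case with made-up rating data; it is not the authors' analysis (their multifactor design would require aligning each response for the effect of interest before ranking, as the ARTool packages do):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 7-point semantic-scale ratings, one group per display medium.
photo = rng.integers(1, 5, 20).astype(float)   # photographs
ar    = rng.integers(3, 8, 20).astype(float)   # augmented reality
vr    = rng.integers(3, 8, 20).astype(float)   # virtual reality

# Rank every observation in the pooled sample (mid-ranks for ties),
# then run a standard parametric one-way ANOVA on those ranks.
ranks = stats.rankdata(np.concatenate([photo, ar, vr]))
r_photo, r_ar, r_vr = np.split(ranks, 3)
f_stat, p_value = stats.f_oneway(r_photo, r_ar, r_vr)
```

    Ranking first makes the test robust to the ordinal, non-normal character of Likert-style responses while keeping the familiar F-test machinery.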

    Recalibration of Perceived Distance in Virtual Environments Occurs Rapidly and Transfers Asymmetrically Across Scale

    Get PDF
    Distance in immersive virtual reality is commonly underperceived relative to intended distance, causing virtual environments to appear smaller than they actually are. However, a brief period of interaction by walking through the virtual environment with visual feedback can cause dramatic improvement in perceived distance. The goal of the current project was to determine how quickly improvement occurs as a result of walking interaction (Experiment 1) and whether improvement is specific to the distances experienced during interaction, or whether improvement transfers across scales of space (Experiment 2). The results show that five interaction trials resulted in a large improvement in perceived distance, and that subsequent walking interactions showed continued but diminished improvement. Furthermore, interaction with near objects (1-2 m) improved distance perception for near but not far (4-5 m) objects, whereas interaction with far objects broadly improved distance perception for both near and far objects. These results have practical implications for ameliorating distance underperception in immersive virtual reality, as well as theoretical implications for distinguishing between theories of how walking interaction influences perceived distance.

    Defining Reality in Virtual Reality: Exploring Visual Appearance and Spatial Experience Focusing on Colour

    Get PDF
    Today, different actors in the design process have communication difficulties in visualizing and predicting how the not-yet-built environment will be experienced. Visually believable virtual environments (VEs) can make it easier for architects, users and clients to participate in the planning process. This thesis deals with the difficulties of translating reality into digital counterparts, focusing on visual appearance (particularly colour) and spatial experience. The goal is to develop knowledge of how different aspects of a VE, especially light and colour, affect the spatial experience; and thus to contribute to a better understanding of the prerequisites for visualizing believable spatial VR-models. The main aims are to 1) identify problems and test solutions for simulating realistic spatial colour and light in VR; and 2) develop knowledge of the spatial conditions in VR required to convey believable experiences, and evaluate different ways of visualizing spatial experiences. The studies are conducted from an architectural perspective; i.e. the whole of the spatial settings is considered, which is a complex task. One important contribution therefore concerns the methodology. Different approaches were used: 1) a literature review of relevant research areas; 2) a comparison between existing studies on colour appearance in 2D vs 3D; 3) a comparison between a real room and different VR-simulations; 4) elaborations with an algorithm for colour correction; 5) reflections in action on a demonstrator for correct appearance and experience; and 6) an evaluation of texture-styles with non-photorealistic expressions. The results showed various problems related to the translation and comparison of reality to VR. The studies pointed out the significance of inter-reflections, colour variations, and the perceived colour of light and shadowing for the visual appearance in real rooms. Some differences in VR were connected to arbitrary parameter settings in the software; heavily simplified chromatic information on illumination; and incorrect inter-reflections. The models were experienced differently depending on the application. Various spatial differences between reality and VR could be solved by visual compensation. The study with texture-styles pointed out the significance of varying visual expressions in VR-models.

    The Effect of Environmental Features, Self-Avatar, and Immersion on Object Location Memory in Virtual Environments

    Get PDF
    One potential application for virtual environments (VEs) is the training of spatial knowledge. A critical question is what features the VE should have in order to facilitate this training. Previous research has shown that people rely on environmental features, such as sockets and wall decorations, when learning object locations. The aim of this study is to explore the effect of varied environmental feature fidelity of VEs, the use of self-avatars, and the level of immersion on object location learning and recall. Following a between-subjects experimental design, participants were asked to learn the location of three identical objects by navigating one of three environments: a physical laboratory or low- and high-detail VE replicas of this laboratory. Participants who experienced the VEs could use either a head-mounted display (HMD) or a desktop computer. Half of the participants learning in the HMD and desktop systems were assigned a virtual body. Participants were then asked to place physical versions of the three objects in the physical laboratory in the same configuration. We tracked participant movement, measured object placement, and administered a questionnaire related to aspects of the experience. HMD learning resulted in statistically significant higher performance than desktop learning. Results indicate that, when learning in low detail VEs, there is no difference in performance between participants using HMD and desktop systems. Overall, providing the participant with a virtual body had a negative impact on performance. Preliminary inspection of navigation data indicates that spatial learning strategies are different in systems with varying levels of immersion.

    Video Manipulation Techniques for the Protection of Privacy in Remote Presence Systems

    Full text link
    Systems that give control of a mobile robot to a remote user raise privacy concerns about what the remote user can see and do through the robot. We aim to preserve some of that privacy by manipulating the video data that the remote user sees. Through two user studies, we explore the effectiveness of different video manipulation techniques at providing different types of privacy. We simultaneously examine task performance in the presence of privacy protection. In the first study, participants were asked to watch a video captured by a robot exploring an office environment and to complete a series of observational tasks under differing video manipulation conditions. Our results show that using manipulations of the video stream can lead to fewer privacy violations for different privacy types. Through a second user study, it was demonstrated that these privacy-protecting techniques were effective without diminishing the task performance of the remote user. Comment: 14 pages, 8 figures
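
    One common family of such video manipulations is region redaction. The abstract does not specify which filters the authors used; the sketch below shows a typical pixelation pass over a frame region, with a hypothetical synthetic frame and coordinates standing in for real video data:

```python
import numpy as np

def pixelate_region(frame, y0, y1, x0, x1, block=8):
    """Redact frame[y0:y1, x0:x1] by replacing each block x block tile
    with its per-channel mean colour -- a standard privacy filter."""
    region = frame[y0:y1, x0:x1].astype(float)
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))   # flatten tile to its mean
    frame[y0:y1, x0:x1] = region.astype(frame.dtype)
    return frame

# Hypothetical 64x64 RGB frame; redact the central 32x32 window.
frame = (np.arange(64 * 64 * 3) % 256).astype(np.uint8).reshape(64, 64, 3)
out = pixelate_region(frame.copy(), 16, 48, 16, 48)
```

    Coarser `block` sizes remove more detail (stronger privacy) at the cost of the visual context a teleoperator needs, which is exactly the privacy/task-performance trade-off the studies measure.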