
    Virtual Hand Representations to Support Natural Interaction in Immersive Environment

    Immersive Computing Technology (ICT) offers designers the unique ability to evaluate human interaction with product design concepts through the use of stereo viewing and 3D position tracking. These technologies provide designers with opportunities to create virtual simulations for numerous different applications. In order to support the immersive experience of a virtual simulation, it is necessary to employ interaction techniques that are appropriately mapped to specific tasks. Numerous methods for interacting in various virtual applications have been developed which use wands, game controllers, and haptic devices. However, if the intent of the simulation is to gather information on how a person would interact in an environment, more natural interaction paradigms are needed. The use of 3D hand models coupled with position-tracked gloves provides intuitive interaction in virtual environments. This paper presents several methods of representing a virtual hand model in the virtual environment to support natural interaction.
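    The basic mapping behind such glove-driven hand representations, placing a virtual hand model at a tracked wrist pose, can be sketched as follows (a minimal illustration, not the paper's implementation; the function name and row-major matrix layout are assumptions):

```python
def hand_model_transform(wrist_pos, wrist_rot):
    """Compose a 4x4 row-major world transform that places a virtual
    hand model at a tracked wrist pose, given a position vector and a
    3x3 rotation matrix from the tracking system. Illustrative sketch,
    not the paper's implementation."""
    # Top three rows: rotation columns plus the translation component.
    m = [[wrist_rot[r][c] for c in range(3)] + [wrist_pos[r]]
         for r in range(3)]
    # Homogeneous bottom row.
    m.append([0.0, 0.0, 0.0, 1.0])
    return m
```

    Per-finger joint angles from the glove would then be applied further down the hand model's kinematic chain in the same fashion.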

    Spatial Sound System to Aid Interactivity in a Human Centred Design Evaluation of an Aircraft Cabin Environment

    There is considerable research on the concept of 3D sound in virtual reality environments. With the growing importance of designing more realistic and immersive experiences in a Human Centred Design (HCD) approach, sound perception is believed to add an interactive element that maximizes the human perspective. In this context, the concept of an audio-visual interaction model between a passenger and a crew member in an immersive aircraft cabin environment is studied and presented in this paper. The study focuses on the design and usability of spatial sound sources as an interactive component in a regional aircraft cabin design for Human-in-the-Loop evaluation. Sound sources are placed among virtual manikins acting as passengers, with the aim of building a realistic virtual environment for the user enacting the role of a crew member. The crew member, while walking through the cabin, can orient towards and identify the position of a sound source inside the immersive cabin environment. We review 3D sound approaches and cues for sound spatialization in a virtual environment and propose that audio-visual interactivity aids immersive Human Centred Design analysis.
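    The core localization cue such a system relies on can be sketched as computing a source's azimuth relative to the listener and deriving stereo gains from it (a simplification: constant-power panning is only a crude cue, and real spatial audio engines, presumably including the one in this study, use richer cues such as HRTFs; function names here are illustrative):

```python
import math

def source_azimuth(listener_pos, listener_yaw_deg, source_pos):
    """Azimuth of a sound source relative to the listener's facing
    direction, in degrees (0 = straight ahead, positive = to the right).
    Positions are (x, z) pairs in the horizontal plane."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    world_angle = math.atan2(dx, dz)
    rel = world_angle - math.radians(listener_yaw_deg)
    # Wrap to (-180, 180].
    return (math.degrees(rel) + 180.0) % 360.0 - 180.0

def stereo_gains(azimuth_deg):
    """Constant-power stereo panning gains (left, right) from azimuth."""
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))  # -1 left .. +1 right
    theta = (pan + 1.0) * math.pi / 4.0            # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```

    A source straight ahead yields equal left/right gains; a source at 90 degrees to the right drives the right channel alone, which is the cue the crew member uses to orient towards it.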

    Understanding learning skills in online learning environments by higher education students

    Can virtual environments promote learning skills such that higher education students understand them? This paper examines the impact of new online educational scenarios on how self-learning skills are perceived. The research covered 277 higher education students grouped into classrooms, whose tutoring included an online learning component. At the end of the academic semester, students reported on a range of self-learning skills adapted to learning in virtual environments. All participants attended Social and Human Sciences course units in higher education at different institutions, respectively a State Public University and Private Polytechnic institutions. The results of the study show that virtual learning environments, anchored in a design focused on the development of skills and in a teaching model based on the principles of constructivism, autonomy and interaction, can be positive in how higher education students perceive learning skills, according to the following dimensions: Active Learning, Learning Initiative and Autonomy. The study examines the implications of the findings, from the perspective of both the practical intervention and reflection on the future of educational processes. Fundação para a Ciência e Tecnologia.

    Affordances In The Design Of Virtual Environments

    Human-computer interaction design principles largely focus on static representations and have yet to fully incorporate theories of perception appropriate for the dynamic, multimodal interactions inherent to virtual environment (VE) interaction. Theories of direct perception, in particular affordance theory, may prove particularly relevant to enhancing VE interaction design. The present research constructs a conceptual model of how affordances are realized in the natural world and how a lack of sensory stimuli may lead to realization failures in virtual environments. Implications of the model were empirically investigated by examining three affordances: passability, catchability, and flyability. The experimental design involved four factors for each of the three affordances and was implemented as a 2^(4-1) fractional factorial design of resolution IV. The results demonstrated that providing affording cues led to behavior closely in line with real-world behavior. More specifically, when given affording cues, participants tended to rotate their virtual bodies when entering narrow passageways, accurately judge balls as catchable, and fly when conditions warranted it. The results support the conceptual model and demonstrate 1) that substituting designed cues via sensory stimuli in available sensory modalities for absent or impoverished modalities may enable the perception of affordances in VEs; 2) that sensory stimuli substitutions provide potential approaches for enabling the perception of affordances in a VE which in the real world are cross-modal; and 3) that affordances relating to specific action capabilities may be enabled by designed sensory stimuli. This research lays an empirical foundation for a science of VE design based on choosing and implementing design properties so as to evoke targeted user behavior.
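    A 2^(4-1) fractional factorial design of resolution IV halves the 16 runs of a full 2^4 design by generating the fourth factor from the other three. A minimal sketch of run generation, assuming the standard generator D = ABC (the abstract does not state which generator was used):

```python
from itertools import product

def fractional_factorial_2_4_1():
    """Generate the 8 runs of a 2^(4-1) fractional factorial design
    (resolution IV) using the generator D = ABC, i.e. defining
    relation I = ABCD. Factor levels are coded -1/+1."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        d = a * b * c  # fourth factor confounded with the ABC interaction
        runs.append((a, b, c, d))
    return runs
```

    With this generator no main effect is aliased with any two-factor interaction, which is what makes the design resolution IV.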

    Human Factors Virtual Analysis Techniques for NASA's Space Launch System Ground Support using MSFC's Virtual Environments Lab (VEL)

    Using virtual environments to assess complex, large-scale human tasks provides timely and cost-effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance with human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, the Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th percentile (approx. 5 ft) female and a 95th percentile (approx. 6 ft 1 in) male. In addition, the analysis helps identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame-by-frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.
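    The boundary-manikin screening idea, checking a task against both the 5th percentile female and the 95th percentile male, can be sketched as a simple vertical reach test (the body-proportion ratios below are generic anthropometric rules of thumb for illustration only, not values from the NASA analysis, and the function names are assumptions):

```python
def within_reach(task_height_m, stature_m,
                 shoulder_ratio=0.82, arm_ratio=0.44):
    """Rough check that a work point's height falls inside an operator's
    vertical reach band. Ratios are illustrative rules of thumb
    (shoulder height and arm length as fractions of stature), not
    values from the SLS human factors assessments."""
    shoulder = stature_m * shoulder_ratio
    reach_up = shoulder + stature_m * arm_ratio
    reach_down = shoulder - stature_m * arm_ratio
    return reach_down <= task_height_m <= reach_up

# Screen one task height against both boundary manikins (statures from
# the abstract: approx. 5 ft = 1.52 m and 6 ft 1 in = 1.85 m).
ok_for_both = within_reach(1.6, 1.52) and within_reach(1.6, 1.85)
```

    A real assessment in Jack evaluates full-body posture, strength, and clearance rather than a single height band, but the accommodate-both-extremes logic is the same.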

    Modeling Three-Dimensional Interaction Tasks for Desktop Virtual Reality

    A virtual environment is an interactive, head-referenced computer display that gives a user the illusion of presence in real or imaginary worlds. The two most significant differences between a virtual environment and a more traditional interactive 3D computer graphics system are the extent of the user's sense of presence and the level of user participation that can be obtained in the virtual environment. Over the years, advances in computer display hardware and software have substantially progressed the realism of computer-generated images, which has dramatically enhanced the user's sense of presence in virtual environments. Unfortunately, comparable progress in user interaction with virtual environments has not been observed. The scope of the thesis lies in the study of human-computer interaction in a desktop virtual environment. The objective is to develop and verify 3D interaction models that can quantitatively describe users' performance for 3D pointing, steering and object pursuit tasks, and, through analysis of the interaction models and experimental results, to gain a better understanding of users' movements in the virtual environment. The approach applied throughout the thesis is a modeling methodology composed of three procedures: identifying the variables involved in modeling a 3D interaction task; formulating and verifying the interaction model through user studies and statistical analysis; and applying the model to the evaluation of interaction techniques and input devices to gain insight into users' movements in the virtual environment. In the study of 3D pointing tasks, a two-component model is used to break the tasks into a ballistic phase and a correction phase, and comparison is made between real-world and virtual-world tasks in each phase. The results indicate that temporal differences arise in both phases, but the difference is significantly greater in the correction phase. This finding inspired a methodology combining the two-component model with Fitts' law, which decomposes a pointing task into ballistic and correction phases and decreases the index of difficulty of the task during the correction phase. The methodology allows for the development and evaluation of interaction techniques for 3D pointing tasks. For 3D steering tasks, the steering law, originally proposed to model 2D steering tasks, is adapted to 3D tasks by introducing three additional variables: path curvature, orientation and haptic feedback. The new model suggests that a 3D ball-and-tunnel steering movement consists of a series of small, jerky sub-movements similar to the ballistic/correction movements observed in pointing. An interaction model is proposed and empirically verified for 3D object pursuit tasks, making use of Stevens' power law. The results indicate that the power law can be used to model all three common interaction tasks, so it may serve as a general law for modeling interaction tasks, and it also provides a way to compare the tasks quantitatively.
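    The three laws the thesis builds on can be sketched in their basic forms (coefficients a, b, k and n are fitting constants estimated from user-study data, and the 3D steering model's curvature, orientation and haptic terms are omitted here; this is a sketch of the standard formulations, not the thesis's extended models):

```python
import math

def fitts_mt(a, b, distance, width):
    """Fitts' law (Shannon formulation): predicted movement time for a
    pointing task with index of difficulty ID = log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1.0)

def steering_mt(a, b, path_length, tunnel_width):
    """Steering law for a straight tunnel of constant width:
    MT = a + b * (L / W)."""
    return a + b * (path_length / tunnel_width)

def stevens_response(k, n, intensity):
    """Stevens' power law: response magnitude = k * I^n, used in the
    thesis to model object pursuit performance."""
    return k * intensity ** n
```

    For example, halving the target width W in Fitts' law raises the index of difficulty and hence the predicted movement time, which is exactly the quantity the correction-phase methodology above tries to reduce.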

    A Simulation Approach Analyzing Random Motion Events Between A Machine And Its Operator

    This paper presents an approach for representing and analyzing random motions and hazardous events in a simulated three-dimensional workplace, providing designers and analysts with a new technique for evaluating operator-machine interaction hazards in virtual environments. The technical data in this paper are based on a project striving to reduce workers' risk of being struck by underground mining machinery in a confined space. The project's methodology includes human factors design considerations, ergonomic modeling and simulation tools, laboratory validation, and collaboration with a mining equipment manufacturer. Hazardous conditions can be analyzed in virtual environments using collision detection. By simulating an operator's random behavior and the machine's appendage velocity, researchers can accurately identify hazards and use that information to form safe design parameters for mining equipment. Analysts must be discerning with the model and not read more from the databases than what the simulation model was designed to deliver. Simulation offered an attractive approach to data gathering in that there was no need for live subjects, and logistics (test sites and the costs associated with experiments) became insignificant. Analyses of collisions versus speed, operator size, and risk behaviors demonstrated the versatility of the data obtained from the model. Preliminary results show that response time significantly affects the number of collisions experienced by the virtual subject. Simulation data also suggest that more mishaps occur with hand-on-boom-arm risk behavior.
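    The reported relationship between response time and collision count can be illustrated with a Monte Carlo sketch of the operator/boom interaction (the geometry, parameter names and values below are illustrative assumptions, not the project's validated model):

```python
import random

def simulate_collisions(trials, boom_speed, reaction_time, clearance,
                        zone_width=2.0, seed=42):
    """Monte Carlo sketch of an operator/boom hazard analysis: the
    operator stands at a random offset within the work zone, and a
    collision is scored when the boom closes the gap before the
    operator can react and step clear. All parameters are illustrative
    assumptions, not values from the mining project."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        gap = rng.uniform(0.0, zone_width)   # operator-to-boom gap (m)
        time_to_impact = gap / boom_speed    # boom sweep time (s)
        if time_to_impact < reaction_time + clearance:
            hits += 1
    return hits

# Slower reactions should yield at least as many collisions.
fast = simulate_collisions(10_000, boom_speed=1.0,
                           reaction_time=0.5, clearance=0.2)
slow = simulate_collisions(10_000, boom_speed=1.0,
                           reaction_time=1.5, clearance=0.2)
```

    Sweeping `reaction_time` and the boom speed in such a model is one way to derive the collisions-versus-speed and response-time trends the abstract describes, without live subjects.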