
    The Reality of Virtual Environments: WPE II Paper

    Recent advances in computer technology have now made it possible to create and display three-dimensional virtual environments for real-time exploration and interaction by a user. This paper surveys some of the research done in this field at such places as NASA's Ames Research Center, MIT's Media Laboratory, the University of North Carolina at Chapel Hill, and the University of New Brunswick. Limitations to the reality of these simulations are examined, focusing on input and output devices, computational complexity, and tactile and visual feedback.

    Evaluating 3D pointing techniques

    "This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term ""3D pointing"" is appropriated from analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet, several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Based on well-established methods for comparing 2D pointing interfaces this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for the direct and fair comparisons between 2D and 3D pointing interfaces. This dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates some technical considerations such as latency and device noise. Results show that the mouse outperforms (between 10% and 60%) other 3D input techniques in all tested conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor when using stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.

    Interplayable surface: an exploration on augmented GUI that co-exists with physical environments

    The main goal of this experiment-driven thesis is to envision and design an interactive GUI (graphical user interface) that coexists with physical surfaces. Based on an understanding of user behavioral patterns for accessing information in such situations, experiments and prototypes are implemented and tested with participants. In particular, to observe user behavioral patterns for augmented GUIs within certain environments and circumstances, this thesis presents several types of participatory experiments with physical GUIs. Participants were encouraged to re-create and reorganize physical GUIs according to their own situational specificity or informational tendencies. Based on the insights extracted from this research and these experiments, in the last phase I propose two thesis models for how an interactive GUI applies to a physical environment: simulation mock-ups for user scenarios of an augmented GUI, and an interactive GUI surface combined with projection mapping. Reflecting people’s behavioral patterns with augmented GUIs, the thesis models show several types of information structures and interactions. Also, in framing the overall data structure and wireframe for the thesis product model, informative affordance corresponding to users’ situational specificity is treated as a crucial design direction, actualized on an artifact in a perceptible way. Through experimentally prototyping a thesis model, I consequently aim to expand the speculative usability that interactive GUIs will offer in the near future.

    Visualising software in cyberspace

    The problems of maintaining software systems are well documented. The increasing size and complexity of modern software serves only to worsen matters. Software maintainers are typically confronted with very large and very complex software systems, of which they may have little or no prior knowledge. At this stage they will normally have some maintenance task to perform, though possibly little indication of where or how to start. They need to investigate and understand the software to some extent in order to begin maintenance. This understanding process is termed program comprehension. There are various theories on program comprehension, many of which put emphasis on the construction of a mental model of the software within the mind of the maintainer. These same theories hypothesise a number of techniques employed by the maintainer for the creation and revision of this mental model. Software visualisation attempts to provide tool support for generating, supplementing and verifying the maintainer’s mental model. The majority of software visualisations to date have concentrated on producing two-dimensional representations and animations of various aspects of a software system. Very little work has been performed previously regarding the issues involved in visualising software within a virtual reality environment. This research represents a significant first step into this exciting field and offers insight into the problems posed by this new medium. This thesis provides an identification of the possibilities afforded by 3D graphics for software visualisation and program comprehension. It begins by defining seven key areas of 3D software visualisation, followed by the definition of two terms, visualisation and representation. These two terms provide a conceptual division between a visualisation and the elements of which it is comprised. This division enables improved discussion of the properties of a 3D visualisation and particularly the identification of properties that are desirable for a successful visualisation. A number of such desirable properties are suggested for both visualisations and representations, providing support for the design and evaluation of a 3D software visualisation system. Also presented are a number of prototype visualisations, each providing a different approach to the visualisation of a software system. The prototypes help demonstrate the practicalities and feasibility of 3D software visualisation. Evaluation of these prototypes is performed using a variety of techniques, the results of which emphasise the fact that there is substantial potential for the application of 3D graphics and virtual reality to software visualisation.
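    The thesis's division between a visualisation and its representations can be made concrete with a small assumed example: a representation is the rule that maps software facts onto geometry, independent of the 3D scene the resulting glyphs are placed in. All names and scaling factors below are hypothetical illustrations, not the thesis's actual prototypes.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ClassMetrics:
        name: str
        lines_of_code: int
        num_methods: int
        num_callers: int

    def to_glyph(m: ClassMetrics) -> dict:
        """One possible representation: map per-class metrics onto a
        3D box (height ~ size, footprint ~ interface, colour ~ fan-in)."""
        return {
            "label": m.name,
            "height": m.lines_of_code / 100.0,
            "width": 0.5 + 0.1 * m.num_methods,
            "red_channel": min(1.0, m.num_callers / 20.0),
        }

    print(to_glyph(ClassMetrics("Parser", 420, 12, 7)))
    ```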

    Designing multi-sensory displays for abstract data

    The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system. Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction. For example, interfaces can be designed that immerse the user in a 3D space and provide visual, auditory and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction in an attempt to increase the human-to-computer bandwidth. This approach may help the user understand large information spaces and find patterns in them. However, while the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and needs to carefully consider human perceptual capabilities, yet we interact with the real world every day in a multi-sensory way. Metaphors can describe mappings between the natural world and an abstract information space. This thesis develops a division of the multi-sensory design space called the MS-Taxonomy. The MS-Taxonomy provides a concept map of the design space based on temporal, spatial and direct metaphors. The detailed concepts within the taxonomy allow for discussion of low-level design issues. Furthermore, the concepts abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy provides a categorisation of multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options. It is also useful to have guidelines to follow, and a process to describe the design steps. This thesis uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices. The MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays. A detailed case study is used to validate the MS-Taxonomy, the MS-Guidelines and the MS-Process. The case study explores the design of multi-sensory displays within a domain where users wish to explore abstract data for patterns. This area is called Technical Analysis and involves the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, some new multi-sensory displays are designed for pattern detection in stock market data. The outcome from the case study includes some novel haptic-visual and auditory-visual designs that are prototyped and evaluated.
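    The concrete designs of the case study are not reproduced in this abstract, but the core idea of a direct temporal metaphor can be sketched: each data attribute is mapped onto a perceptual parameter of one sensory channel. The example below, with assumed attribute names and parameter ranges, maps stock price to pitch and trading volume to loudness.

    ```python
    def map_to_audio(prices, volumes,
                     pitch_range=(220.0, 880.0), gain_range=(0.2, 1.0)):
        """Map normalised stock data onto auditory display parameters.
        Returns one (frequency_hz, gain) pair per trading day."""
        def normalise(xs):
            lo, hi = min(xs), max(xs)
            return [(x - lo) / (hi - lo) if hi > lo else 0.5 for x in xs]
        p_lo, p_hi = pitch_range
        g_lo, g_hi = gain_range
        return [(p_lo + p * (p_hi - p_lo), g_lo + v * (g_hi - g_lo))
                for p, v in zip(normalise(prices), normalise(volumes))]

    # Illustrative five-day price and volume series.
    prices = [101.2, 103.5, 102.8, 106.1, 104.9]
    volumes = [1.2e6, 2.5e6, 1.8e6, 3.1e6, 2.2e6]
    for freq, gain in map_to_audio(prices, volumes):
        print(f"{freq:6.1f} Hz at gain {gain:.2f}")
    ```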

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened them up to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the uptake of immersive technologies has been slow, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel with the evolution of VR headsets there has been that of 360° cameras, which can now instantly acquire very high-resolution photographs and videos in stereoscopic 3D (S3D). 360° images are innately suited to VR headsets, where the captured view can be observed and explored through natural head rotation. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations, which we call photo-based VR. This methodology combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. This PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and the associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations are affected by system factors (camera- and display-related) and how they influence human factors such as realism, presence, and emotions. Particular attention is paid to realistic depth perception, in support of which we develop targeted photo-based VR solutions intended to provide users with a correct perception of spatial dimensions and object size; we call this true-dimensional visualization. The presented work contributes to the unexplored fields of photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. The findings of this thesis have been partly presented in scientific publications: five conference papers in Springer and IEEE symposia [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6].
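    The abstract does not spell out how true-dimensional visualization is computed, but the underlying geometry is the requirement that a rendered object subtend the same visual angle as its real counterpart. A minimal sketch of that computation follows; the 20 pixels-per-degree figure is an assumed headset property, not one taken from the thesis.

    ```python
    import math

    def visual_angle_deg(object_size_m, distance_m):
        """Visual angle subtended by an object of a given physical
        size seen from a given distance."""
        return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

    def rendered_pixels(object_size_m, distance_m, pixels_per_degree):
        """Approximate pixel footprint needed for the object to subtend
        the same visual angle in the headset as in the real scene."""
        return visual_angle_deg(object_size_m, distance_m) * pixels_per_degree

    # Illustrative: a 1.8 m tall person seen from 5 m on a ~20 ppd headset.
    print(f"{visual_angle_deg(1.8, 5.0):.1f} deg, "
          f"{rendered_pixels(1.8, 5.0, 20):.0f} px tall")
    ```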

    How are you Feeling? Inferring Emotions through Movements in the Metaverse

    Metaverses are immersive virtual worlds in which people interact as avatars. There is emerging interest in understanding how metaverse users behave and perceive activities and tasks, yet our understanding of users’ behavior within metaverses remains limited. This study examines the role of emotions in the movement of individuals. We therefore implement a metaverse setting using virtual reality technology and development tools. In our study, we manipulated negative emotions and tracked the movements of our participants. We show how negative emotion influences movements in a metaverse setting. Based on a literature review, we select and calculate movement features to train a support vector machine. As a result, we present a novel way to infer the negative emotions of metaverse users, which will help create more engaging and immersive experiences that cater to users’ emotions and behaviors. Our study provides preliminary evidence for the potential utilization of movement data in the metaverse.
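    A minimal sketch of the classification step described above, assuming scikit-learn; the feature set, labels, and synthetic data are illustrative stand-ins for the study's actual tracked movements.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical per-session movement features: [mean head speed,
    # head-rotation variance, mean hand speed, path length].
    X = rng.normal(size=(40, 4))
    y = np.tile([0, 1], 20)  # 0 = neutral, 1 = negative emotion (assumed)
    X[y == 1] += 1.0         # synthetic group difference for the demo

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:30], y[:30])
    print("Held-out accuracy:", clf.score(X[30:], y[30:]))
    ```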

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial-gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.