56 research outputs found

    A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories

    There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rise in popularity and market expansion of VR technology in the past few years, a variety of consumer VR devices have boosted educators' and researchers' interest in using these devices for practicing engineering and science laboratory experiments. However, little is known about how well such devices are suited to active learning in a laboratory environment. This research addresses this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches their design and capability requirements for their virtual laboratory blueprint. Furthermore, a use case of the framework is demonstrated by surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), and by considering the interaction techniques each VR device offers for the desired laboratory task. To validate the framework, a research study compares these five VR devices and investigates which device provides the overall best fit for the 3D virtual laboratory content that we implemented, based on interaction level, usability and performance effectiveness.
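    The abstract does not spell out how the framework scores devices; as a minimal sketch of one common way such a utility framework can be operationalised, the Python snippet below ranks candidate VR devices by a weighted sum of criterion scores. All device names, criteria, weights and raw scores are hypothetical placeholders, not values from the study.

# Hypothetical weighted-utility ranking of VR devices for a virtual laboratory.
# Device names, criteria, weights and raw scores are illustrative only.

CRITERIA_WEIGHTS = {          # importance assigned by the educator (sums to 1.0)
    "immersion": 0.30,
    "interaction": 0.25,
    "cost": 0.20,             # higher score = cheaper
    "availability": 0.15,
    "usability": 0.10,
}

DEVICES = {                   # raw scores on a 1-5 scale (hypothetical)
    "cardboard_viewer": {"immersion": 2, "interaction": 1, "cost": 5, "availability": 5, "usability": 4},
    "standalone_hmd":   {"immersion": 4, "interaction": 4, "cost": 3, "availability": 4, "usability": 4},
    "tethered_hmd":     {"immersion": 5, "interaction": 5, "cost": 2, "availability": 3, "usability": 3},
}

def utility(scores: dict[str, int]) -> float:
    """Weighted sum of criterion scores on the shared 1-5 scale."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

if __name__ == "__main__":
    ranked = sorted(DEVICES.items(), key=lambda kv: utility(kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name:18s} utility = {utility(scores):.2f}")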

    Tangible User Interfaces and Metaphors for 3D Navigation

    The most fundamental and common 3D interaction is the control of the virtual camera or viewpoint, commonly referred to as navigation. The navigational requirements of controlling multiple degrees of freedom and maintaining adequate spatial awareness pose significant challenges to many users. Many tasks additionally demand a large share of the user's cognitive effort for non-navigational aspects. Therefore, new solutions that are simple and naturally efficient are in high demand. These major challenges to 3D navigation have yet to be satisfactorily addressed, and as a result, no suitable unified 3D interaction technique or metaphor has yet been established. We present a new domain- and task-independent 3D navigation metaphor, Navigational Puppetry, which we intend to be a candidate for the navigational portion of a unifying 3D interaction metaphor. The major components of the metaphor - the puppet, puppeteer, stage, and puppet-view - enable a new meta-navigational perspective and provide the user with a graspable navigational avatar, within a multiple-view perspective, that allows them to 'reach' into the virtual world and manipulate the viewpoint directly. We position this metaphor as a distinct articulation of the front wave of a puppetry-related trend in recent 3D navigation solutions. The metaphor was implemented in a tangible user interface prototype called the Navi-Teer. Two usability studies and a unique spatial audio experiment were completed to observe and demonstrate, respectively, the metaphor's benefits of tactile intimacy, spatial orientation, easy capture of complex input and support for collaboration.

    Immersive Telerobotic Modular Framework using stereoscopic HMDs

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio and Head Mounted Displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. There is a gap in current state-of-the-art solutions, since telerobotics has not, in general, benefited from recent developments in control technology and human-computer interfaces. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic-based controls or movement- and gesture-based input, which feel more natural and translate more naturally into the system.
    In this work we propose an alternative approach to common teleoperation methods, such as those found, for instance, in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Moreover, since this is a new and growing field, we are also developing a modular framework capable of being extended with different robots, in order to test different case studies and provide researchers with an extensible platform. We claim that with immersive solutions the operator of a telerobotics system has a more intuitive perception of the remote environment and is less prone to errors induced by incorrect perception of, and interaction with, the teleoperated robot. We believe that the operator's depth perception and situational awareness are significantly improved when using immersive solutions, and that performance, both in task completion time and in the successful identification of objects of interest in the remote environment, is also enhanced.
    We have developed a low-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the robot side. This solution makes it possible to use the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project also features three main interaction modules:
    * a module that supports a head-mounted display with head tracking in the operator environment;
    * streaming of stereoscopic vision through Android with software synchronization;
    * a module that enables the operator to control the robot with positional tracking.
    On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in the last years, but we have also seen the rise of low-cost immersive technologies such as the Oculus Rift DK2, Google Cardboard or Leap Motion. These cost-effective hardware solutions, combined with the advances in video and audio streaming provided by WebRTC technologies, driven mostly by Google, make the development of a real-time software solution possible. There is currently a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low cost, while at the same time raising the bar in terms of performance and technical specifications for this kind of platform.
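    The framework's internal interfaces are not detailed in the abstract; as a minimal sketch of the plug-in idea it describes (different robots extending the same operator-side head-tracking and positional-tracking modules), the Python interface below uses hypothetical class and method names and a stand-in robot that only logs commands.

# Minimal sketch of a modular telerobotics plug-in interface; class and method
# names are hypothetical and only illustrate extending the framework with new robots.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # radians, taken from the operator's HMD head tracking
    pitch: float
    roll: float

class RobotAdapter(ABC):
    """Each robot added to the framework implements this adapter."""

    @abstractmethod
    def move(self, forward: float, turn: float) -> None:
        """Drive command derived from the operator's positional tracking."""

    @abstractmethod
    def point_camera(self, pose: HeadPose) -> None:
        """Aim the stereoscopic camera rig to follow the operator's head."""

class LoggingRobot(RobotAdapter):
    """Stand-in robot that just logs commands; a real adapter would talk to hardware."""

    def move(self, forward: float, turn: float) -> None:
        print(f"drive forward={forward:+.2f} turn={turn:+.2f}")

    def point_camera(self, pose: HeadPose) -> None:
        print(f"camera yaw={pose.yaw:+.2f} pitch={pose.pitch:+.2f}")

if __name__ == "__main__":
    robot: RobotAdapter = LoggingRobot()
    robot.point_camera(HeadPose(yaw=0.1, pitch=-0.05, roll=0.0))
    robot.move(forward=0.5, turn=0.0)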

    Review of three-dimensional human-computer interaction with a focus on the Leap Motion Controller

    Modern hardware and software development has led to an evolution of user interfaces, from command-line interfaces to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on the Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques, with a focus on the specific field of three-dimensional interaction. This includes an overview of currently available interaction devices, their areas of application and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.
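    As a simple, hedged illustration of the kind of gesture recognition such interfaces rely on (written independently of the actual Leap Motion SDK), the sketch below classifies a horizontal swipe from a short sequence of palm positions; the input format and thresholds are hypothetical.

# Toy swipe-gesture classifier over a sequence of 3D palm positions (metres).
# The input format and thresholds are hypothetical; a real system would read
# frames from a hand-tracking device such as the Leap Motion Controller.

def classify_swipe(palm_positions: list[tuple[float, float, float]],
                   min_distance: float = 0.15) -> str:
    """Return 'swipe_left', 'swipe_right' or 'none' from the start/end displacement."""
    if len(palm_positions) < 2:
        return "none"
    x0, _, _ = palm_positions[0]
    x1, _, _ = palm_positions[-1]
    dx = x1 - x0
    if dx > min_distance:
        return "swipe_right"
    if dx < -min_distance:
        return "swipe_left"
    return "none"

if __name__ == "__main__":
    # Hypothetical track: the palm moves roughly 20 cm to the right across the frames.
    track = [(0.00, 0.20, 0.05), (0.07, 0.21, 0.05), (0.14, 0.20, 0.06), (0.21, 0.20, 0.06)]
    print(classify_swipe(track))  # -> swipe_right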

    Reanimating cultural heritage through digital technologies

    Digital technologies are becoming extremely important for web-based cultural heritage applications. This thesis presents novel digital technology solutions to 'access and interact' with digital heritage objects and collections. These solutions utilize service orientation (web services), workflows, and social networking and Web 2.0 mashup technologies to innovate the creation, interpretation and use of collections dispersed in a global museumscape, where community participation is achieved through social networking. They are embedded in a novel concept called Digital Library Services for Playing with Shared Heritage (DISPLAYS). DISPLAYS is concerned with creating tools and services to implement a digital library system that allows the heritage community and museum professionals alike to create, interpret and use digital heritage content in visualization and interaction environments using web technologies based on social networking. In particular, this thesis presents a specific implementation of DISPLAYS called the Reanimating Cultural Heritage system, which is modelled on the five main functionalities or services defined in the DISPLAYS architecture for handling digital heritage objects: content creation, archival, exposition, presentation and interaction. The main focus of this thesis is the design of the Reanimating Cultural Heritage system's social networking functionality, which provides an innovative solution for integrating community access and interaction with the Sierra Leone digital heritage repository, composed of collections from the British Museum, Glasgow Museums and Brighton Museum and Art Gallery. The novel use of Web 2.0 mashups in this digital heritage repository also allows these museum collections to be merged seamlessly with user- or community-generated content, while preserving the quality of the museum collections data. Finally, this thesis tests and evaluates the usability of the Reanimating Cultural Heritage social networking system, in particular the suitability of the digital technology solution deployed. Testing is performed with a group of users, and the results obtained are presented.
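    The mashup mechanics are not described in the abstract; as a rough sketch of the underlying idea (merging curated museum records with community-generated annotations while leaving the curated data untouched), consider the following Python snippet. The record structure and all field values are hypothetical.

# Sketch of combining curated museum records with community annotations while
# preserving the original collection data; record structure is hypothetical.

museum_records = [
    {"id": "SL-001", "title": "Carved wooden figure", "museum": "British Museum"},
    {"id": "SL-002", "title": "Woven textile fragment", "museum": "Glasgow Museums"},
]

community_annotations = [
    {"object_id": "SL-001", "user": "visitor42", "note": "Similar figures appear in harvest ceremonies."},
]

def build_mashup(records, annotations):
    """Return a combined view: curated fields are copied unchanged, notes are appended."""
    notes_by_object = {}
    for a in annotations:
        notes_by_object.setdefault(a["object_id"], []).append(a["note"])
    return [
        {**record, "community_notes": notes_by_object.get(record["id"], [])}
        for record in records
    ]

if __name__ == "__main__":
    for entry in build_mashup(museum_records, community_annotations):
        print(entry["id"], entry["title"], "-", len(entry["community_notes"]), "note(s)")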

    "Enriching 360-degree technologies through human-computer interaction: psychometric validation of two memory tasks"

    This doctoral dissertation explores the domain of neuropsychological assessment, with the objective of gaining a comprehensive understanding of an individual's cognitive functioning and detecting possible impairments. Traditional assessment tools, while valuable, frequently lack ecological validity when evaluating memory, as they predominantly concentrate on short-term, regulated tasks. To overcome this constraint, immersive technologies, specifically virtual reality and 360° videos, have surfaced as promising instruments for augmenting the ecological validity of cognitive assessments. This work examines the potential advantages of immersive technologies, particularly 360° videos, in enhancing memory evaluation. First, a comprehensive overview is provided of contemporary virtual reality tools employed in the assessment of memory, as well as their convergence with conventional assessment measures. The present study then utilizes cluster and network analysis techniques to categorize 360° videos according to their content and applications, thereby offering significant insights into the potential of this nascent medium. It then introduces a novel platform, Mindscape, which aims to address the existing technological gap, making it more accessible for clinicians and researchers to develop cognitive tasks within immersive environments. The thesis concludes with the psychometric validation of two memory tasks, developed specifically with Mindscape to assess episodic and spatial memory. The findings demonstrate disparities in cognitive performance between individuals diagnosed with Mild Cognitive Impairment and those without cognitive impairments, underscoring the interrelated nature of cognitive processes and the promising prospects of virtual reality technology in improving the authenticity of real-world experiences. Overall, this dissertation responds to the demand for practical and ecologically valid neuropsychological assessments within the dynamic field of neuropsychology by integrating user-friendly platforms and immersive cognitive tasks into its methodology. By highlighting a shift in the field of neuropsychology towards prioritizing functional and practical assessments over theoretical frameworks, this work indicates a changing perspective within the discipline. This study highlights the potential of comprehensive and purpose-oriented assessment methods in cognitive evaluations, emphasizing the ongoing significance of research in fully comprehending the capabilities of immersive technologies.
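    The abstract mentions cluster analysis as the technique used to categorize 360° videos by content; a minimal sketch of such a step, assuming hypothetical feature vectors extracted for each video (the actual features and data are not given in the abstract), could use k-means as below.

# Minimal k-means clustering of 360-degree videos from hypothetical feature
# vectors (e.g. indoor/outdoor score, motion level, number of actors).
import numpy as np
from sklearn.cluster import KMeans

# Each row is one video; the features and values are illustrative only.
features = np.array([
    [0.9, 0.2, 1],   # calm indoor scene, one actor
    [0.8, 0.3, 2],
    [0.1, 0.8, 4],   # busy outdoor scene, several actors
    [0.2, 0.7, 5],
    [0.5, 0.5, 2],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for video_idx, cluster in enumerate(labels):
    print(f"video {video_idx} -> cluster {cluster}")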

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in acquiring and processing geospatial data, in order to start up a team of "volunteer mappers". Indeed, the project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatics methods and techniques, such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment, and also provide opportunities for social interaction in multi-player environments. Games are also increasingly used in education to motivate and engage young people. It is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period, starting in the early 1990s, that has investigated gaze-based interaction techniques intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these, overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and on-line virtual worlds. Different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase of the work then studied the applicability of the research findings thus far to groups of people with motor impairments, and in particular the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing tasks in specific types of games, and how these can be adapted to the differing abilities of individuals with motor impairments.
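    One common way to compensate for the gaze-pointing inaccuracy mentioned above is to snap the gaze point to the nearest on-screen target and require a dwell time before selection; the sketch below illustrates that general idea with hypothetical target positions, snap radius and dwell threshold (it is not the thesis's own implementation).

# Sketch of dwell-based gaze selection with snapping to the nearest target.
# Target layout, snap radius and dwell time are hypothetical.
import math

TARGETS = {"open": (100, 80), "save": (300, 80), "close": (500, 80)}  # pixel centres
SNAP_RADIUS = 60      # max distance (px) at which a gaze sample snaps to a target
DWELL_SAMPLES = 30    # consecutive samples on one target needed to select it

def nearest_target(gaze_x: float, gaze_y: float):
    """Return the target within SNAP_RADIUS closest to the gaze point, or None."""
    best, best_d = None, SNAP_RADIUS
    for name, (tx, ty) in TARGETS.items():
        d = math.hypot(gaze_x - tx, gaze_y - ty)
        if d <= best_d:
            best, best_d = name, d
    return best

def dwell_select(gaze_samples):
    """Yield a target name once the gaze has dwelt on it for DWELL_SAMPLES samples."""
    current, count = None, 0
    for x, y in gaze_samples:
        target = nearest_target(x, y)
        if target is not None and target == current:
            count += 1
            if count == DWELL_SAMPLES:
                yield target
        else:
            current, count = target, 1

if __name__ == "__main__":
    # Hypothetical noisy gaze stream hovering near the "save" button.
    samples = [(300 + (i % 5) - 2, 80 + (i % 3) - 1) for i in range(40)]
    print(list(dwell_select(samples)))  # -> ['save']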
