
    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system in which multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and receive tactile feedback. MS2 supports walking in physical space by tracking each user's skeleton in real time, and lets users feel the virtual world through passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users can walk freely while wearing a head-mounted device, avoid obstacles such as walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users see only a representation of their hands floating in front of the camera from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
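The abstract describes building the virtual world on top of a 3D scan of the real one so that tracked physical positions map onto virtual ones. A minimal sketch of that kind of correspondence, assuming a planar rigid alignment between the two spaces; all function names are illustrative, not from the paper:

```python
import math

def make_transform(theta, tx, ty):
    """2D rigid transform (rotation theta, translation tx, ty) that
    aligns the tracked physical space with the scanned virtual space."""
    c, s = math.cos(theta), math.sin(theta)
    return (c, s, tx, ty)

def physical_to_virtual(transform, x, y):
    """Map a tracked physical position (e.g. a skeleton joint or a
    graspable object) into virtual-world coordinates."""
    c, s, tx, ty = transform
    return (c * x - s * y + tx, s * x + c * y + ty)

# With an identity rotation the mapping is a pure shift:
t = make_transform(0.0, 2.0, -1.0)
print(physical_to_virtual(t, 1.0, 1.0))  # (3.0, 0.0)
```

Because the same transform is applied to every joint and object, touching a physical chair places the user's virtual hand on the corresponding virtual chair, which is what makes passive haptics work.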

    Examining the role of smart TVs and VR HMDs in synchronous at-a-distance media consumption

    This article examines synchronous at-a-distance media consumption from two perspectives: how it can be facilitated using existing consumer displays (TVs combined with smartphones), and how it could be supported by imminently available consumer displays (virtual reality (VR) HMDs combined with RGBD sensing). First, we discuss results from an initial evaluation of CastAway, a synchronous shared at-a-distance smart TV system. Through week-long in-home deployments with five couples, we gain formative insights into the adoption and usage of at-a-distance media consumption and into how couples communicated during it. We then examine, in a laboratory study of 12 pairs, how the imminent availability and potential adoption of consumer VR HMDs could affect preferences for how synchronous at-a-distance media consumption is conducted, by enhancing media immersion and supporting embodied telepresence for communication. Finally, we discuss the implications these studies have for the near future of consumer synchronous at-a-distance media consumption. Together, these studies begin to explore a design space covering the varying ways in which at-a-distance media consumption can be supported and experienced (through music, TV content, augmenting existing TV content for immersion, and immersive VR content), the factors that might influence usage and adoption, and the implications for supporting communication and telepresence during media consumption.

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations for use in VEs is usually a tedious process that requires either manual modelling of environments or specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed through dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding spatial representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras, with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context.
    These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
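Embedding a video in a panorama requires mapping view directions from the video's camera into panorama pixel coordinates. A minimal sketch of that mapping for an equirectangular panorama; the coordinate conventions and names are assumptions for illustration, not the thesis's actual implementation:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view direction (camera space: x right, y up,
    z forward) to pixel coordinates in an equirectangular panorama
    of size width x height."""
    lon = math.atan2(dx, dz)                   # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, dy)))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return (u, v)

# The forward direction lands at the panorama centre:
print(direction_to_equirect(0.0, 0.0, 1.0, 2048, 1024))  # (1024.0, 512.0)
```

Applying this to the corner directions of a video's viewing frustum gives the panorama region where the video should be composited, which is what spatially relates several videos in one panoramic context.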

    Design of an autonomous telepresence mobile robot

    The recent rise in tele-operated autonomous mobile vehicles calls for a seamless control architecture that reduces the learning curve both when the platform is functioning autonomously (without active supervisory control) and when it is tele-operated. Conventional robot platforms usually solve only one of these two problems. This work develops a low-cost mobile base for teleoperation using the Robot Operating System (ROS) middleware. The three-layer architecture introduced adds or removes operator complexity: the lowest layer provides mobility and robot awareness; the second layer provides usability; the upper layer provides interactivity. A novel interactive control is presented that combines operator intelligence and skill with the robot's autonomous intelligence, enabling the mobile base to respond to expected events and actively react to unexpected events. The experiments conducted in the robot laboratory summarise the advantages of using such a system.
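The interactive control described above blends the operator's command with the robot's autonomous behaviour. A minimal sketch of one common way to do such shared control, as a linear arbitration weighted by obstacle proximity; the functions and the safety threshold are illustrative assumptions, not the thesis's actual controller:

```python
def blend_command(operator_v, autonomous_v, alpha):
    """Linear arbitration between operator and autonomous velocity
    commands; alpha = 1.0 gives the operator full authority."""
    return alpha * operator_v + (1.0 - alpha) * autonomous_v

def authority(obstacle_dist, safe_dist=1.0):
    """Hand authority to the autonomous layer as obstacles get close."""
    return max(0.0, min(1.0, obstacle_dist / safe_dist))

# Far from obstacles the operator command passes through unchanged:
print(blend_command(0.5, 0.0, authority(2.0)))  # 0.5
# Right next to an obstacle the autonomous stop command wins:
print(blend_command(0.5, 0.0, authority(0.0)))  # 0.0
```

This kind of arbitration lets the base follow the operator during normal driving (expected events) while the autonomous layer can override to react to unexpected ones.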

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average lifespan of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment, with the aim of assisting hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented, in which mechanical, electrical, motor-control and kinematics solutions are examined in detail. Secondly, a new method is proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot can distinguish between people and the background. The fourth contribution integrates geometric collision prediction into stabilised dynamic navigation methods, optimising the navigation ability to update real-time path planning in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution that combines infra-red eye tracking and depth sensing.
    The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots into hospital environments.
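The second contribution classifies flooring types from mechanical vibrations using machine learning. A minimal sketch of the general idea, using a single RMS-amplitude feature and a nearest-centroid rule; the feature, the classifier, and the centroid values are all illustrative assumptions, not the thesis's actual method:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one vibration window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def classify(window, centroids):
    """Nearest-centroid classification of a vibration window by its
    RMS feature; centroids maps floor type -> typical RMS value."""
    f = rms(window)
    return min(centroids, key=lambda label: abs(centroids[label] - f))

# Illustrative centroids learned from labelled drive-over recordings:
centroids = {"carpet": 0.1, "tile": 0.5}
print(classify([0.09, -0.11, 0.10, -0.08], centroids))  # carpet
```

A real pipeline would use richer features (e.g. spectral energy bands) and a trained classifier, but the structure is the same: window the vibration signal, extract features, and assign the nearest learned floor class.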

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We discuss a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To address this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices facilitating this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience.
    We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye level of the viewer; the streamer can place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred our prototype over simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.

    Telepresence learning environments for opera singing: a case study

    The present study analyzes the data obtained in the execution of the Opera eLearning project, a multidisciplinary effort to develop a solution for graduate-level opera singing distance lessons, using high bandwidth to deliver a quality audio and video experience that has been evaluated by singing teachers, chorus and orchestra directors, singers and other professional musicians. The research work includes the phases of design, execution and evaluation of pilot tests, followed by further development and execution of several experimental exercises with the system, all of them carried out between July 2008 and April 2009. This is an empirical research project, an exploratory case study that has provided enough data to arrive at a sustainable model for a telepresence learning environment. Different usability methods have been implemented in order to assure users of the quality of the product. The main objective is to determine whether the proposed system or artifact can be used to deliver a complete remote singing class at a higher-education level; for that purpose, we have defined several research categories that describe the usability of the system in multiple dimensions. We have used "design as research" approaches to promote innovation in the technological area. The theoretical framework draws on a wide variety of fields, from acoustics, physics, music and professional singing to telecommunications and multimedia technology. However, the common thread and central issue under analysis is distance education, through the construction of a remote learning system.
    We have also included the corresponding justification of the scientific methodology employed.