
    Reinventing a teleconferencing system

    Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2001. Includes bibliographical references (p. 67-71). By Xin Wang.
    In looking forward to more natural communication, we can anticipate that the teleconferencing system of the future will enable participants at distant locations to share the same virtual space. The visual object of each participant can be transmitted to the other sites and rendered from an individual perspective. This thesis presents X-Conference, an effort to reinvent the teleconferencing system toward the concept of "3-D Virtual Teleconferencing." Several aspects are explored. A multiple-camera calibration approach is implemented and employed to blend the real view and the virtual view effectively. An individualized 3-D head object is built semi-automatically by mapping real texture onto a globally modified generic model. Head motion parameters are extracted by tracking artificial and/or facial features. Without using an articulation model, facial animation is partially achieved through texture displacement. UDP/IP multicast and TCP/IP unicast are both utilized to implement the networking scheme.
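    As a purely illustrative aside (not part of the thesis), the split between UDP/IP multicast and TCP/IP unicast mentioned above can be sketched as follows: loss-tolerant, frequently updated data such as head-motion parameters go out to a multicast group, while session setup and control travel over a reliable unicast connection. All addresses, ports, and message formats below are placeholder assumptions.

```python
# Minimal sketch, not the thesis implementation: UDP multicast for frequent,
# loss-tolerant updates (e.g. head-motion parameters) and TCP unicast for
# reliable control messages. Group, ports, and payload format are placeholders.
import json
import socket
import struct

MCAST_GROUP = "239.1.2.3"     # placeholder multicast group
MCAST_PORT = 5004             # placeholder media/motion port
CONTROL_HOST = "127.0.0.1"    # placeholder conference control server
CONTROL_PORT = 6000

def open_multicast_sender(ttl: int = 1) -> socket.socket:
    """UDP socket configured for multicast transmission."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", ttl))
    return sock

def send_motion_update(sock: socket.socket, yaw: float, pitch: float, roll: float) -> None:
    """Send one head-pose update to every site that joined the multicast group."""
    payload = json.dumps({"yaw": yaw, "pitch": pitch, "roll": roll}).encode()
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))

def send_control_message(message: dict) -> None:
    """Reliable unicast for setup/negotiation; assumes a control server is listening."""
    with socket.create_connection((CONTROL_HOST, CONTROL_PORT)) as tcp:
        tcp.sendall(json.dumps(message).encode() + b"\n")

if __name__ == "__main__":
    udp = open_multicast_sender()
    send_motion_update(udp, yaw=0.10, pitch=-0.05, roll=0.0)  # fire-and-forget multicast
```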

    Mutual Gaze Support in Videoconferencing Reviewed

    Videoconferencing allows geographically dispersed parties to communicate by simultaneous audio and video transmission. It is used in a variety of application scenarios with a wide range of coordination needs and efforts, such as private chat, discussion meetings, and negotiation tasks. In particular, in scenarios requiring certain levels of trust and judgement, non-verbal communication cues are highly important for effective communication. Mutual gaze support plays a central role in these high-coordination-need scenarios but generally lacks adequate technical support in videoconferencing systems. In this paper, we review technical concepts and implementations for mutual gaze support in videoconferencing, classify them, evaluate them against a defined set of criteria, and give recommendations for future developments. Our review gives decision makers, researchers, and developers a tool to systematically apply and further develop videoconferencing systems in serious settings requiring mutual gaze. This should lead to well-informed decisions regarding the use and development of this technology and to a more widespread exploitation of the benefits of videoconferencing in general. For example, if videoconferencing systems supported high-quality mutual gaze in an easy-to-set-up and easy-to-use way, we could hold more effective and efficient recruitment interviews, court hearings, or contract negotiations.

    Advances in Spatially Faithful (3D) Telepresence

    The benefits of AR technologies have been well proven in collaborative industrial applications, for example in remote maintenance and consultancy. Benefits may also be high in telepresence applications, where virtual and mixed reality technologies (nowadays often referred to as extended reality, XR) are used to share information or objects over a network. Since the 1990s, the technical enablers for advanced telepresence solutions have developed considerably. At the same time, the importance of remote technologies has grown immensely due to the general disruption of work, demands to reduce travel and CO2 emissions, and the need to prevent pandemics. An advanced 3D telepresence solution benefits from using XR technologies. Particularly interesting are solutions based on head-mounted displays (HMDs) or glasses-type near-eye displays (NEDs). However, as AR/VR glasses supporting natural occlusion and accommodation are still missing from the market, a good alternative is to use screen displays in new ways that better support, for example, virtual meeting geometries and other important cues for 3D perception. In this article, researchers Seppo Valli, Mika Hakkarainen, and Pekka Siltanen from VTT Technical Research Centre of Finland describe the status, challenges, and opportunities of both glasses-based and screen-based 3D telepresence. The writers also specify an affordable screen-based solution with improved immersiveness, naturalness, and efficiency, enhanced by applying XR technologies.
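    The idea of a shared virtual meeting geometry lends itself to a small worked example. The sketch below is an assumption-laden illustration, not the system described in the article: it seats N participants evenly around a virtual round table and computes the angle at which a remote participant should appear on a local screen so that spatial cues stay consistent across sites.

```python
# Illustrative only: place N participants evenly around a circular virtual table
# and compute where a remote participant should appear on a local screen.
import math
from typing import List, Tuple

def round_table_positions(n: int, radius_m: float = 1.0) -> List[Tuple[float, float]]:
    """(x, y) seat positions, evenly spaced on a circle of the given radius."""
    return [(radius_m * math.cos(2 * math.pi * i / n),
             radius_m * math.sin(2 * math.pi * i / n)) for i in range(n)]

def apparent_angle_deg(viewer: Tuple[float, float],
                       target: Tuple[float, float],
                       viewer_facing: Tuple[float, float]) -> float:
    """Signed angle (degrees) of `target` relative to the viewer's facing direction;
    a screen-based renderer can use it to offset the remote participant left or right."""
    dx, dy = target[0] - viewer[0], target[1] - viewer[1]
    angle_to_target = math.atan2(dy, dx)
    facing = math.atan2(viewer_facing[1], viewer_facing[0])
    return math.degrees((angle_to_target - facing + math.pi) % (2 * math.pi) - math.pi)

if __name__ == "__main__":
    seats = round_table_positions(4)
    facing_centre = (-seats[0][0], -seats[0][1])  # participant 0 faces the table centre
    # Participant 2 sits directly opposite, so it should appear straight ahead (~0 degrees).
    print(round(apparent_angle_deg(seats[0], seats[2], facing_centre), 1))
```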

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station's technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in computing power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and the organizational impacts of information-intensive environments.

    Virtual acoustics displays

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
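    To make the notion of an auditory symbology concrete, the hypothetical sketch below maps display events to sound cues that combine discrete properties (which sample to play) with continuously varying ones (spatial position, pitch driven by a data value). It illustrates the kind of linkage described above and is not the actual ACE software; every name and mapping in it is an assumption.

```python
# Hypothetical sketch, not the VIEW/ACE software: an auditory "symbology" mapping
# display events to cues with discrete properties (sound sample) and continuous
# ones (3-D position, pitch scaled by a data value).
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class AuditoryIcon:
    sample: str                                 # placeholder sound file name
    position: Tuple[float, float, float]        # where to spatialize the cue (x, y, z)
    pitch_from_value: Callable[[float], float]  # continuous mapping: data value -> pitch scale

SYMBOLOGY: Dict[str, AuditoryIcon] = {
    "proximity_warning": AuditoryIcon(
        sample="warning_beep.wav",
        position=(0.0, 0.0, 1.0),
        pitch_from_value=lambda distance_m: max(0.5, 2.0 - distance_m / 10.0),
    ),
    "telemetry_stream": AuditoryIcon(
        sample="soft_tick.wav",
        position=(1.0, 0.5, 0.0),
        pitch_from_value=lambda reading: 1.0 + 0.01 * reading,
    ),
}

def render_cue(event: str, value: float) -> None:
    """Stand-in for a real-time spatial audio renderer."""
    icon = SYMBOLOGY[event]
    print(f"play {icon.sample} at {icon.position}, pitch x{icon.pitch_from_value(value):.2f}")

if __name__ == "__main__":
    render_cue("proximity_warning", value=3.0)  # a closer object yields a higher pitch
```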

    Sound at the user interface


    Teaching and Learning Immersion and Presence


    Eyes Alive

    For an animated human face model to appear natural, it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes convey conversational turn-taking and the agent's thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared to evaluate whether they appear natural and effective during communication.
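    A statistically driven saccade generator of the kind described above can be sketched briefly. The code below is not the authors' model: the duration law (duration ≈ 2.2 ms/deg × magnitude + 21 ms) is a commonly cited empirical relation, while the magnitude, direction, and inter-saccade distributions are placeholders standing in for parameters fitted to real eye-tracking data.

```python
# Hedged sketch of a statistically driven saccade generator; distributions and
# parameters are placeholders, not the values fitted in the paper.
import random

def sample_saccade() -> tuple:
    """Return (magnitude_deg, direction_deg, duration_s) for one saccade."""
    magnitude = min(random.expovariate(1 / 8.0), 30.0)  # placeholder: mean ~8 deg, clamped
    direction = random.choice([0, 45, 90, 135, 180, 225, 270, 315])  # 8 principal directions
    duration = (2.2 * magnitude + 21.0) / 1000.0  # widely cited duration-vs-magnitude relation
    return magnitude, direction, duration

def sample_fixation_interval(talking: bool) -> float:
    """Seconds of fixation before the next saccade; gaze is typically held longer
    while listening than while talking (placeholder means)."""
    return random.expovariate(1 / (0.9 if talking else 1.8))

if __name__ == "__main__":
    t = 0.0
    while t < 5.0:  # generate roughly five seconds of eye motion for a talking agent
        mag, ang, dur = sample_saccade()
        print(f"t={t:5.2f}s  saccade {mag:4.1f} deg toward {ang:3d} deg, {dur * 1000:.0f} ms")
        t += dur + sample_fixation_interval(talking=True)
```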