1,590 research outputs found

    3D-LIVE : live interactions through 3D visual environments

    This paper explores Future Internet (FI) 3D-media technologies and the Internet of Things (IoT) in real and virtual environments in order to sense and experience real-time interaction in live situations. Combining FI testbeds and Living Labs (LL) would enable both researchers and users to explore capabilities for entering the 3D tele-immersive (TI) application market and to establish new requirements for FI technology and infrastructure. It is expected that combining FI technology pull with TI market pull would promote and accelerate the creation and adoption of innovative TI services within sport events by user communities such as sport practitioners.

    Network streaming and compression for mixed reality tele-immersion

    Bulterman, D.C.A. [Promotor]; Cesar, P.S. [Copromotor]

    Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments

    Despite the impressive progress of telepresence systems for room-scale scenes with static and dynamic scene entities, expanding their capabilities to larger dynamic environments beyond a fixed size of a few square metres remains challenging. In this paper, we aim at sharing 3D live-telepresence experiences in large-scale environments beyond room scale, with both static and dynamic scene entities, at practical bandwidth requirements, based only on light-weight scene capture with a single moving consumer-grade RGB-D camera. To this end, we present a system built upon a novel hybrid volumetric scene representation: a voxel-based representation for static content, which stores not only the reconstructed surface geometry but also information about object semantics and their accumulated movement over time, combined with a point-cloud-based representation for dynamic scene parts, where the separation from static parts is achieved using semantic and instance information extracted from the input frames. Static and dynamic content are streamed independently yet simultaneously; potentially moving but currently static scene entities are seamlessly integrated into the static model until they become dynamic again, and static and dynamic data are fused at the remote client, allowing the system to achieve VR-based live-telepresence at close to real-time rates. Our evaluation demonstrates the potential of this approach in terms of visual quality and performance, and includes ablation studies on the design choices involved.
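    The static/dynamic split described in this abstract can be pictured as routing each semantically labeled point either into an accumulated voxel map (static content) or a per-frame point cloud (dynamic content). The following is a minimal illustrative sketch, not the paper's implementation; the label set and voxel size are assumptions.

```python
# Minimal sketch of a hybrid static/dynamic scene split: points from an
# RGB-D frame carry a semantic label; labels deemed dynamic (e.g. "person")
# go to a per-frame point cloud, everything else is fused into a coarse
# voxel map. Labels and voxel size below are illustrative only.
from collections import defaultdict

VOXEL_SIZE = 0.05                                 # metres per voxel (assumed)
DYNAMIC_LABELS = {"person", "animal", "vehicle"}  # assumed label set

def voxel_key(point, size=VOXEL_SIZE):
    """Quantise a 3D point to the index of its containing voxel."""
    return tuple(int(c // size) for c in point)

def split_frame(points):
    """points: iterable of (xyz, label) pairs.
    Returns (static voxel map, dynamic point cloud)."""
    static_voxels = defaultdict(list)  # voxel index -> accumulated points
    dynamic_cloud = []                 # re-streamed fresh every frame
    for xyz, label in points:
        if label in DYNAMIC_LABELS:
            dynamic_cloud.append(xyz)
        else:
            static_voxels[voxel_key(xyz)].append(xyz)
    return static_voxels, dynamic_cloud

frame = [((0.01, 0.02, 0.03), "wall"),
         ((0.02, 0.01, 0.04), "wall"),
         ((1.50, 0.80, 2.10), "person")]
static, dynamic = split_frame(frame)
print(len(static), len(dynamic))  # two wall points share one voxel -> 1 1
```

    In the paper's system the static map would additionally persist across frames and carry instance information, so that an entity can move between the two representations over time.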

    3D assistive technologies and advantageous themes for collaboration and blended learning of users with disabilities

    The significance of newly emergent 3D virtual worlds for different genres of users is currently a subject of debate. Users range from education pursuers, business contenders, and social seekers to technology enhancers and many more, comprising both users with typical abilities in physical life and those with various disabilities. This study aims to derive and critically analyse, using grounded theory, the advantageous and disadvantageous themes, and their sub-concepts, of providing e-learning through 3D Virtual Learning Environments (VLEs) such as Second Life to disabled users. It thereby provides evidence that 3D VLEs not only support traditional physical learning, but also offer e-learning opportunities unavailable through 2D VLEs (such as Moodle or Blackboard) as well as learning opportunities unavailable through traditional physical education. Furthermore, to realise the full potential of the derived concepts, the architectural and accessibility design requirements for 3D educational facilities, as proposed by different categories of disabled students to accommodate their needs, are demonstrated.

    Developing enhanced conversational agents for social virtual worlds

    In this paper, we present a methodology for the development of embodied conversational agents for social virtual worlds. The agents provide multimodal communication with their users, including speech interaction. Our proposal combines different techniques related to artificial intelligence, natural language processing, affective computing, and user modeling. A statistical methodology has been developed to model the system's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. In addition, the selection of the next system response is adapted considering information stored in users' profiles as well as the emotional content detected in the users' utterances. Our proposal has been evaluated through the successful development of an embodied conversational agent placed in the Second Life social virtual world. The avatar includes the different models and interacts with the users who inhabit the virtual world in order to provide academic information. The experimental results show that the agent's conversational behavior adapts successfully to the specific characteristics of users interacting in such environments. Work partially supported by the Spanish CICyT projects under grants TRA2015-63708-R and TRA2016-78886-C3-1-R.
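    The corpus-driven response selection this abstract describes can be sketched as keeping counts of (dialog state, response) pairs, picking the most frequent response for the current state, and updating the counts with every new interaction. This is a toy illustration, not the authors' model; the state and response strings are hypothetical.

```python
# Toy corpus-driven response selection: counts of (state -> response)
# pairs are learned from a corpus, the most frequent response is chosen
# for the current dialog state, and counts are updated as new
# interactions arrive. States and responses below are hypothetical.
from collections import Counter, defaultdict

class ResponseModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # state -> Counter of responses

    def observe(self, state, response):
        """Record one (state, response) pair from the corpus or a live turn."""
        self.counts[state][response] += 1

    def select(self, state):
        """Return the most frequent response seen for this state."""
        if not self.counts[state]:
            return "fallback: ask the user to rephrase"
        return self.counts[state].most_common(1)[0][0]

model = ResponseModel()
model.observe("ask_course_info", "list available courses")
model.observe("ask_course_info", "list available courses")
model.observe("ask_course_info", "ask which department")
print(model.select("ask_course_info"))  # -> list available courses
```

    The paper additionally conditions selection on user-profile and emotional information; in this sketch that would amount to folding those features into the state key.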

    Immersive interconnected virtual and augmented reality : a 5G and IoT perspective

    Despite remarkable advances, current augmented and virtual reality (AR/VR) applications remain a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomfort. Bringing AR/VR to the next level to enable immersive, interconnected experiences will require significant advances in 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each of them, we review the current state of the art and the challenges that must be addressed before the dream of remote AR/VR interaction can become reality.
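    The 20 ms bound quoted in this abstract is a budget shared by every stage of the capture-transmit-render pipeline, which is what makes interconnected AR/VR so demanding. A toy budget check follows; the per-stage latencies are illustrative assumptions, not measurements from the article.

```python
# Toy end-to-end latency budget for interconnected AR/VR: the sum of all
# pipeline stages must stay under the ~20 ms threshold cited in the
# abstract to avoid motion sickness. Per-stage numbers are illustrative.
BUDGET_MS = 20.0

stages = {
    "sensor capture":   2.0,
    "encoding":         3.0,
    "network (URLLC)":  5.0,   # assumed end-to-end network share
    "decoding":         3.0,
    "render + display": 6.0,
}

total = sum(stages.values())
slack = BUDGET_MS - total
print(f"total {total:.1f} ms, slack {slack:.1f} ms, ok={total <= BUDGET_MS}")
```

    Even with optimistic numbers the slack is small, which is why the article argues that only the combination of 5G URLLC and Tactile IoT can meet the requirement over wide-area distances.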