    Real-time transmission of panoramic images for a telepresence wheelchair

    © 2015 IEEE. This paper proposes an approach to transmit panoramic images in real time for a telepresence wheelchair. The system can provide remote monitoring and assistance for people with disabilities. This study exploits technological advances in image processing, wireless communication networks, and healthcare systems. High-resolution panoramic images are extracted from a camera mounted on the wheelchair and streamed in real time via a wireless network. The experimental results show that the streaming speed reaches up to 250 KBps. Subjective quality assessments show that the received images remain smooth throughout the streaming period. In terms of objective image quality, the average peak signal-to-noise ratio (PSNR) of the reconstructed images is measured at 39.19 dB, which indicates high image quality.
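
For readers unfamiliar with the metric: PSNR is derived from the mean squared error between the original and reconstructed frames. Below is a minimal sketch in Python, assuming NumPy and synthetic frames; the noise level is a hypothetical stand-in chosen to land near the reported ~39 dB, not the paper's data or pipeline.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical stand-ins for a transmitted panoramic frame: additive Gaussian
# noise with standard deviation ~2.8 gives an MSE near 7.8, hence PSNR near 39 dB.
original = np.random.randint(0, 256, (512, 1024, 3), dtype=np.uint8)
noisy = np.clip(original + np.random.normal(0.0, 2.8, original.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(original, noisy):.2f} dB")
```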

    Do I smell coffee? The tale of a 360º Mulsemedia experience

    One of the main challenges in current multimedia networking environments is to find solutions that help accommodate the next generation of mobile application classes with stringent Quality of Service (QoS) requirements whilst enabling Quality of Experience (QoE) provisioning for users. One such application class, featured in this paper, is 360º mulsemedia (multiple sensorial media), which enriches 360º video by adding sensory effects that stimulate human senses beyond sight and hearing, such as the tactile and olfactory ones. In this paper, we present a conceptual framework for 360º mulsemedia delivery and a 360º mulsemedia-based prototype that enables users to experience 360º mulsemedia content. User evaluations revealed that higher video resolutions do not necessarily lead to the highest QoE levels in our experimental setup. Therefore, bandwidth savings can be leveraged with no detrimental impact on QoE.
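
To make the idea of sensory-effect enrichment concrete, here is a minimal sketch assuming a simple cue list keyed to the video timeline, loosely in the spirit of MPEG-V sensory-effect metadata; the effect types, timings, and lookup function are illustrative, not the framework's actual API.

```python
from dataclasses import dataclass

@dataclass
class SensoryEffect:
    start_s: float      # when the effect begins on the video timeline
    duration_s: float   # how long the actuator stays active
    effect_type: str    # e.g. "scent", "wind", "haptic"
    intensity: float    # normalised 0.0-1.0 actuator intensity

# Hypothetical cue list for a coffee-shop 360º scene.
timeline = [
    SensoryEffect(12.0, 8.0, "scent", 0.7),  # coffee aroma as the cup enters view
    SensoryEffect(25.5, 3.0, "wind", 0.4),   # door opens
]

def due_effects(cues, playback_s):
    """Return the effects that should be active at the current playback time."""
    return [e for e in cues if e.start_s <= playback_s < e.start_s + e.duration_s]

print(due_effects(timeline, 14.0))  # -> the coffee-scent effect
```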

    An Image-Space Split-Rendering Approach to Accelerate Low-Powered Virtual Reality

    Virtual Reality systems offer many opportunities for scientific research and consumer enjoyment; however, they are more demanding than traditional desktop applications and require a wired connection to a desktop to deliver maximum quality. Standalone headsets that are not tethered to computers exist, yet they are driven by mobile GPUs, which provide limited power compared to desktop rendering. Alternative approaches improve performance on mobile devices by having a server render frames for a client, treating the client largely as a display device. However, current streaming solutions suffer from high end-to-end latency due to processing and networking overheads, as well as underutilization of the client. We propose a networked split-rendering approach to achieve faster end-to-end image presentation rates on the mobile device while preserving image quality. Our solution uses an image-space division of labour between the server-side GPU and the mobile client, and achieves a significantly faster runtime than both client-only rendering and a thin-client approach that relies mostly on the server.
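
The abstract does not detail the partitioning scheme, so the sketch below assumes the simplest image-space split: a row-wise division in which a tunable share of each frame is rendered on the server and the remainder on the client, then composited on-device. The resolution and the 0.75 share are illustrative, not the paper's parameters.

```python
import numpy as np

def split_workload(height: int, server_share: float):
    """Partition an H-row frame: the top slice goes to the server GPU,
    the remainder is rendered locally on the mobile client."""
    cut = int(height * server_share)
    return slice(0, cut), slice(cut, height)

def composite(server_rows: np.ndarray, client_rows: np.ndarray) -> np.ndarray:
    """Stitch the two partial renders back into one frame on the client."""
    return np.vstack([server_rows, client_rows])

H, W = 1440, 1600  # per-eye resolution of a hypothetical HMD
server_slice, client_slice = split_workload(H, server_share=0.75)
server_part = np.zeros((server_slice.stop, W, 3), np.uint8)      # received over the network
client_part = np.zeros((H - server_slice.stop, W, 3), np.uint8)  # rendered on-device
frame = composite(server_part, client_part)
assert frame.shape == (H, W, 3)
```

In a real system the share would adapt to measured frame times and network latency rather than stay fixed.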

    Multisensory 360 videos under varying resolution levels enhance presence

    Omnidirectional videos have become a leading multimedia format for Virtual Reality applications. While live 360° videos offer a unique immersive experience, streaming omnidirectional content at high resolutions is not always feasible in bandwidth-limited networks. Whereas flat videos scale to lower resolutions gracefully, 360° video quality degrades severely because of the viewing distances involved in head-mounted displays. Hence, in this paper we first investigate how quality degradation impacts the sense of presence in immersive Virtual Reality applications. We then push the boundaries of 360° technology by enhancing it with multisensory stimuli. 48 participants experienced both 360° scenarios (with and without multisensory content), divided randomly among four conditions characterised by different encoding qualities (HD, FullHD, 2.5K, 4K). The results showed that presence is not mediated by streaming at a higher bitrate. The trend we identified revealed, however, that presence is positively and significantly impacted by the multisensory enhancement. This shows that multisensory technology is crucial in creating more immersive experiences.
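
The claim that downscaling hurts 360° video more than flat video follows from angular resolution: an equirectangular frame spreads its full width over 360° of longitude, so even 4K delivers only about 10.7 pixels per degree in front of the eye. The short calculation below makes this explicit for the four tested encodings; the 90° horizontal viewport is an assumed, typical HMD field of view.

```python
# Pixels per degree for equirectangular 360° video at the four tested encodings.
ENCODING_WIDTHS = {"HD": 1280, "FullHD": 1920, "2.5K": 2560, "4K": 3840}

for name, width in ENCODING_WIDTHS.items():
    ppd = width / 360.0     # pixels per degree of longitude
    viewport_px = ppd * 90  # pixels across an assumed 90° HMD viewport
    print(f"{name:7s} {ppd:5.1f} px/deg -> ~{viewport_px:4.0f} px across a 90° view")
```

HD thus leaves roughly 320 pixels to cover the entire visible viewport, which is why lower resolutions that look acceptable on a flat screen degrade badly inside a headset.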

    Towards a Theoretical Framework of Acceptance of Virtual Reality Technology: Evidence from 360-Video Concert

    We examine the use of 360-degree video technology in a live music event, aiming to explore the factors that lead to acceptance of the VR use case and technology and to narrow the knowledge gap on this topic. We collected self-reported, quantitative data from 23 participants and investigated the user experience during the VR-mediated 360-video concert, as well as the acceptance of 360-video for concert participation and of VR technology use. We found that acceptance of the novel VR-based communication approach correlated mainly with perceived usefulness. Furthermore, perceived usefulness correlated only with fun, not with flow or immersion. We outline the results in a new theoretical framework for studying and predicting the relationships between individual characteristics, user experience, VR evaluation, content and device, and the acceptance of 360-video mediated musical events and VR technology. Implications for VR acceptance theory and design practice are discussed.
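
For readers who want to see the shape of such an analysis, the sketch below computes a Pearson correlation between two acceptance constructs, assuming SciPy and hypothetical 7-point Likert responses for 23 participants; it illustrates the method only and does not reproduce the study's data or results.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical Likert scores (1-7) for 23 participants; not the study's data.
rng = np.random.default_rng(0)
usefulness = rng.integers(1, 8, 23)
fun = np.clip(usefulness + rng.integers(-1, 2, 23), 1, 7)

# Pearson r and its p-value, the usual pair reported for construct correlations.
r, p = pearsonr(usefulness, fun)
print(f"r = {r:.2f}, p = {p:.3f}")
```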

    Visual authoring of virtual reality conversational scenarios for e‑learning

    The COVID-19 pandemic has led to face-to-face activities being delivered in a virtual format that often offers a poor experience in areas such as education. Virtual Learning Environments have improved in recent years thanks to new technologies such as Virtual Reality and chatbots. However, creating Virtual Learning Environments requires advanced programming knowledge, so this work aims to enable teachers to create these new environments easily. It presents a set of extensions for App Inventor that facilitate the authoring of mobile learning apps that use chatbots in a Virtual Reality environment while simultaneously monitoring student activity. The proposal is based on integrating block-based languages with Business Process Model and Notation (BPMN) diagrams. The developed extensions were successfully employed in an educational app called Let’s date!. A quantitative analysis of the use of these extensions in App Inventor was also carried out, showing a significant reduction in the number of blocks required. The proposed contribution has demonstrated its validity for creating virtual learning environments through visual programming and modelling, reducing development complexity.
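
As a conceptual illustration of pairing a BPMN-style flow with a chatbot, the sketch below walks a hand-written task sequence and emits one utterance per node, roughly the kind of flow a teacher would author visually; the node structure and traversal logic are hypothetical and do not reflect the extensions' actual API.

```python
# BPMN-style task sequence driving a chatbot dialogue. Each node carries an
# utterance and a pointer to the next task; None marks the end event.
FLOW = {
    "start":     {"say": "Welcome to Let's date!", "next": "ask_topic"},
    "ask_topic": {"say": "Which topic shall we practise today?", "next": "feedback"},
    "feedback":  {"say": "Well done! Your session was logged for your teacher.", "next": None},
}

def run_dialogue(flow, node="start"):
    """Walk the authored flow, emitting one chatbot utterance per task node."""
    while node is not None:
        step = flow[node]
        print("BOT:", step["say"])
        node = step["next"]

run_dialogue(FLOW)
```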