65 research outputs found

    Real-time transmission of panoramic images for a telepresence wheelchair

    Full text link
    © 2015 IEEE. This paper proposes an approach to transmitting panoramic images in real time for a telepresence wheelchair. The system provides remote monitoring and assistance for people with disabilities. This study exploits technological advances in image processing, wireless communication networks, and healthcare systems. High-resolution panoramic images are extracted from a camera mounted on the wheelchair and streamed in real time via a wireless network. The experimental results show a streaming speed of up to 250 KBps. Subjective quality assessments show that the received images are smooth during the streaming period. In addition, in terms of objective image quality, the average peak signal-to-noise ratio of the reconstructed images is measured at 39.19 dB, which indicates high image quality.
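    The 39.19 dB figure above is a standard peak signal-to-noise ratio. A minimal sketch of how PSNR is computed for 8-bit images follows; the pixel values are illustrative, not the paper's data:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    if len(original) != len(reconstructed):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error over all pixels
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return math.inf  # identical images: infinite PSNR
    return 10.0 * math.log10(peak ** 2 / mse)

# Illustrative 16-pixel 8-bit grayscale frame (not the paper's data)
ref = [100] * 16
rec = [110] + [100] * 15  # a single 10-level error
print(round(psnr(ref, rec), 2))  # 40.17
```

Higher values mean the reconstruction is closer to the original; values around 40 dB are generally considered high quality for lossy image transmission.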

    Real-time video streaming with multi-camera for a telepresence wheelchair

    Full text link
    © 2016 IEEE. This paper presents a new approach for telepresence wheelchairs equipped with multiple cameras. The aim of this system is to provide effective assistance for the elderly and people with disabilities. The work explores the integration of Internet of Things technologies, such as multimedia, wireless Internet communication, and automation control, into a powered wheelchair system. In particular, multiple videos are streamed in real time from an array of cameras mounted on the wheelchair, giving a wide view of the wheelchair's surroundings. Using video communication and interaction, remote users in a distant location can assist in navigating the wheelchair via the Internet through wireless connections. The experimental results show that the system can achieve high-quality video with a streaming rate of up to 30 frames per second (fps) in real time, with an average round-trip time under 27 milliseconds (ms). These results confirm the effectiveness of the proposed system for tele-monitoring and remote control, enabling safer navigation for wheelchair users.
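    The 30 fps and 27 ms figures come from timing captured frames and echoed packets. A hypothetical sketch of how such metrics can be derived from timestamps (all numbers illustrative):

```python
def frames_per_second(timestamps):
    """Average frame rate from monotonically increasing capture timestamps (seconds)."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    # N timestamps bound N-1 inter-frame intervals
    return (len(timestamps) - 1) / elapsed

def average_rtt_ms(send_times, echo_times):
    """Mean round-trip time in ms from paired send/echo timestamps (seconds)."""
    rtts = [(e - s) * 1000.0 for s, e in zip(send_times, echo_times)]
    return sum(rtts) / len(rtts)

# Illustrative: 31 frames spanning exactly one second -> 30 fps
stamps = [i / 30.0 for i in range(31)]
print(round(frames_per_second(stamps), 1))  # 30.0
```

In practice the timestamps would come from the camera capture callback and from ping-style echo packets over the wireless link.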

    Real-time WebRTC-based design for a telepresence wheelchair

    Full text link
    © 2017 IEEE. This paper presents a novel approach to a telepresence wheelchair system capable of real-time video communication and remote interaction. The investigation of this emerging technology aims to provide a low-cost and efficient means of assisted living for people with disabilities. The proposed system was designed and developed using JavaScript with HyperText Markup Language 5 (HTML5) and Web Real-Time Communication (WebRTC), in which an adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, controlling the wheelchair from a distance through an Internet browser, and compared the results with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high quality, at approximately 30 frames per second (fps) with a round-trip time of less than 20 milliseconds (ms). These performance results confirm that WebRTC is a promising approach for developing a telepresence wheelchair system.
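    Adaptive rate control of the kind the abstract mentions reacts to congestion signals such as packet loss. The sketch below is a simplified additive-increase/multiplicative-decrease controller in the spirit of such schemes; it is not the paper's actual algorithm, and all thresholds are illustrative:

```python
def adapt_bitrate(current_kbps, loss_fraction,
                  increase_kbps=50.0, decrease_factor=0.85,
                  floor_kbps=100.0, ceiling_kbps=2500.0):
    """AIMD-style rate control: back off on loss, probe upward otherwise.

    All defaults are illustrative, not values from the paper.
    """
    if loss_fraction > 0.02:          # congestion signal: cut the rate
        rate = current_kbps * decrease_factor
    else:                             # clean channel: probe for headroom
        rate = current_kbps + increase_kbps
    return max(floor_kbps, min(ceiling_kbps, rate))

rate = 1000.0
for loss in (0.0, 0.0, 0.1):          # two clean reports, then 10% loss
    rate = adapt_bitrate(rate, loss)
print(round(rate))  # (1000 + 50 + 50) * 0.85 = 935
```

Real WebRTC stacks use richer bandwidth estimators (delay gradients, receiver reports), but the feedback loop, matching the encoded rate to what the channel can carry, is the same idea the abstract describes.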

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Get PDF
    Health data from consumer off-the-shelf wearable devices is often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people, owing to low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues like graphs and text. These difficulties may hinder critical data understanding. Auditory and tactile feedback can provide additional immediate and accessible cues from these wearable devices, but the limitations of existing data representations need to be understood first. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    Walking away from VR as ‘empathy-machine’: peripatetic animations with 360-photogrammetry

    Get PDF
    My research partakes in an expanded documentary practice that weaves together walking, immersive technologies, and the moving image. Two lines of enquiry motivate the research journey: the first responds to the trope of VR as 'empathy-machine' (Milk, 2015), often accompanied by the expression 'walking in someone else's shoes'. Within a research project that begins on foot, the idiom's significance demands investigation. The second line of enquiry pursues a collaborative artistic practice informed by dialogue and poetry, where the bipedals of walking and the binaries of the digital are entwined by phenomenology, hauntology, performance, and the in-betweens of animation. My practice-as-research methodology involves desk study and experimentation with VR, AR, digital photogrammetry, and CGI animation. Central to my approach is the multifaceted notion of Peripatos, as a school of philosophy, a stroll-like walk, and the path where the stroll takes place, manifested both corporeally and as 'playful curiosity'. The thread that interweaves practice and theory has my moving body at its centre; I call it the 'camera-walk': a processional shoot that documents a real place and the bodies that make it, while my hand holds high a camera-on-a-stick shooting 360-video. The resulting spherical video feeds into photogrammetric digital processing and is reassembled into digital 3D models that form the starting ground for still images, a site-specific installation, augmented reality (AR) exchanges, and short films. Because 360-video includes the body that carries the camera, the digital meshes produced by the 'camera-walk' also reveal the documentarian during the act of documenting. Departing from the pursuit of perfect replicas, my research articulates the iconic lineage of photogrammetry, embracing imperfections as integral.
    Despite the planned obsolescence of my digital instruments, I treat my 360-camera as a 'dangerous tool', uncovering (and inventing) its hidden virtualities, via Vilém Flusser. Against its formative intentions as an accessory for extreme sports, I focus on everyday life, inspired by Harun Farocki's 'another kind of empathy'. Within the collaborative projects presented in my thesis, I move away from the colonialist-inspired ideal of 'walking in someone else's shoes' and 'tread softly' along the footsteps of my co-walkers.

    Human Machine Interfaces for Teleoperators and Virtual Environments

    Get PDF
    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

    An Original Approach for a Better Remote Control of an Assistive Robot

    Get PDF
    Much research has been done in the field of assistive robotics in recent years. The first application field was assistance for disabled people. Different works have been performed on robotic arms in three kinds of situations. In the first case, a static arm, the arm is principally dedicated to office tasks such as using a telephone or fax machine. Several autonomous modes exist, which require knowing the precise position of objects. In the second configuration, the arm is mounted on a wheelchair. It follows the person, who can employ it in more use cases, but if the person must stay in bed, the arm is no longer useful. In a third configuration, the arm is mounted on a separate platform. This configuration allows the largest number of use cases but also poses more difficulties for piloting the robot. The second application field of assistive robotics deals with assistance at home for people losing their autonomy, for example a person with cognitive impairment. In this case, the assistance addresses two main points: security and cognitive stimulation. To ensure the safety of the person at home, different kinds of sensors can be used to detect alarming situations (falls, low cardiac pulse rate, and so on). To assist a distant operator in alarm detection, the idea is to give them the possibility of obtaining complementary information from a mobile robot about the person's activity at home and of being in contact with the person. Cognitive stimulation is one of the therapeutic means used to maintain, for as long as possible, the maximum of the person's cognitive capacities. In this case, the robot can be used to bring cognitive stimulation exercises to the person and to encourage the person to perform them. For these tasks, a totally autonomous robot is very difficult to achieve. In the case of assistance for disabled people, full autonomy is not even what the persons want, as they wish to act by themselves.
    The idea is to develop a semi-autonomous robot that a remote operator can manually pilot with some driving assistance. This is a realistic and, in some respects, desired solution. To achieve it, several scientific problems have to be studied. The first is human-machine cooperation: how can a remote human operator control a robot to perform a desired task? One of the key points is to allow the user to understand clearly the way the robot works. Our original approach is to analyse this understanding through the concept of appropriation introduced by Piaget in 1936. As the robot must have capacities of perception

    Exploring Robot Teleoperation in Virtual Reality

    Get PDF
    This thesis presents research on VR-based robot teleoperation, with a focus on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed that is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated the use of rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale.
    The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study examined how operators used the virtual world scale in supervised control, comparing the scales chosen by participants at the beginning and end of a three-day experiment. The results showed that as operators became better at the task they, as a group, used a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as shifts in their visual priorities as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
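    The rate-mode comparison above contrasts a constant joystick-to-speed gain with one that depends on the virtual world scale. A hypothetical sketch of the two mappings, with illustrative gains and scales (not the thesis's actual parameters):

```python
def end_effector_speed(joystick, world_scale, base_gain_mps=0.10, variable=True):
    """Map a joystick deflection in [-1, 1] to end-effector speed (m/s).

    Constant mapping ignores the VR world scale; variable mapping divides by it,
    so a zoomed-in (enlarged) world yields finer, slower motion.
    Gain and scale values are illustrative assumptions.
    """
    joystick = max(-1.0, min(1.0, joystick))  # clamp deflection
    gain = base_gain_mps / world_scale if variable else base_gain_mps
    return joystick * gain

# Illustrative: full deflection while the virtual world is enlarged 2x
print(end_effector_speed(1.0, 2.0))                   # variable mapping: 0.05
print(end_effector_speed(1.0, 2.0, variable=False))   # constant mapping: 0.1
```

With the variable mapping, enlarging the reconstruction for close inspection automatically slows the robot, which matches the reported trade-off of finer control at the cost of workload.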

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    Get PDF
    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interaction while remaining socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burnout. It is clear that we need better solutions to address these issues, and one avenue showing promise is Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We review the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To address this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices facilitating this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but with the lenses embedded in wearable systems, which might affect the viewing experience.
    We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye level of the viewer; the streamer can place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype based on the co-design findings. Our participants preferred our prototype to simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.