Mobile graphics: SIGGRAPH Asia 2017 course
Peer Reviewed. Postprint (published version).
Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration
In this paper we propose a real-time assistive localization approach to help blind and visually impaired people navigate an indoor environment. The system consists of a mobile vision front end with a portable panoramic lens mounted on a smartphone, and a remote image-feature database of the scene on a GPU-enabled server. Compact and effective omnidirectional image features are extracted and represented on the smartphone front end, then transmitted to the server in the cloud. These features of a short video clip are used to search the database of the indoor environment via image-based indexing, locating the current view within the database, which is associated with floor plans of the environment. A median-filter-based multi-frame aggregation strategy is used for single-path modeling, and a 2D multi-frame aggregation strategy based on the candidates’ distribution densities is used for multi-path environmental modeling, providing a final location estimate. To deal with the high computational cost of searching a large database in a realistic navigation application, data-parallelism and task-parallelism properties are identified in the database indexing process, and computation is accelerated with multi-core CPUs and GPUs. A user-friendly HCI, designed particularly for visually impaired users, is implemented on an iPhone; it also supports system configuration and scene modeling for new environments. Experiments on a database of an eight-floor building demonstrate the capacity of the proposed system, with real-time response (14 fps) and robust localization results.
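The median-filter-based multi-frame aggregation described above can be sketched as follows. This is a minimal illustration only; the function name, windowed-median formulation, and window size are assumptions, not the authors' implementation:

```python
import numpy as np

def aggregate_single_path(frame_estimates, window=5):
    """Smooth noisy per-frame location indices along a single modeled
    path with a sliding median filter (hypothetical sketch)."""
    est = np.asarray(frame_estimates, dtype=float)
    out = np.empty_like(est)
    half = window // 2
    for i in range(len(est)):
        # Median over a window clipped to the clip boundaries
        lo, hi = max(0, i - half), min(len(est), i + half + 1)
        out[i] = np.median(est[lo:hi])
    return out
```

A median (rather than a mean) suppresses isolated mismatches from the image-based indexing step, since a single wildly wrong frame estimate cannot drag the aggregated location off the path.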
An Image-Space Split-Rendering Approach to Accelerate Low-Powered Virtual Reality
Virtual Reality systems provide many opportunities for scientific research and consumer enjoyment; however, they are more demanding than traditional desktop applications and require a wired connection to a desktop to achieve maximum quality. Standalone options that are not connected to computers exist, yet they are powered by mobile GPUs, which provide limited power compared to desktop rendering. Alternative approaches to improving performance on mobile devices use server rendering to render frames for a client and treat the client largely as a display device. However, current streaming solutions suffer from high end-to-end latency due to processing and networking requirements, as well as underutilization of the client. We propose a networked split-rendering approach that achieves faster end-to-end image presentation rates on the mobile device while preserving image quality. Our proposed solution uses an image-space division of labour between the server-side GPU and the mobile client, and achieves significantly faster runtimes than both client-only rendering and a thin-client approach that relies mostly on the server.
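One way to picture an image-space division of labour is a scanline partition of each frame between the server and the client. The sketch below is illustrative only; the fixed split ratio and function name are assumptions, not the paper's scheme:

```python
def split_scanlines(height, server_share):
    """Partition a frame's rows into a server-rendered band and a
    client-rendered band (illustrative image-space split)."""
    if not 0.0 <= server_share <= 1.0:
        raise ValueError("server_share must be in [0, 1]")
    cut = int(round(height * server_share))
    # Half-open row ranges: server renders [0, cut), client [cut, height)
    return (0, cut), (cut, height)
```

For example, `split_scanlines(1080, 0.75)` assigns the top 810 rows to the server and the remaining 270 to the mobile GPU, so the otherwise idle client contributes rendering work instead of acting as a pure display device.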
Enabling Real-Time Shared Environments on Mobile Head-Mounted Displays
Head-Mounted Displays (HMDs) are becoming more prevalent consumer devices, allowing users to experience scenes and environments from a point of view naturally controlled by their movement. However, there is limited application of this experiential paradigm to telecommunications -- that is, where an HMD user can 'call' a mobile phone user and begin to look around in their environment. In this thesis we present a telepresence system for connecting mobile phone users with people wearing HMDs, allowing the HMD user to experience the environment of the mobile user in real time. We developed an Android application that supports generating and transmitting high-quality spherical-panorama-based environments in real time, and a companion application for HMDs to view those environments live. This thesis focusses on the technical challenges involved in creating panoramic environments of sufficient quality to be suitable for viewing inside an HMD, given the constraints that arise from using mobile phones. We present computer vision techniques optimised for these constrained conditions, justifying the trade-offs made between speed and quality. We conclude by comparing our solution to conceptually similar past research along the metrics of computation speed and output quality.
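As a rough illustration of the spherical-panorama viewing involved, the sketch below maps a pixel of an equirectangular panorama to a unit view direction, the basic operation an HMD viewer performs when sampling the panorama for the user's gaze. The coordinate conventions here are assumptions, not taken from the thesis:

```python
import math

def equirect_to_dir(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit view
    direction, assuming +z is the panorama centre and +y is up."""
    lon = (u / width) * 2.0 * math.pi - math.pi   # longitude in [-pi, pi]
    lat = math.pi / 2.0 - (v / height) * math.pi  # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z
```

With these conventions the centre pixel of the panorama maps to the forward direction (0, 0, 1), and a head rotation in the HMD simply selects a different region of (u, v) samples.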
- …