PerfVis: Pervasive Visualization in Immersive Augmented Reality for Performance Awareness
Developers are usually unaware of the impact of code changes on the
performance of software systems. Although developers can analyze a system's
performance by, for instance, executing a performance test to compare two
consecutive versions of the system, switching from a programming task to a
testing task would disrupt the development flow.
In this paper, we propose a city visualization that dynamically
provides developers with a pervasive view of the continuous performance of a
system. We use an immersive augmented reality device (the Microsoft HoloLens) to
display our visualization, extending the integrated development environment on
a computer screen into the physical space. We report on technical details of
the design and implementation of our visualization tool, and discuss early
feedback that we collected on its usability. Our investigation explores a new
visual metaphor to support the exploration and analysis of possibly very large
and multidimensional performance data. Our initial results indicate that the
city metaphor can be adequate for analyzing dynamic performance data on a large
and non-trivial software system.
Comment: ICPE'19 vision, 4 pages, 2 figures, conference
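The city metaphor described above maps performance metrics onto building dimensions. A minimal sketch of one such mapping, assuming hypothetical metric names and scaling choices (the paper's actual encoding is not specified here):

```python
# Hypothetical sketch of a software-city mapping: method-level performance
# metrics become building dimensions. Metric names and scale factors are
# illustrative assumptions, not the PerfVis implementation.
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    height: float     # driven by average execution time
    footprint: float  # driven by call count

def to_buildings(metrics, max_height=10.0, max_footprint=4.0):
    """Scale raw metrics into building dimensions for rendering."""
    peak_time = max(m["avg_time_ms"] for m in metrics)
    peak_calls = max(m["calls"] for m in metrics)
    return [
        Building(
            name=m["method"],
            height=max_height * m["avg_time_ms"] / peak_time,
            footprint=max_footprint * m["calls"] / peak_calls,
        )
        for m in metrics
    ]

metrics = [
    {"method": "parse", "avg_time_ms": 40.0, "calls": 120},
    {"method": "render", "avg_time_ms": 10.0, "calls": 480},
]
city = to_buildings(metrics)
```

Updating the building dimensions on each incoming batch of profiling data is what makes such a view "live" rather than a one-off report.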
Acoustic motion tracking and its application
Video games, Virtual Reality (VR), Augmented Reality (AR), and smart appliances (e.g., smart TVs) all call for new ways for users to interact with and control them. This thesis explores a high-precision acoustic motion tracking system that aims to replace traditional control devices such as the mouse and let the user play games, interact with VR/AR headsets, and control smart appliances. We develop a lightweight system that achieves mm-level tracking accuracy using inaudible sounds. At the heart of our system lies a distributed Frequency Modulated Continuous Waveform (FMCW) scheme that accurately estimates the absolute distance between a receiver and a transmitter that are separate and unsynchronized. We further develop an optimization framework that combines FMCW estimation with Doppler shifts and Inertial Measurement Unit (IMU) measurements to enhance accuracy, along with an efficient algorithm to solve the optimization problem. We build several interesting applications on top of our motion tracking technology, including an audio ruler, drawing in the air, and playing motion-controlled games.
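The core idea of FMCW ranging is that a linear chirp delayed by propagation time d/c produces, after mixing with the undelayed chirp, a beat tone whose frequency is proportional to the distance. A self-contained sketch of this one-way ranging principle, assuming illustrative parameters (18-20 kHz chirp, 48 kHz sampling); this is not the thesis's actual implementation, which additionally handles unsynchronized devices, Doppler, and IMU fusion:

```python
# Illustrative one-way FMCW ranging: a chirp sweeping bandwidth B over
# duration T, delayed by d/c, yields a beat frequency f_b = (B/T) * d / c
# after mixing, so d = f_b * c * T / B. All parameters are assumptions.
import numpy as np

C = 343.0    # speed of sound in air, m/s
FS = 48000   # sample rate, Hz
B = 2000.0   # chirp bandwidth, Hz
T = 0.04     # chirp duration, s
F0 = 18000.0 # chirp start frequency, Hz (inaudible band)

def chirp(delay_s=0.0):
    """Linear chirp, optionally delayed by a propagation time."""
    t = np.arange(int(FS * T)) / FS - delay_s
    return np.cos(2 * np.pi * (F0 * t + 0.5 * (B / T) * t ** 2))

def estimate_distance(tx, rx):
    """Mix transmitted and received chirps; read distance off the beat tone."""
    beat = tx * rx * np.hanning(len(tx))     # mixing brings out the beat tone
    spec = np.abs(np.fft.rfft(beat))
    spec[0] = 0                              # ignore DC
    max_bin = int(1000 * len(tx) / FS)       # search only plausible beat freqs
    f_b = np.argmax(spec[:max_bin]) * FS / len(tx)
    return f_b * C * T / B

d_true = 0.5                                 # meters
d_est = estimate_distance(chirp(), chirp(delay_s=d_true / C))
```

The range resolution here is limited by the FFT bin spacing; the thesis's mm-level accuracy requires finer frequency estimation plus the Doppler/IMU fusion it describes.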
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens
rendering, which turns only a fraction of the user's real environment into AR
while the rest of the environment remains unaffected. Since handheld AR devices
are commonly equipped with video see-through capabilities, AR magic lens
applications often suffer from spatial distortions, because the AR environment
is presented from the perspective of the camera of the mobile device. Recent
approaches counteract this distortion based on estimations of the user's head
position, rendering the scene from the user's perspective. To this end,
approaches usually apply face-tracking algorithms on the front camera of the
mobile device. However, this demands high computational resources and therefore
commonly affects the performance of the application beyond the already high
computational load of AR applications. In this paper, we present a method to
reduce the computational demands for user perspective rendering by applying
lightweight optical flow tracking and an estimation of the user's motion before
head tracking is started. We demonstrate the suitability of our approach for
computationally limited mobile devices and compare it to device perspective
rendering, to head-tracked user perspective rendering, and to fixed
point-of-view user perspective rendering.
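Lightweight optical flow tracking of the kind mentioned above can be as simple as a single Lucas-Kanade estimate of the dominant image motion between frames. A minimal sketch under that assumption (the paper's actual tracker and parameters are not specified here):

```python
# Minimal single-patch Lucas-Kanade flow: estimate one global (dx, dy)
# displacement between two grayscale frames. Illustrative only; not the
# paper's implementation.
import numpy as np

def lucas_kanade(prev, curr):
    """Solve the least-squares brightness-constancy system for one flow vector."""
    Ix = np.gradient(prev, axis=1)           # spatial gradients of prev frame
    Iy = np.gradient(prev, axis=0)
    It = curr - prev                         # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)             # (dx, dy) in pixels

# Synthetic check: shift a smooth pattern one pixel to the right.
x = np.linspace(0, 2 * np.pi, 64)
frame0 = np.cos(x)[:, None] * np.sin(x)[None, :]
frame1 = np.roll(frame0, 1, axis=1)
dx, dy = lucas_kanade(frame0, frame1)
```

Because this solves one 2x2 system per frame pair, it is far cheaper than running a face tracker, which is the point of deferring full head tracking.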
Adding generic contextual capabilities to wearable computers
Context-awareness has an increasingly important role to play in the development of wearable computing systems. To better define this role, we have identified four generic contextual capabilities: sensing, adaptation, resource discovery, and augmentation. A prototype application has been constructed to explore how some of these capabilities could be deployed in a wearable system designed to aid an ecologist's observations of giraffe in a Kenyan game reserve. However, despite the benefits of context-awareness demonstrated in this prototype, widespread adoption of these capabilities is currently stifled by the difficulty of obtaining contextual data. To remedy this situation, the Contextual Information Service (CIS) is introduced. Installed on the user's wearable computer, the CIS provides a common point of access for clients to obtain, manipulate, and model contextual information independently of the underlying plethora of data formats and sensor interface mechanisms.
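A common point of access like the CIS can be pictured as a registry that binds named context attributes to sensor-specific providers, so clients query attributes without knowing the underlying data format. A hypothetical sketch (class, method, and attribute names are illustrative assumptions, not the CIS's actual API):

```python
# Hypothetical CIS-style access point: clients query named context attributes;
# sensor-specific details stay behind registered provider callbacks.
class ContextService:
    def __init__(self):
        self._providers = {}

    def register(self, attribute, provider):
        """Bind a context attribute (e.g. 'location') to a sensor callback."""
        self._providers[attribute] = provider

    def query(self, attribute):
        """Return the current value of a context attribute."""
        if attribute not in self._providers:
            raise KeyError(f"no provider for context attribute {attribute!r}")
        return self._providers[attribute]()

cis = ContextService()
cis.register("location", lambda: (-1.274, 36.803))  # e.g. a GPS lat/lon fix
cis.register("temperature_c", lambda: 27.5)         # e.g. an onboard sensor
pos = cis.query("location")
```

The indirection is the point: swapping a GPS module for a different positioning sensor changes only the registered provider, not any client code.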