Echo State Transfer Learning for Data Correlation Aware Resource Allocation in Wireless Virtual Reality
In this paper, the problem of data correlation-aware resource management is
studied for a network of wireless virtual reality (VR) users communicating over
cloud-based small cell networks (SCNs). In the studied model, small base
stations (SBSs) with limited computational resources act as VR control centers
that collect tracking information from the VR users over the cellular uplink
and send the requested data back to them over the downlink. In such a setting, VR users
may send or request correlated or similar data (panoramic images and tracking
data). This potential spatial data correlation can be factored into the
resource allocation problem to reduce the traffic load in both uplink and
downlink. This VR resource allocation problem is formulated as a noncooperative
game that allows jointly optimizing the computational and spectrum resources,
while being cognizant of the data correlation. To solve this game, a transfer
learning algorithm based on the machine learning framework of echo state
networks (ESNs) is proposed. Unlike conventional reinforcement learning
algorithms that must be executed each time the environment changes, the
proposed algorithm can intelligently transfer information on the learned
utility, across time, to rapidly adapt to environmental dynamics due to factors
such as changes in the users' content or data correlation. Simulation results
show that the proposed algorithm achieves up to 16.7% and 18.2% gains in terms
of delay compared to Q-learning with data correlation and Q-learning without
data correlation, respectively. The results also show that the proposed
algorithm converges faster than Q-learning and can guarantee low delays.
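For readers unfamiliar with the echo state network framework the abstract builds on, the following is a minimal, self-contained sketch of the general ESN pattern: a fixed random reservoir driven by inputs, with only a linear readout trained (here via ridge regression) and reusable as a starting point when the environment changes. All dimensions, the toy utility signal, and the hyperparameters are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 100                                  # assumed sizes

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # reservoir weights (fixed)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))             # scale spectral radius below 1

def run_reservoir(inputs, x=None):
    """Drive the reservoir with an input sequence; return collected states."""
    x = np.zeros(n_res) if x is None else x
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)                 # state update
        states.append(x.copy())
    return np.array(states), x

def train_readout(states, targets, ridge=1e-6):
    """Fit the linear readout by ridge regression on (state, target) pairs."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                           states.T @ targets)

# "Transfer" in the spirit of the abstract: when the environment changes,
# reuse the trained readout as the starting estimate of the learned utility
# instead of relearning from scratch.
inputs = rng.standard_normal((200, n_in))
targets = inputs[:, :1].cumsum(axis=0) * 0.01         # toy utility signal (assumed)
S, x_final = run_reservoir(inputs)
W_out = train_readout(S, targets)
prediction = S @ W_out                                # readout of learned utility
```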
Multimodal, Embodied and Location-Aware Interaction
This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of
gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric `world-based' case.
BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction.
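The abstract does not detail the segmentation and recognition approach, but one common pattern for placement-style gestures is to segment on low motion energy and then classify the resting orientation of the device. The sketch below illustrates that generic idea; the thresholds, window size, and gravity-direction templates are assumptions, not the BodySpace algorithm.

```python
import numpy as np

def segment_stationary(acc, win=20, thresh=0.05):
    """Return start indices of windows where the device is roughly still."""
    mag = np.linalg.norm(acc, axis=1)
    var = np.array([mag[i:i + win].var() for i in range(len(mag) - win)])
    return np.flatnonzero(var < thresh)

def classify_pose(acc_window, templates):
    """Match the mean gravity direction to the nearest body-location template."""
    g = acc_window.mean(axis=0)
    g /= np.linalg.norm(g)
    scores = {name: float(g @ t) for name, t in templates.items()}
    return max(scores, key=scores.get)        # highest cosine similarity

# Assumed gravity-direction templates for two body locations.
templates = {
    "hip": np.array([0.0, 0.0, 1.0]),
    "ear": np.array([1.0, 0.0, 0.0]),
}

# Toy trace: device held still at the ear (gravity along x, small noise).
rng = np.random.default_rng(3)
acc = np.array([1.0, 0.0, 0.0]) + rng.normal(0, 0.01, (100, 3))
still = segment_stationary(acc)
if still.size:
    print(classify_pose(acc[still[0]:still[0] + 20], templates))  # -> "ear"
```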
GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
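As an illustration of Monte Carlo uncertainty propagation of the kind the GpsTunes description mentions, the following hedged sketch samples noisy position, heading, and speed estimates, propagates them forward under a simple constant-velocity model, and evaluates the probability of reaching a site of interest. The motion model, noise levels, and thresholds are assumptions, not the system's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 1000

# Noisy position fix, heading, and speed estimates (assumed std. deviations).
pos = rng.normal([0.0, 0.0], 5.0, (n_samples, 2))     # metres
heading = rng.normal(np.pi / 4, 0.2, n_samples)       # radians
speed = rng.normal(1.4, 0.3, n_samples)               # walking pace, m/s

# Propagate each sample forward to predict positions 10 s ahead.
dt = 10.0
future = pos + dt * speed[:, None] * np.column_stack(
    (np.cos(heading), np.sin(heading)))

# Fraction of samples within 10 m of a site of interest approximates the
# probability the user will reach it; a full system would display the whole
# distribution rather than a point estimate.
site = np.array([12.0, 10.0])
p_reach = np.mean(np.linalg.norm(future - site, axis=1) < 10.0)
print(f"P(within 10 m of site in 10 s) ~ {p_reach:.2f}")
```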
Indoor Localization Solutions for a Marine Industry Augmented Reality Tool
This report describes means for indoor localization under the special, challenging circumstances of the marine industry. The work has been carried out in the MARIN project, in which a tool based on mobile augmented reality technologies is being developed for the marine industry. The tool can be used for various inspection and documentation tasks, and it aims to improve efficiency in design and construction work by offering the possibility to visualize the newest 3D-CAD model in the real environment. Indoor localization is needed to support the system in initializing the accurate camera pose calculation and in automatically finding the right location in the 3D-CAD model. The suitability of each indoor localization method to the specific environment and circumstances is evaluated.
Map matching by using inertial sensors: literature review
This literature review aims to clarify what is known about map matching using
inertial sensors and what the requirements are for map matching, inertial
sensors, sensor placement, and possible complementary positioning technology. The target
is to develop a wearable location system that can position itself within a complex
construction environment automatically with the aid of an accurate building model.
The wearable location system should run on a tablet computer which is running
an augmented reality (AR) solution and is capable of tracking and visualizing 3D-CAD
models in the real environment. The wearable location system is needed to support the
system in initializing the accurate camera pose calculation and automatically
finding the right location in the 3D-CAD model. One type of sensor which does seem
applicable to people tracking is the inertial measurement unit (IMU). The IMU sensors
used in aerospace applications, based on laser gyroscopes, are large but provide
very accurate position estimation with limited drift. Small and light units such
as those based on Micro-Electro-Mechanical Systems (MEMS) sensors are becoming very
popular, but they have a significant bias and therefore suffer from large drifts and
require a calibration method such as map matching. Such a system requires very little
fixed infrastructure; its monetary cost is proportional to the number of users rather
than to the coverage area, as is the case for traditional absolute indoor location
systems.
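To illustrate why MEMS bias leads to the large drifts described above, the sketch below double-integrates a biased accelerometer signal and shows how a periodic absolute correction (standing in here for map matching) bounds the error. The bias magnitude, noise level, and correction interval are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.01, 6000                     # 60 s of samples at 100 Hz
true_acc = np.zeros(steps)                 # user standing still
bias = 0.05                                # assumed accelerometer bias, m/s^2
noise = rng.normal(0, 0.02, steps)
measured = true_acc + bias + noise

# Dead reckoning: double integration lets the constant bias grow into a
# position error that is quadratic in time.
vel = pos = 0.0
for a in measured:
    vel += a * dt                          # first integration: velocity
    pos += vel * dt                        # second integration: position
print(f"uncorrected drift after 60 s: {pos:.1f} m")

# Map matching acts as a periodic absolute correction: snap position to the
# matched location (here the known true position, 0) and reset velocity.
vel = pos = 0.0
for i, a in enumerate(measured):
    vel += a * dt
    pos += vel * dt
    if (i + 1) % 500 == 0:                 # correction every 5 s (assumed)
        vel, pos = 0.0, 0.0
print("with periodic map-matching corrections the error stays bounded")
```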
Requirement analysis and sensor specifications – First version
In this first version of the deliverable, we make the following contributions: to design the
WEKIT capturing platform and the associated experience capturing API, we use a
methodology for system engineering that is relevant for different domains, such as aviation,
space, and medicine, and different professions, such as technicians, astronauts, and medical
staff. Furthermore, in the methodology, we explore the system engineering process and how
it can be used in the project to support the different work packages and, more importantly,
the different deliverables that will follow the current one.
Next, we provide a mapping of high-level functions or tasks (associated with experience
transfer from expert to trainee) to low-level functions such as gaze, voice, video, body
posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the
environment. In addition, we link the low-level functions to their associated sensors.
Moreover, we provide a brief overview of state-of-the-art sensors in terms of their
technical specifications, possible limitations, standards, and platforms.
We outline a set of recommendations pertaining to the sensors that are most relevant for
the WEKIT project, taking into consideration the environmental, technical, and human
factors described in other deliverables. We recommend the Microsoft HoloLens (for augmented
reality glasses), the MyndBand with the NeuroSky chipset (for EEG), the Microsoft Kinect and Lumo Lift
(for body posture tracking), and the Leap Motion, Intel RealSense, and Myo armband (for hand
gesture tracking). For eye tracking, an existing eye-tracking system can be customised to
complement the augmented reality glasses, and the built-in microphone of the augmented
reality glasses can capture the expert's voice. We propose a modular approach for the design
of the WEKIT experience capturing system and recommend that the capturing system
should have sufficient storage or transmission capabilities.
Finally, we highlight common issues associated with the use of different sensors. We
consider that this set of recommendations can be useful for the design and integration of the
WEKIT capturing platform and the WEKIT experience capturing API, to expedite the time
required to select the combination of sensors which will be used in the first prototype.
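As a rough illustration of the high-level-to-low-level mapping the deliverable describes, one might represent it as a nested data structure linking tasks to low-level functions and candidate sensors. The task names and groupings below are invented for illustration; only the sensor names come from the abstract.

```python
# Hypothetical mapping: high-level task -> low-level function -> sensors.
CAPTURE_MAPPING = {
    "demonstrate procedure": {
        "gaze": ["customised eye tracker"],
        "voice": ["AR-glasses built-in microphone"],
        "hand gestures": ["Leap Motion", "Intel RealSense", "Myo armband"],
    },
    "monitor trainee state": {
        "bio-signals": ["MyndBand (NeuroSky chipset)"],
        "body posture": ["Microsoft Kinect", "Lumo Lift"],
    },
}

def sensors_for(task: str) -> list[str]:
    """Flatten the sensor list recommended for a high-level task."""
    return [s for sensors in CAPTURE_MAPPING.get(task, {}).values()
            for s in sensors]

print(sensors_for("demonstrate procedure"))
```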
Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.