Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications
Predictive contract mechanisms such as dead reckoning are widely employed to support scalable
remote entity modeling in distributed interactive applications (DIAs). By employing a form of
controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the
distribution of instantaneous derivative information, dead reckoning trades remote extrapolation
accuracy for low computational complexity and ease of implementation. In this article, we present
a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of
instantaneous velocity information with predictive velocity information in order to improve the
accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning
approach, each controlling host employs a bank of neural network predictors trained to estimate
future changes in entity velocity up to and including some maximum prediction horizon. The effect
of each estimated change in velocity on the current entity position is simulated to produce an
estimate for the likely position of the entity over some short time-span. Upon detecting an error
threshold violation, the controlling host transmits a predictive velocity vector that extrapolates
through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such
an approach succeeds in reducing the spatial error associated with remote extrapolation of entity
state. Consequently, a further reduction in network traffic can be achieved. Simulation results
conducted using several human users in a highly interactive DIA indicate significant potential
for improved scalability when compared to the use of IEEE DIS standard dead reckoning. Our
proposed neuro-reckoning framework exhibits low computational resource overhead for real-time
use and can be seamlessly integrated into many existing dead reckoning mechanisms.
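The update rule the abstract describes can be sketched in a few lines: predicted velocity changes are integrated forward to estimate a future position, and the transmitted "predictive" velocity is the vector that extrapolates through that position. In this sketch the `dvs` list stands in for the outputs of the neural-network predictor bank, and all numeric values are illustrative assumptions, not results from the paper.

```python
import numpy as np

def estimate_future_position(pos, vel, dvs, dt):
    # Integrate each predicted velocity change (one per prediction
    # step) forward from the current state to estimate where the
    # entity is likely to be at the end of the horizon.
    p, v = np.asarray(pos, float), np.asarray(vel, float)
    for dv in dvs:
        v = v + dv
        p = p + v * dt
    return p

def predictive_velocity(pos, est_pos, horizon):
    # The velocity vector that extrapolates through the estimated
    # position, sent in place of the instantaneous velocity.
    return (est_pos - np.asarray(pos, float)) / horizon

# Illustrative usage with made-up predictor outputs.
pos, vel, dt = [0.0, 0.0], [1.0, 0.0], 0.1
dvs = [np.array([0.0, 0.5]), np.array([0.0, 0.5])]
est = estimate_future_position(pos, vel, dvs, dt)
v_pred = predictive_velocity(pos, est, horizon=len(dvs) * dt)
```

A remote host applying `pos + v_pred * t` then tracks the anticipated trajectory rather than the instantaneous one, which is what lets the error threshold be violated less often.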
Towards Enhanced Biofeedback Mechanisms for Upper Limb Rehabilitation in Stroke
This paper presents a progressive
rehabilitation strategy based on the development
of a suite of biomedical feedback sensors to promote
enhanced rehabilitation after stroke. The strategy
involves promoting total upper limb recovery by
focusing on aspects of each stage of post-stroke
rehabilitation. For a patient with a complete absence
of movement in the affected upper limb, brain
signals will be acquired using near-infrared
spectroscopy (NIRS) combined with motor imagery
to move a robotic splint. Once residual movement
has returned, EMG signals from the muscles will be
detected and used to power a robotic splint. For later
stages and continuous enhanced rehabilitation of the
upper limb, a Sensor Glove will be used for intense
rehabilitation exercises of the hand. These combined
techniques cover all levels of ability for total upper
limb rehabilitation and will be used to provide
positive feedback and motivation for patients.
On Consistency and Network Latency in Distributed Interactive Applications: A Survey—Part I
This paper is the first part of a two-part paper that documents a detailed survey
of the research carried out on consistency and latency in distributed interactive applications
(DIAs) in recent decades. Part I reviews the terminology associated with DIAs and offers
definitions for consistency and latency. Related issues such as jitter and fidelity are also
discussed. Furthermore, the various consistency maintenance mechanisms that researchers
have used to improve consistency and reduce latency effects are considered. These
mechanisms are grouped into one of three categories, namely time management,
information management, and system architectural management. This paper presents the
techniques associated with the time management category. Examples of such mechanisms
include time warp, lock step synchronisation and predictive time management. The
remaining two categories are presented in Part II of the survey.
An Information-Based Dynamic Extrapolation Model for Networked Virtual Environments
Various Information Management techniques have been developed to help maintain a consistent shared virtual world in a
Networked Virtual Environment. However, such techniques have to be carefully adapted to the application state dynamics and
the underlying network. This work presents a novel framework that minimizes inconsistency by optimizing bandwidth usage to
deliver useful information. This framework measures the state evolution using an information model and dynamically switches
extrapolation models and the packet rate to make the most information-efficient usage of the available bandwidth. The results
shown demonstrate that this approach can help optimize consistency under constrained and time-varying network conditions.
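The switching idea can be sketched as choosing among polynomial extrapolation models of increasing order. This is a minimal sketch in which the paper's information model is replaced by a simple recent-error criterion; the orders and error values are illustrative assumptions only.

```python
import numpy as np

def extrapolate(pos, vel, acc, dt, order):
    # Zero-, first-, or second-order polynomial extrapolation of
    # remote entity state between received updates.
    if order == 0:
        return pos
    if order == 1:
        return pos + vel * dt
    return pos + vel * dt + 0.5 * acc * dt ** 2

def pick_model(recent_errors):
    # Stand-in for the information model: switch to whichever
    # extrapolation order produced the lowest recent error, i.e.
    # the most information-efficient use of each packet.
    return int(np.argmin(recent_errors))
```

In the framework the abstract describes, the packet rate would be adapted alongside the model choice; here only the model-selection step is shown.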
A Novel Convergence Algorithm for the Hybrid Strategy Model Packet Reduction Technique
Several approaches exist for maintaining consistency in Distributed Interactive
Applications. Among these are techniques such as dead reckoning which use prediction
algorithms to approximate actual user behaviour and thus reduce the number of update
packets required to maintain spatial consistency. The Hybrid Strategy Model operates in a
similar way, exploiting long-term patterns in user behaviour whenever possible. Otherwise
it simply adopts a short-term model. A major problem with these techniques is the
reconstruction of the local behaviour at a remote node. Using the modelled dynamics
directly can result in unnatural and sudden jumps in position where updates occur.
Convergence algorithms are thus required to smoothly reconstruct remote behaviour from
discontinuous samples of the actual local behaviour. This paper makes two important
contributions. Primarily, it proposes a novel convergence approach for the Hybrid Strategy
Model. Secondly, and more fundamentally, it exposes a lack of suitable and quantifiable
measures of different convergence techniques. In this paper the standard smoothing
algorithm employed by DIS is used as a benchmark for comparison purposes.
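The role a convergence algorithm plays can be illustrated with the simplest case, linear convergence, which blends the currently displayed (stale) position toward the freshly updated target over a fixed number of frames instead of snapping to it. This is a generic sketch of that idea, not the novel algorithm proposed in the paper.

```python
def linear_converge(displayed_pos, target_pos, steps):
    # Return the sequence of intermediate positions that smoothly
    # carries the displayed entity onto the corrected target,
    # avoiding the visible jump of applying the update directly.
    path = []
    for i in range(1, steps + 1):
        a = i / steps  # blend factor grows from 1/steps to 1
        path.append(tuple((1 - a) * d + a * t
                          for d, t in zip(displayed_pos, target_pos)))
    return path
```

Measuring how "natural" such a path looks is exactly the gap the paper identifies: there is no standard quantitative metric for comparing convergence schemes.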
On network latency in distributed interactive applications
This paper has three objectives. Firstly, it describes the historical development of Distributed Interactive Applications. It then defines network latency. Finally, it describes a new approach to masking network latency in Distributed Interactive Applications called the strategy model approach. This approach derives from the ongoing PhD studies of one of the authors. A software application to gather strategy data from users is described in detail, and an example of deriving a user strategy is given.
Harnessing brain power at NUI Maynooth
The Department of Electronic Engineering at NUI Maynooth is involved in exciting interdisciplinary
work in the biomedical, digital signal processing, control and electronic systems areas. Here Tomas
Ward, Seán McLoone and Shirley Coyle highlight three specific projects
A Physics-Aware Dead Reckoning Technique for Entity State Updates in Distributed Interactive Applications
This paper proposes a novel entity state update technique for physics-rich environments
in peer-to-peer Distributed Interactive Applications. The proposed technique consists of a dynamic
authority scheme for shared objects and a physics-aware dead reckoning model with an adaptive error
threshold. The former is employed to place a bound on the overall inconsistency present in shared
objects, while the latter is implemented to minimise the instantaneous inconsistency during users’
interactions with shared objects. The performance of the proposed entity state update mechanism is
validated using a simulated application.
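The adaptive-threshold idea can be sketched as tightening the dead-reckoning error threshold while a user is interacting with a shared object, so that inconsistency is smallest exactly when it matters most. The scaling factor and the interaction test here are illustrative assumptions, not parameters taken from the paper.

```python
def adaptive_threshold(base, interacting, scale=0.25):
    # Tighten the error threshold during interaction with a shared
    # object, relax it otherwise (scale=0.25 is an assumed value).
    return base * scale if interacting else base

def needs_update(true_pos, predicted_pos, threshold):
    # A controlling host sends a state update only when the dead
    # reckoning model's error exceeds the current threshold.
    err = sum((t - p) ** 2 for t, p in zip(true_pos, predicted_pos)) ** 0.5
    return err > threshold
```

Pairing this with an authority scheme, as the abstract describes, bounds overall inconsistency: one peer owns each shared object's state, and the threshold governs how far the other peers' views may drift.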