Adaptable videogame platform for interactive upper extremity rehabilitation
The primary objective of this work is to design a recreational rehabilitation videogame platform for customizing motivating games that interactively encourage purposeful upper extremity gross motor movements. Virtual reality (VR) technology is widely used in rehabilitation therapies, but there is a constant need for more accessible and affordable systems. We have developed a recreational VR game platform that can be used as an independent therapy supplement without laboratory equipment and is inexpensive, motivating, and adaptable. The behaviors and interactive features can be easily modified and customized based on players' limitations or progress.
A real-time method of capturing hand movements using programmed color detection to create the simulated virtual environments (VEs) is implemented. Color markers are tracked and simultaneously given coordinates in the VE, where the player sees representations of their hands and other interacting objects whose behaviors can be customized and adapted to fit therapeutic objectives and players' interests. After gross motor task repetition and involvement in the adaptable games, mobility of the upper extremities may improve. The videogame platform is expanded and optimized to allow modifications to base inputs and algorithms for object interactions through graphical user interfaces, thus providing the adaptability needed in VR rehabilitation.
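The color-detection step described above can be sketched as follows: threshold each camera frame on a color range and take the centroid of the matching pixels as the marker's coordinates in the VE. This is a minimal illustrative sketch, assuming an RGB threshold and NumPy; the thesis does not specify its actual implementation or color ranges.

```python
import numpy as np

def track_marker(frame_rgb, lo, hi):
    """Return the (x, y) centroid of pixels inside the inclusive RGB range, or None."""
    mask = np.all((frame_rgb >= lo) & (frame_rgb <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                       # marker hidden this frame
        return None
    return float(xs.mean()), float(ys.mean())

# Usage: a synthetic 120x160 frame with a green square standing in for the glove marker.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 70:90] = (0, 255, 0)          # pure green block
pos = track_marker(frame, (0, 200, 0), (50, 255, 50))
```

Feeding the returned centroid into the game loop each frame is what lets the player's hand drive the on-screen representation in real time.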
Acquisition and extinction across multiple virtual reality contexts: implications for specific phobias and current treatment methods
Victor Wong studied human acquisition learning over multiple contexts using virtual reality. He found that learning an association over multiple contexts can impact subsequent extinction training. This suggests that fears acquired over multiple contexts may be more difficult to treat with exposure-based therapies, which may need to be augmented to remain effective.
The application of classical conditioning to the machine learning of a commonsense knowledge of visual events
In the field of artificial intelligence, possession of commonsense knowledge has long been considered a requirement for constructing a machine that possesses artificial general intelligence. The conventional approach to providing this commonsense knowledge is to manually encode the required knowledge, a process that is both tedious and costly. After an analysis of classical conditioning, it was deemed that constructing a system based upon the stimulus-stimulus interpretation of classical conditioning could allow commonsense knowledge to be learned by a machine directly and passively observing its environment. Based upon these principles, a system was constructed that uses a stream of events observed within the environment to learn rules regarding which event is likely to follow the observation of another event. The system makes use of a feedback loop between three sub-systems: one that associates events that occur together, a second that accumulates evidence that a given association is significant, and a third that recognises the significant associations. The recognition of past associations allows for the creation of evidence both for and against the existence of a particular association, and also allows more complex associations to be created by treating instances of strongly associated event pairs as themselves events. Testing the abilities of the system involved simulating three different learning environments. The results found that measures of significance based on classical conditioning generally outperformed a probability-based measure. This thesis contributes a theory of how a stimulus-stimulus interpretation of classical conditioning can be used to create commonsense knowledge, and an observation that a significant sub-set of classical conditioning phenomena likely exist to aid in the elimination of noise. This thesis also represents a significant departure from existing reinforcement learning systems, as the system presented here does not perform any form of action selection.
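The three-sub-system feedback loop described above can be sketched in miniature: associate adjacent events, accumulate evidence counts, and recognise pairs whose follow-frequency crosses a threshold. The names and the 0.5 threshold are illustrative assumptions; the thesis's actual conditioning-based significance measures are not reproduced here.

```python
from collections import Counter

def learn_associations(events, threshold=0.5):
    """Learn which event tends to follow which from a stream of observations."""
    pair_counts = Counter()        # evidence that B followed A
    antecedent_counts = Counter()  # how often A was seen at all
    for a, b in zip(events, events[1:]):
        pair_counts[(a, b)] += 1
        antecedent_counts[a] += 1
    # recognise pairs whose follow-frequency exceeds the threshold
    return {pair for pair, n in pair_counts.items()
            if n / antecedent_counts[pair[0]] > threshold}

stream = ["lightning", "thunder", "rain", "lightning", "thunder", "sun"]
rules = learn_associations(stream)
```

A recognised pair such as `("lightning", "thunder")` could then itself be treated as a single event, which is how the system builds the more complex associations mentioned above.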
Learning and reversal in the sub-cortical limbic system: a computational model
The basal ganglia are a group of nuclei that signal to and from the cerebral cortex. They play an important role in cognition and in the initiation and regulation of normal motor activity. A range of characteristic motor diseases, such as Parkinson's and Huntington's, have been associated with the degeneration and lesioning of the dopaminergic neurons that target these regions. The study of dopaminergic activity has numerous benefits, from understanding what effects neurodegenerative diseases have on behaviour to determining how the brain responds and adapts to rewards. It is also useful in understanding what motivates agents to select actions and do the things that they do.

The striatum is a major input structure of the basal ganglia and a target of dopaminergic neurons originating in the midbrain. These dopaminergic neurons release dopamine, which is known to exert modulatory influences on the striatal projections. The dorsal regions of the striatum are involved in action selection and control, while the dopaminergic projections to the ventral striatum are involved in reward-based learning and motivation. There are many computational models of the dorsolateral striatum and the basal ganglia nuclei which have been proposed as neural substrates for prediction, control, and action selection. However, there are relatively few models which aim to describe the role of the ventral striatal nucleus accumbens, and its core and shell subdivisions, in motivation and reward-related learning.

This thesis presents a systems-level computational model of the sub-cortical nuclei of the limbic system which focuses in particular on the nucleus accumbens shell and core circuitry. It is proposed that the nucleus accumbens core plays a role in enabling reward-driven motor behaviour by acquiring stimulus-response associations which are used to invigorate responding. The nucleus accumbens shell mediates the facilitation of highly rewarding behaviours as well as behavioural switching.

In this model, learning is achieved by implementing isotropic sequence order learning with a third factor (ISO-3) that triggers learning at relevant moments. This third factor is modelled by phasic dopaminergic activity, which enables long-term potentiation to occur during the acquisition of stimulus-reward associations. When a stimulus no longer predicts reward, tonic dopaminergic activity is generated, which enables long-term depression. Weak depression has been simulated in the core so that stimulus-response associations which are used to enable instrumental responding are not rapidly abolished. However, comparatively strong depression is implemented in the shell so that information about the reward is quickly updated. The shell influences the facilitation of highly rewarding behaviours enabled by the core through a shell-ventral pallido-mediodorsal pathway. This pathway functions as a feed-forward switching mechanism and enables behavioural flexibility.

The model presented here is capable of acquiring associations between stimuli and rewards and of simulating reversal learning. In contrast to earlier work, reversal is modelled by the attenuation of the previously learned behaviour. This allows the reinstatement of behaviour to recur quickly, as observed in animals. The model is tested in both open- and closed-loop experiments and compared against animal experiments.
Agent-based Computing in Java
Agents are powerful, autonomous entities capable of performing simple or vastly complex operations, individually or in groups of agent systems. Their capabilities extend significantly as mobile agents distributed across a network. Agent-based computing is a widely used technology with a broad range of applications, particularly in distributed computing and agent-based modeling. Many types of systems can be designed using the different architectures that define how agents act, communicate, migrate, and more. This paper surveys agent-based computing, agent architectures, and efforts at the standardization of certain aspects of the technology. It explores an existing framework called Jade through the lens of a demonstration based on the Sugarscape model, implemented using Jade's library. Finally, it presents a new framework called NOMAD, a simple barebones framework which comprises the most essential components needed for a mobile agent framework. With it, a user can quickly and more deeply understand the vital challenges agent systems must address, such as communication and code mobility, and the solutions that need to be implemented. They will be able to use the framework to extend its capabilities, create new components, and build powerful agent systems of their own.
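The agent-communication challenge mentioned above can be sketched with two agents exchanging messages through mailboxes registered in a directory. This is an illustrative toy model, not Jade's or NOMAD's actual API; the class and method names are assumptions.

```python
from queue import Queue

class Agent:
    def __init__(self, name, directory):
        self.name, self.inbox = name, Queue()
        directory[name] = self              # register so others can find us by name

    def send(self, directory, to, content):
        directory[to].inbox.put((self.name, content))

    def step(self):
        """Handle one queued message and report what was received."""
        sender, content = self.inbox.get()
        return f"{self.name} received {content} from {sender}"

directory = {}
a, b = Agent("alice", directory), Agent("bob", directory)
a.send(directory, "bob", "ping")
result = b.step()
```

Even this minimal version surfaces the design questions a real framework must answer: how agents are named and discovered, what a message envelope contains, and when each agent gets scheduled to run.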
PROSPECTIVE HEAD MOVEMENT CORRECTION FOR HIGH-RESOLUTION MRI USING AN IN-BORE OPTICAL TRACKING SYSTEM
In MRI of the human brain, subject motion is a major cause of magnetic resonance image quality degradation. To compensate for the effects of head motion during data acquisition, an in-bore optical motion tracking system is proposed. The system comprises one or two MR-compatible infrared cameras fixed on a holder directly above and in front of the head coil. The resulting close proximity of the cameras to the object allows precise tracking of its movement. During image acquisition, the MRI scanner uses this tracking information to prospectively compensate for head motion by adjusting gradient field direction and RF phase and frequency. Experiments performed on subjects demonstrate the system's robustness, exhibiting an accuracy of better than 0.1 mm and 0.15°.
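The gradient-direction adjustment mentioned above amounts to rotating the imaging axes by the tracked head rotation so the acquisition frame follows the head. The sketch below assumes the gradient axes are represented as the rows of a 3x3 matrix and the tracker reports a rotation matrix; the scanner's real interface is not specified in the abstract.

```python
import numpy as np

def corrected_gradients(grad_axes, head_rotation):
    """Rotate each gradient axis (a row of grad_axes) by the tracked head rotation."""
    return grad_axes @ head_rotation.T

theta = np.deg2rad(10)                     # tracker reports a 10-degree rotation about Z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
new_axes = corrected_gradients(np.eye(3), Rz)
```

In the real system this update, plus matching RF phase and frequency offsets for translations, is applied between excitations so that each readout is acquired in the moved head frame.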
Intelligent Agent Architectures: Reactive Planning Testbed
An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.
Design revolutions: IASDR 2019 Conference Proceedings. Volume 1: Change, Voices, Open
In September 2019, Manchester School of Art at Manchester Metropolitan University was honoured to host the biennial conference of the International Association of Societies of Design Research (IASDR) under the unifying theme of DESIGN REVOLUTIONS. This was the first time the conference had been held in the UK. Through key research themes across nine conference tracks (Change, Learning, Living, Making, People, Technology, Thinking, Value and Voices), the conference opened up compelling, meaningful and radical dialogue on the role of design in addressing societal and organisational challenges. This Volume 1 includes papers from the Change, Voices and Open tracks of the conference.
Deep Learning Localization for Self-driving Cars
Identifying the location of an autonomous car with the help of visual sensors can be a good alternative to traditional approaches like the Global Positioning System (GPS), which is often inaccurate or unavailable due to poor signal coverage. Recent research in deep learning has produced excellent results in different domains, motivating this thesis, which uses deep learning to solve the problem of localization in smart cars with visual data.
Deep Convolutional Neural Networks (CNNs) were used to train models on visual data corresponding to unique locations throughout a geographic location. In order to evaluate the performance of these models, multiple datasets were created from Google Street View as well as manually by driving a golf cart around the campus while collecting GPS tagged frames. The efficacy of the CNN models was also investigated across different weather/light conditions.
Validation accuracies as high as 98% were obtained from some of these models, showing that this method has the potential to act as an alternative or aid to traditional GPS-based localization for cars. The root mean square (RMS) precision of Google Maps is often between 2-10 m, whereas the precision required for the navigation of self-driving cars is between 2-10 cm. Empirically, this precision has been achieved with the help of different error-correction systems on GPS feedback. The proposed method was able to achieve an approximate localization precision of 25 cm without the help of any external error-correction system.
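The localization-as-classification setup described above treats each geographic cell as a class and trains a classifier to map an image to a cell. As a minimal sketch, the deep CNN is replaced by a single softmax layer on flattened pixels, purely to show the training loop on synthetic data; the thesis's actual networks, Street View datasets, and GPS-tagged frames are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, img_dim = 4, 64                   # 4 campus "cells", 8x8 flattened images
# synthetic training set: each cell has a distinct mean appearance plus noise
means = rng.normal(size=(n_cells, img_dim))
X = np.concatenate([m + 0.1 * rng.normal(size=(50, img_dim)) for m in means])
y = np.repeat(np.arange(n_cells), 50)

W = np.zeros((img_dim, n_cells))
for _ in range(200):                       # plain gradient descent on cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1           # dL/dlogits for softmax + NLL
    W -= 0.01 * X.T @ p / len(y)

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

Swapping the linear layer for a CNN and the synthetic vectors for dashboard frames gives the structure the thesis evaluates; the predicted cell index then stands in for a coarse GPS fix.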
Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitoring and control.