Development of a virtual reality ophthalmoscope prototype
The eye examination is an important procedure that provides information about the condition
of the eye by observing its fundus, allowing the identification of
abnormalities such as blindness, diabetes, hypertension, and bleeding resulting from trauma,
among others. A proper eye fundus examination allows the identification of conditions that may
compromise sight; however, the eye examination is challenging because it requires extensive
practice to develop the interpretation skills that allow successful identification of
abnormalities at the back of the eye seen through an ophthalmoscope. To assist trainees in
developing eye examination skills, medical simulation devices provide training opportunities
to explore numerous eye cases in simulated, controlled, and monitored scenarios.
However, advances in eye simulation have led to expensive simulators with limited access, as
practice remains conducted on a one-trainee basis, in some cases offering the instructor a view
of the trainee's interactions. Because of the costs associated with medical simulation, various
alternatives are reported in the literature, presenting cost-effective and consumer-level
approaches to maximize the effectiveness of eye examination training. In this work,
we present the development of an immersive and non-immersive augmented reality application
for Android mobile devices with interactions through a 3D printed controller with embedded
electronic components that mimics a real ophthalmoscope. The application presents users
with a virtual patient visiting the doctor for an eye examination, and requires the trainees to
perform the eye fundus examination and diagnose their findings. The immersive version of
the application requires the trainees to wear a mobile VR headset and hold the 3D printed
ophthalmoscope, while the non-immersive version requires them to hold the marker within
the field of view of the mobile device.
Development and preliminary evaluation of a novel low cost VR-based upper limb stroke rehabilitation platform using Wii technology.
Abstract Purpose: This paper proposes a novel system (using the Nintendo Wii remote) that offers customised, non-immersive, virtual reality-based, upper-limb stroke rehabilitation and reports on promising preliminary findings with stroke survivors. Method: The system novelty lies in the high accuracy of the full kinematic tracking of the upper limb movement in real-time, offering a strong personal connection between the stroke survivor and a virtual character when executing therapist-prescribed adjustable exercises/games. It allows the therapist to monitor patient performance and to individually calibrate the system in terms of range of movement, speed and duration. Results: The system was tested for acceptability with three stroke survivors with differing levels of disability. Participants reported an overwhelming connection with the system and avatar. A two-week, single case study with a long-term stroke survivor showed positive changes in all four outcome measures employed, with the participant reporting better wrist control and greater functional use. Activities which were deemed too challenging or too easy were associated with lower scores of enjoyment/motivation, highlighting the need for activities to be individually calibrated. Conclusions: Given the preliminary findings, it would be beneficial to extend the case study in terms of duration and participants and to conduct an acceptability and feasibility study with community-dwelling survivors. Implications for Rehabilitation Low-cost, off-the-shelf game sensors, such as the Nintendo Wii remote, are acceptable to stroke survivors as an add-on to upper limb stroke rehabilitation but have to be customised to provide high-fidelity and real-time kinematic tracking of the arm movement.
Providing therapists with real-time and remote monitoring of the quality of the movement, and not just the amount of practice, is imperative for getting a better understanding of each patient and administering the right amount and type of exercise. The ability to translate therapeutic arm movement into individually calibrated exercises and games allows accommodation of the wide range of movement difficulties seen after stroke, and the ability to adjust these activities (in terms of speed, range of movement and duration) will aid motivation and adherence - key issues in rehabilitation. With increasing pressures on resources and the move to more community-based rehabilitation, the proposed system has the potential for promoting the intensity of practice necessary for recovery in both community and acute settings.
The National Health Service (NHS) London Regional Innovation Fund
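The abstract stresses that each activity is individually calibrated to the patient's range of movement. As a minimal sketch of that calibration idea (the function names and the 5-40 degree example range are illustrative assumptions, not details from the paper), a measured joint angle can be mapped onto a normalised game input clamped to the calibrated range:

```python
def calibrate(min_angle: float, max_angle: float):
    """Return a mapper from a measured joint angle (degrees) to a 0..1
    game input, clamped to the patient's calibrated range of movement."""
    if max_angle <= min_angle:
        raise ValueError("max_angle must exceed min_angle")
    span = max_angle - min_angle

    def to_input(angle: float) -> float:
        # Clamp so movements outside the calibrated range never
        # push the avatar beyond its endpoints.
        return min(1.0, max(0.0, (angle - min_angle) / span))

    return to_input

# Hypothetical patient whose therapist measured wrist extension of 5-40 degrees:
wrist = calibrate(5.0, 40.0)
print(wrist(5.0))   # 0.0 -> avatar at rest
print(wrist(22.5))  # 0.5 -> mid-range
print(wrist(60.0))  # 1.0 -> clamped at the calibrated maximum
```

Re-running `calibrate` with new endpoints is all that is needed as the patient's range of movement improves, which matches the adjustability the abstract describes.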
The selection and evaluation of a sensory technology for interaction in a warehouse environment
In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life as it has improved human performance in the completion of daily tasks using computerised systems. The increase in the variety of bio-sensing and wearable technologies on the market has propelled designers towards designing more efficient, effective and fully natural User-Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric inputs into a system and improving sports performance. Various challenges are experienced by workers in a warehouse environment. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise undeniably exists in some industrial environments and is known as a major factor that causes communication problems. This has reduced the popularity of verbal interfaces to computer applications, such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. These can have a negative impact on job performance and cause a worker to execute a task incorrectly in a warehouse environment. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural behaviours of humans which are applicable in the interaction with a computerised environment were described and discussed.
The interactive behaviours included vision, hearing, speech production and physiological movement, and other natural human behaviours such as paying attention, action slips and the action of counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a specific biological or physiological activity. Thereafter, Machine Learning (ML) is applied in order to train a computer system to detect these features and convert them to a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand-finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface Electromyography signals acquired by the MYO Armband around the forearm muscle. Employing the ANN, the MCIp classified gestures with an accuracy of 34.87% in the hands-free situation. The MCIp furthermore enabled users to provide numeric inputs hands-full with an accuracy of 59.7% after a training session of only 10 seconds per gesture. The results were obtained using eight participants. Similar experimentation with the MYO Armband has not been found reported in the literature at submission of this document.
Based on this novel experimentation, the main contribution of this research study is the suggestion that a MYO Armband, as a commercially available muscle-sensing device, has the potential as an MCI to recognise finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
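The thesis's exact feature set and ANN topology are not given in this abstract. As an illustrative sketch of the general pipeline it describes (window the 8-channel sEMG stream, extract time-domain features, classify the gesture), with a simple nearest-centroid classifier standing in for the ANN and synthetic signals standing in for MYO recordings:

```python
import numpy as np

def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel mean absolute value (MAV) and root mean square (RMS)
    of one sEMG window shaped (samples, channels) -- two classic
    time-domain features used in gesture recognition."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

class CentroidGestureClassifier:
    """Nearest-centroid stand-in for the thesis's ANN: each gesture is
    represented by the mean feature vector of its training windows."""
    def fit(self, windows, labels):
        feats = np.array([emg_features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[l == g for l in labels]].mean(axis=0) for g in self.labels_])
        return self

    def predict(self, window):
        dists = np.linalg.norm(self.centroids_ - emg_features(window), axis=1)
        return self.labels_[int(np.argmin(dists))]

# Synthetic 8-channel data: a "fist" gesture with strong activity on the
# first four channels, "rest" with low activity everywhere.
rng = np.random.default_rng(0)
fist = [rng.normal(0, 1.0, (200, 8)) * [3, 3, 3, 3, 1, 1, 1, 1] for _ in range(5)]
rest = [rng.normal(0, 0.2, (200, 8)) for _ in range(5)]
clf = CentroidGestureClassifier().fit(fist + rest, ["fist"] * 5 + ["rest"] * 5)
print(clf.predict(rng.normal(0, 1.0, (200, 8)) * [3, 3, 3, 3, 1, 1, 1, 1]))  # fist
```

Swapping the centroid step for a trained ANN (as the MCIp does) changes only the classifier; the windowing and feature extraction stay the same.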
The development and applications of serious games in the public services: defence and health
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The latest advances of Virtual Reality technologies and three-dimensional graphics, as well as the developments in Gaming Technologies in recent years, have driven the proliferation of Serious Games across a broader spectrum of research applications. Among the most popular areas of application are public services such as Defence and Health, where digital technologies present new challenges and opportunities for the research and development of Serious Games in a variety of contexts. As with all games, user engagement is elevated and, apart from the entertainment aspect, Serious Games serve as a novel and promising alternative means of knowledge transfer. Furthermore, Serious Games bring the end user and the overall society a series of attractive benefits, including safety, cost-effectiveness, increased motivation and personalisation. Hence, this Thesis aims to investigate novel approaches to developing Serious Games that utilise the recent advances of Virtual Reality and Gaming Technology and facilitate the aforementioned benefits. The design and development of the novel tools and applications follow an iterative process, driven by a review of the available literature as well as end-user feedback.
EPSRC (Engineering and Physical Sciences Research Council), MOD (UK), NHS (UK)
Online Personal Computer Assembling Troubleshooting (P-CAT.com)
This report describes the research efforts carried out for the Final Year Project.
P-CAT.com was developed to improve the current way of searching for solutions to PC
assembly problems over the internet, and hence to educate PC users about computer
components so that they can solve PC problems on their own. As a basis for
implementation, a literature review determined the issues involved, how to
overcome them, and how to design a graphical user interface based on previous studies
in human-computer interaction. The literature review also covers lessons learnt
from similar existing projects. In addition, research was conducted to
determine user requirements and needs, and the results were summarised to determine
the next steps. This report also explains the planning phase, which includes the
system architecture design, the network architecture and the design document. Next, the
report describes the strategies and tools used to develop P-CAT.com. Finally, the
purpose of the gathered data, together with conclusions, recommendations and future
enhancements, is briefly described, along with the next steps involved in
developing P-CAT.com to meet its goals as initially proposed.
Game design for a serious game to help learn programming
Master's thesis. Multimedia. Faculdade de Engenharia, Universidade do Porto. 201
An architecture supporting the development of serious games for scenario-based training and its application to Advanced Life Support
The effectiveness of serious games for training has already been proven in several domains. A major obstacle to the mass adoption of serious games comes from the difficulties in their development, due to the lack of widely adopted architectures that could streamline the creation process. In this thesis we present an architecture supporting the development of serious games for scenario-based training, a serious game for medical training that we developed using this architecture, and the results of a study of its effectiveness.
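The abstract does not detail how the architecture represents a training scenario. A common approach for scenario-based training, sketched here purely as an assumption (the `Scenario`/`Step` names and the ALS step labels are ours, not from the thesis), is to encode the scenario as an ordered script of expected actions and score the trainee's actions against it:

```python
from dataclasses import dataclass

@dataclass
class Step:
    prompt: str      # what the simulated patient/monitor shows the trainee
    expected: str    # action identifier the trainee is expected to perform

@dataclass
class Scenario:
    name: str
    steps: list

def score(scenario: Scenario, trainee_actions: list):
    """Compare the trainee's ordered actions with the scenario script.
    Returns (correct_steps, total_steps)."""
    correct = sum(
        1 for step, action in zip(scenario.steps, trainee_actions)
        if action == step.expected)
    return correct, len(scenario.steps)

# Illustrative Advanced Life Support fragment (labels are hypothetical):
als = Scenario("ALS: shockable rhythm", [
    Step("Patient is unresponsive", "check_breathing"),
    Step("No normal breathing", "start_cpr"),
    Step("Monitor shows VF", "deliver_shock"),
])
print(score(als, ["check_breathing", "start_cpr", "call_team"]))  # (2, 3)
```

Keeping the scenario as data rather than code is what lets one engine serve many training domains, which is the reuse argument the thesis makes for a shared architecture.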