Development of a virtual reality ophthalmoscope prototype
The eye examination is an important procedure that provides information about the condition
of the eye by observing its fundus, allowing the observation and identification of
abnormalities and signs of conditions such as blindness, diabetes, hypertension, and bleeding
resulting from trauma, among others. A proper eye fundus examination allows identifying
conditions that may compromise sight; however, the examination is challenging because it
requires extensive practice to develop the interpretation skills needed to successfully identify
abnormalities at the back of the eye as seen through an ophthalmoscope. To assist trainees in
developing the eye examination skills, medical simulation devices are providing training opportunities
to explore numerous eye cases in simulated, controlled, and monitored scenarios.
However, advances in eye simulation have led to expensive simulators with limited access, as
practice remains conducted on a one-trainee basis, in some cases offering the instructor a view
of the trainee's interactions. Because of the costs associated with medical simulation, there
are various alternatives reported in the literature presenting cost-effective and consumer-level
approaches to maximize the effectiveness of eye examination training. In this work,
we present the development of an immersive and a non-immersive augmented reality application
for Android mobile devices with interactions through a 3D printed controller with embedded
electronic components that mimics a real ophthalmoscope. The application presents users
with a virtual patient visiting the doctor for an eye examination, and requires the trainees to
perform the eye fundus examination and diagnose their findings. The immersive version of
the application requires the trainees to wear a mobile VR headset and hold the 3D printed
ophthalmoscope, while the non-immersive version requires them to hold the marker within
the field of view of the mobile device.
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
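The MR pattern the abstract describes maps naturally onto Conway's life: a mapper emits a neighbor-count contribution for each neighbor of a live cell, and a reducer applies the birth/survival rules per cell. The following Python sketch illustrates that mapper/reducer pair with the shuffle phase simulated in-process; it is an illustration of the general pattern under our own assumptions (function names and the 0-valued liveness marker are ours), not the authors' optimized streaming implementation.

```python
from collections import defaultdict

def mapper(live_cells):
    """Emit (cell, value) pairs: a 1 for each of the 8 neighbors of a live
    cell, plus a 0-valued marker so the reducer knows the cell was alive."""
    for (r, c) in live_cells:
        yield (r, c), 0  # liveness marker
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    yield (r + dr, c + dc), 1

def reducer(cell, values):
    """Conway's rules: a cell is alive next generation with exactly 3 live
    neighbors, or with 2 if it was already alive (marker 0 present)."""
    was_alive = 0 in values
    neighbors = sum(values)
    if neighbors == 3 or (neighbors == 2 and was_alive):
        yield cell

def step(live_cells):
    """One generation: map, shuffle (group values by key), reduce."""
    groups = defaultdict(list)
    for cell, value in mapper(live_cells):
        groups[cell].append(value)
    next_gen = set()
    for cell, values in groups.items():
        next_gen.update(reducer(cell, values))
    return next_gen

# A blinker oscillates between a horizontal and a vertical bar.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))  # → [(0, 1), (1, 1), (2, 1)]
```

In a real streaming deployment the mapper and reducer would read and write key/value lines over stdin/stdout, and the framework's shuffle would replace the in-process grouping; the strip-partitioning optimization mentioned above would additionally assign contiguous bands of rows to workers to cut cross-worker key traffic.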