
    Improvements in the Generation of Gravito-Inertial Cues in Non-Aerial Vehicle Simulators

    The fundamental goal of virtual reality (VR) is to provide the user with a complete sense of belonging to an alternative virtual environment. To achieve this, any VR application must stimulate, as effectively as possible, all (or as many as possible of) the sensory cues that allow the user to accept that belonging to the alternative virtual world as real. Although there are many sensory cues, and the specific contribution of each of them to the overall perception of immersion and virtual presence is still partially unknown, most VR applications tend to focus on visual and auditory cues, neglecting the rest to some extent. The reasons for this are not accidental: generating non-audiovisual cues is technically complex and often economically costly. Among the most important non-audiovisual cues are gravito-inertial cues, related to the perception of motion and the orientation of the human body. These cues are usually stimulated by building motion platforms on which the simulation user is placed. Such platforms are equipped with actuators that allow them to be displaced and oriented within certain limits. The platform movements are controlled by specific algorithms commonly known as Motion Cueing Algorithms (MCA). Although this type of cue has been studied and included in simulators for more than 50 years, progress in this field has been slower than in other aspects of simulation, such as the generation of audiovisual cues. In fact, how to optimally simulate unlimited motion with a motion generator that must stay within physical limits is still an unsolved problem. There are three main reasons for this. The first is the nature of the problem: it is a complex constrained optimization problem whose solution depends on multiple factors, among them human factors related to motion perception that are difficult to measure (and still partially unknown). The second is the lack of a criterion for comparing different solutions. And the third is the multitude of parameters that must be tuned in the different algorithms in order to make relevant comparative analyses between them: if one compares two of these algorithms, and the effort devoted to tuning the parameters of one is much greater than the effort spent on the other, the comparison will not be meaningful. Moreover, since the first commercial applications of virtual reality were flight simulators, most MCAs have been developed and tuned for aerial vehicles. Although these algorithms are applicable to other types of vehicles, they were not designed for that purpose, so it is interesting to study their application to non-aerial vehicle simulators, a much less studied field. The goal of this thesis is to improve the generation of gravito-inertial cues in non-aerial vehicle simulators.
    To achieve this, instead of proposing new motion cueing algorithms, we will study new ways of objectively evaluating them, so that it can be determined whether one algorithm is more appropriate than another according to a given criterion. This objective evaluation criterion will be based on a prior characterization of the subjective perception of human users in response to gravito-inertial cues generated by MCAs, so that the evaluation criterion correlates with the presence induced in the user by the motion generator. Since evaluating this type of algorithm can be costly in terms of time, money and even human resources, in addition to these criteria we will develop an evaluation procedure based on a motion platform simulator, as a way to speed up and simplify the evaluation and testing of motion cueing algorithms. The simulator will be a real-time graphical application, validated against two real platforms with 3 and 6 degrees of freedom. Finally, the evaluation criterion will be used to tune the parameters of motion cueing algorithms. With an objective criterion available, it will be easier to automate the parameter-tuning process. However, since the parameter space of such an algorithm can become very large, the tuning task becomes a computationally intractable optimization problem, so we will develop and study heuristic search methods to solve it. In addition, before studying the evaluation and parameter assignment of MCAs, we will study how to analyze the gravito-inertial cueing needs of a given type of simulator, and how to make the best possible use of the available platform. We will illustrate this analysis by studying the gravito-inertial needs of a rescue boat simulator and a 3-degrees-of-freedom platform.
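    Most MCAs are built around the classical washout scheme. The following minimal sketch (a standard textbook structure with arbitrary cutoff values, not the algorithms or parameters studied in the thesis) illustrates the idea for a single longitudinal axis: a high-pass branch reproduces acceleration onsets while letting the platform drift back to its neutral pose, and a rate-limited low-pass branch renders sustained accelerations through tilt coordination.

```python
# Minimal classical-washout sketch (illustrative values only, not the thesis' method).
import numpy as np
from scipy.signal import butter, lfilter

G = 9.81          # gravity [m/s^2]
FS = 100.0        # assumed sample rate of the recorded vehicle data [Hz]

def classical_washout(ax, fs=FS, hp_cut=0.8, lp_cut=0.3, tilt_rate_limit=np.deg2rad(3.0)):
    """Return (platform surge acceleration, platform pitch angle) for a 1-DOF example."""
    dt = 1.0 / fs
    # Onset cues: keep only the transient part of the longitudinal acceleration.
    b_hp, a_hp = butter(2, hp_cut / (fs / 2), btype="high")
    surge_acc = lfilter(b_hp, a_hp, ax)
    # Sustained cues: low-pass branch rendered as pitch, so gravity substitutes the acceleration.
    b_lp, a_lp = butter(2, lp_cut / (fs / 2), btype="low")
    sustained = lfilter(b_lp, a_lp, ax)
    pitch_target = np.arcsin(np.clip(sustained / G, -1.0, 1.0))
    # Rate-limit the tilt so the rotation stays slow (3 deg/s is a typical threshold choice).
    pitch = np.zeros_like(pitch_target)
    for i in range(1, len(pitch_target)):
        step = np.clip(pitch_target[i] - pitch[i - 1], -tilt_rate_limit * dt, tilt_rate_limit * dt)
        pitch[i] = pitch[i - 1] + step
    return surge_acc, pitch

if __name__ == "__main__":
    t = np.arange(0, 10, 1 / FS)
    ax = np.where((t > 2) & (t < 6), 2.0, 0.0)   # a 4 s, 2 m/s^2 acceleration plateau
    surge, pitch = classical_washout(ax)
    print(f"max surge cue: {surge.max():.2f} m/s^2, final pitch: {np.rad2deg(pitch[-1]):.1f} deg")
```

    The cutoff frequencies, filter orders and tilt-rate limit above are exactly the kind of parameters whose objective tuning the thesis addresses.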

    Optimization of 3-DOF Parallel Motion Devices for Low-Cost Vehicle Simulators

    Motion generation systems are becoming increasingly important in certain Virtual Reality (VR) applications, such as vehicle simulators. This paper deals with the analysis of the Inverse Kinematics (IK) and the reachable workspace of a three-degrees-of-freedom (3-DOF) parallel manipulator, proposing different transformations and optimizations in order to simplify its use with Motion Cueing Algorithms (MCA) for self-motion generation in VR simulators. The proposed analysis and improvements are performed on a 3-DOF heave-pitch-roll manipulator with rotational motors, commonly used for low-cost motion-based commercial simulators. The analysis has been empirically validated against a real 3-DOF parallel manipulator in our labs using an optical tracking system. The described approach can be applied to any kind of 3-DOF parallel manipulator, or even to 6-DOF parallel manipulators. Moreover, the analysis includes objective measures (safe zones) on the workspace volume that can provide a simple but efficient way of comparing the kinematic capabilities of different kinds of motion platforms for this particular application
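    As a rough illustration of the kind of inverse-kinematics computation involved, the sketch below solves a deliberately simplified heave-pitch-roll model in which each platform anchor is assumed to move only vertically and each rotary actuator is reduced to a crank of fixed length; the anchor layout and crank length are hypothetical, and the model ignores the coupler links analyzed in the paper.

```python
# Simplified IK sketch for a heave-pitch-roll platform with three rotary actuators.
import numpy as np

# Anchor positions on the moving platform, in the platform frame [m] (hypothetical layout).
ANCHORS = np.array([
    [ 0.40,  0.00, 0.0],   # front
    [-0.30,  0.35, 0.0],   # rear-left
    [-0.30, -0.35, 0.0],   # rear-right
])
CRANK_LEN = 0.12           # crank arm length [m], hypothetical

def ik_heave_pitch_roll(heave, pitch, roll, crank_len=CRANK_LEN):
    """Return the three crank angles [rad] for a desired heave [m], pitch and roll [rad]."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # Rotation: roll about x, then pitch about y.
    R = np.array([[ cp, sp * sr, sp * cr],
                  [0.0,      cr,     -sr],
                  [-sp, cp * sr, cp * cr]])
    z = (R @ ANCHORS.T).T[:, 2] + heave          # vertical position of each anchor
    ratio = z / crank_len
    if np.any(np.abs(ratio) > 1.0):
        raise ValueError("pose outside the reachable workspace of this simplified model")
    return np.arcsin(ratio)                      # crank angle with respect to the horizontal

if __name__ == "__main__":
    angles = ik_heave_pitch_roll(heave=0.03, pitch=np.deg2rad(5), roll=np.deg2rad(-3))
    print("crank angles [deg]:", np.round(np.rad2deg(angles), 1))
```

    The "safe zone" measures mentioned in the abstract amount to characterizing the set of (heave, pitch, roll) poses for which such an IK stays solvable with margin.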

    A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens

    The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system which is currently installed in a shopping mall in Spain has been tested by different users
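    The key geometric ingredient of calibration on an analytically defined screen is the ability to intersect a projector pixel ray with the screen surface once the intrinsic and extrinsic parameters are known. The sketch below shows this step for a cylindrical screen with hypothetical parameters; it is not the surveying/photogrammetry pipeline of the paper.

```python
# Back-project a projector pixel and intersect it with a cylindrical screen (hypothetical values).
import numpy as np

def pixel_to_cylinder(u, v, K, R, t, radius):
    """3D point on the cylinder x^2 + z^2 = radius^2 (axis along y) hit by pixel (u, v).
    Convention used here: R maps projector-frame directions to the world frame and
    t is the projector center expressed in world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the projector frame
    d = R @ ray_cam                                      # ray direction in the world frame
    o = t                                                # ray origin (projector center)
    # Solve |o_xz + s * d_xz|^2 = radius^2 for the ray parameter s; take the forward hit.
    a = d[0]**2 + d[2]**2
    b = 2.0 * (o[0]*d[0] + o[2]*d[2])
    c = o[0]**2 + o[2]**2 - radius**2
    disc = b*b - 4*a*c
    if disc < 0:
        raise ValueError("pixel ray misses the cylindrical screen")
    s = (-b + np.sqrt(disc)) / (2*a)
    return o + s * d

if __name__ == "__main__":
    K = np.array([[1400.0, 0.0, 960.0], [0.0, 1400.0, 540.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
    R, t = np.eye(3), np.array([0.0, 1.5, 0.0])   # projector on the cylinder axis, 1.5 m high
    p = pixel_to_cylinder(960, 540, K, R, t, radius=3.0)
    print("hit point on screen:", np.round(p, 3))
```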

    A VR-enhanced rollover car simulator and edutainment application for increasing seat belt use awareness

    Most countries have active road safety policies that seek the objective of reducing deaths in traffic accidents. One of the main factors in this regard is the awareness of the safety measures, one of the most important being the correct usage of the seat belt, a device that is known to save thousands of lives every year. The presented work shows a VR-enhanced edutainment application designed to increase awareness on the use of seat belts. For this goal, a motorized rollover system was developed that, synchronized with a VR application (shown in a head-mounted display for each user inside a real car), rolls over this car with up to four passengers inside. This way, users feel the sensations of a real overturn and therefore they realize the consequences and the results of not wearing a seat belt. The system was tested for a month in the context of a road safety exhibition in Dammam, Saudi Arabia, one of the leading countries in car accidents per capita. More than 500 users tested and assessed the usefulness of the system. We measured, before and after the rollover experience, the perception of risk of not using the seat belt. Results show that awareness regarding the use of seat belts increases very significantly after using the presented edutainment tool

    A Case Study on Vestibular Sensations in Driving Simulators

    Motion platforms have been used in simulators of all types for several decades. Since it is impossible to reproduce the unlimited accelerations of a vehicle with a physically limited system (the platform), it is common to use washout filters and motion cueing algorithms (MCA) to select which accelerations are reproduced and which are not. Despite the time that has passed since their development, most of these algorithms are still based on the classical washout algorithm. When these MCAs are used, some information is always lost and, if that information is important for the purpose of the simulator (as in training simulators), the result obtained by the users of that simulator will not be satisfactory. This paper presents a case study in which a BMW 325Xi AUT fitted with a sensor recorded the accelerations produced in all degrees of freedom (DOF) during several runs, and the data were fed into mathematical simulation software (washout + kinematics + actuator simulation) of a 6-DOF motion platform. The input to the system has been qualitatively compared with the output, showing that most of the simulation adequately reflects the input. Still, there are three events where the accelerations are lost. Experts consider these events to be of vital importance for the outcome of a learning process in the simulator to be adequate.
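    A simple way to make such an input/output comparison concrete is to flag windows in which the recorded vehicle acceleration is significant but the simulated platform output reproduces almost none of it. The sketch below is an illustrative check with arbitrary thresholds, not the assessment protocol used in the paper.

```python
# Flag "lost cue" windows by comparing input and output acceleration energy (illustrative only).
import numpy as np

def lost_cue_events(a_in, a_out, fs, win_s=1.0, input_thresh=1.0, kept_ratio=0.25):
    """Return (start, end) times [s] of windows where the platform keeps < kept_ratio of the cue."""
    win = int(win_s * fs)
    events = []
    for start in range(0, len(a_in) - win, win):
        rms_in = np.sqrt(np.mean(a_in[start:start + win] ** 2))
        rms_out = np.sqrt(np.mean(a_out[start:start + win] ** 2))
        if rms_in > input_thresh and rms_out < kept_ratio * rms_in:
            events.append((start / fs, (start + win) / fs))
    return events

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 20, 1 / fs)
    a_in = 2.5 * np.sin(2 * np.pi * 0.2 * t) * (t > 5) * (t < 15)   # sustained low-frequency cue
    a_out = 0.1 * a_in                                              # platform barely reproduces it
    print("lost-cue windows [s]:", lost_cue_events(a_in, a_out, fs)[:3])
```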

    Addressing the Occlusion Problem in Augmented Reality Environments with Phantom Hollow Objects

    Occlusion handling is essential to provide a seamless integration of virtual and real objects in AR applications. Different approaches have been presented with a variety of technologies, environment conditions and methods. Among these methods, 3D model-based occlusion approaches have been extensively used. However, these solutions could be too time-consuming in certain situations, since they must render all the occlusion objects even though they are invisible. For this reason, we propose an inverse 3D model-based solution for handling occlusions, designed for those AR applications in which virtual objects are placed inside a real object with holes or windows. With this restriction, the occlusion problem could be solved by rendering the geometry of transparent/hollow objects instead of rendering the opaque geometry. The method has been tested in a real case study with an augmented car in which the virtual content is shown in the interior of the vehicle. Results show that our method outperforms the traditional method, proving that this approach is an efficient option for solving the occlusion problem in certain AR applications
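    The principle behind model-based occlusion is a depth/visibility pass that contributes no colour: the "phantom" geometry only decides which virtual fragments survive. The CPU-side sketch below illustrates the inverse variant with synthetic masks (virtual content inside the real object is hidden by default and shows only through the rasterized hole/window geometry); the regions and the mask-based formulation are illustrative assumptions, and a real implementation would do this in the GPU pipeline.

```python
# CPU-side sketch of inverse phantom-object occlusion using synthetic masks.
import numpy as np

H, W = 120, 160
camera_frame = np.full((H, W, 3), 80, dtype=np.uint8)       # stand-in for the real video frame

# Virtual content rendered inside the real object (e.g. the car interior), with its own mask.
virtual_color = np.zeros((H, W, 3), dtype=np.uint8)
virtual_mask = np.zeros((H, W), dtype=bool)
virtual_mask[30:90, 40:120] = True
virtual_color[virtual_mask] = (0, 200, 0)

# "Phantom hollow" pass: rasterize only the window/hole geometry of the real object.
hole_mask = np.zeros((H, W), dtype=bool)
hole_mask[45:75, 60:100] = True

# Composite: virtual pixels are shown only where they fall behind a hole/window.
composite = camera_frame.copy()
show = virtual_mask & hole_mask
composite[show] = virtual_color[show]
print(f"virtual pixels drawn: {show.sum()} of {virtual_mask.sum()} rendered")
```

    Because only the hollow parts are rasterized, the cost scales with the (usually small) window geometry rather than with the full opaque model, which is the efficiency argument made in the abstract.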

    An empirical evaluation of two natural hand interaction systems in augmented reality

    Human-computer interaction based on hand gesture tracking is not uncommon in Augmented Reality. In fact, the most recent optical Augmented Reality devices include this type of natural interaction. However, due to hardware and system limitations, these devices, more often than not, settle for semi-natural interaction techniques, which may not always be appropriate for some of the tasks needed in Augmented Reality applications. For this reason, we compare two different optical Augmented Reality setups equipped with hand tracking. The first one is based on a Microsoft HoloLens (released in 2016) and the other one is based on a Magic Leap One (released more than two years later). Both devices offer similar solutions for the visualization and registration problems but differ in the hand tracking approach, since the former uses a metaphoric hand-gesture tracking and the latter relies on an isomorphic approach. We raise seven research questions regarding these two setups, which we answer after performing two task-based experiments using virtual elements, of different sizes, that are moved using natural hand interaction. The questions deal with the accuracy and performance achieved with these setups and also with user preference, recommendation and perceived usefulness. For this purpose, we collect both subjective and objective data about the completion of these tasks. Our initial hypothesis was that there would be differences, in favor of the isomorphic and newer setup, in the use of hand interaction. However, the results surprisingly show that there are very small objective differences between these setups, and the isomorphic approach is not significantly better in terms of accuracy and mistakes, although it allows a faster completion of one of the tasks. In addition, no remarkable statistically significant differences can be found between the two setups in the subjective datasets gathered through a specific questionnaire. We also analyze the opinions of the participants in terms of usefulness, preference and recommendation. The results show that, although the Magic Leap-based system gets more support, the differences are not statistically significant

    Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching

    Robotic-Assisted Surgery (RAS) is beginning to unlock its potential. However, despite the latest advances in RAS, the steep learning curve of RAS devices remains a problem. A common teaching resource in surgery is the use of videos of previous procedures, which in RAS are almost always stereoscopic. It is important to be able to add virtual annotations onto these videos so that certain elements of the surgical process are tracked and highlighted during the teaching session. Including virtual annotations in stereoscopic videos turns them into Mixed Reality (MR) experiences, in which tissues, tools and procedures are better observed. However, an MR-based annotation of objects requires tracking and some kind of depth estimation. For this reason, this paper proposes a real-time hybrid tracking–matching method for performing virtual annotations on RAS videos. The proposed method is hybrid because it combines tracking and stereo matching, avoiding the need to calculate the real depth of the pixels. The method was tested with six different state-of-the-art trackers and assessed with videos of a sigmoidectomy of a sigma neoplasia, performed with a Da Vinci® X surgical system. Objective assessment metrics are proposed, presented and calculated for the different solutions. The results show that the method can successfully annotate RAS videos in real-time. Of all the trackers tested for the presented method, the CSRT (Channel and Spatial Reliability Tracking) tracker seems to be the most reliable and robust in terms of tracking capabilities. In addition, in the absence of an absolute ground truth, an assessment with a domain expert using a novel continuous-rating method with an Oculus Quest 2 Virtual Reality device was performed, showing that the depth perception of the virtual annotations is good, despite the fact that no absolute depth values are calculated
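    The hybrid idea of combining a 2D tracker with stereo matching can be sketched as follows. This is an illustration under assumptions (rectified stereo frames, opencv-contrib installed for the CSRT tracker), not the paper's implementation: the annotated region is tracked in the left view, and template matching along the same rows of the right view yields a disparity that places the annotation consistently in both eyes without computing real depth.

```python
# Hybrid tracking + stereo matching sketch for stereoscopic annotation (illustrative).
import cv2
import numpy as np

def init_tracker(left_frame, bbox):
    """bbox is (x, y, w, h) of the region to annotate in the left view."""
    tracker = cv2.TrackerCSRT_create()        # in some builds: cv2.legacy.TrackerCSRT_create()
    tracker.init(left_frame, bbox)
    return tracker

def annotate_stereo(tracker, left_frame, right_frame, search_margin=120):
    ok, bbox = tracker.update(left_frame)     # track the annotated tissue/tool in the left view
    if not ok:
        return None
    x, y, w, h = map(int, bbox)
    template = left_frame[y:y + h, x:x + w]
    # Rectified stereo: the corresponding patch lies on the same rows, shifted horizontally.
    x0 = max(0, x - search_margin)
    strip = right_frame[y:y + h, x0:x + w]
    res = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    disparity = x - (x0 + max_loc[0])
    # Draw the overlay at x in the left view and at x - disparity in the right view.
    return (x, y, w, h), disparity
```

    In a teaching tool, each stereoscopic frame pair of the recorded procedure would be passed through annotate_stereo, and the returned box and disparity used to render the virtual annotation in both eyes.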

    MIME: A mixed-space collaborative system with three immersion levels and multiple users

    Shared spaces for remote collaboration are nowadays possible by considering a variety of users, devices, immersion systems, interaction capabilities, navigation paradigms, etc. There is a substantial amount of research done in this line, proposing different solutions. However, a more general solution that considers the heterogeneity of the involved actors/items is still lacking. In this paper, we present MIME, a mixed-space tri-collaborative system. Differently from other mixed-space systems, MIME considers three different types of users (in different locations) according to their level of immersion in the system, who can interact simultaneously – what we call a tri-collaboration. For the three types, we provide a solution to navigate, point at objects/locations and make annotations, while users are able to see a virtual representation of the rest of the users. Additionally, the total number of users that can simultaneously interact with the system is only restricted by the available hardware, i.e., several users of the same type can be simultaneously connected to the system. We have conducted a preliminary study at the laboratory level, showing that MIME is a promising tool that can be used in many real cases for different purposes.
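    As a purely hypothetical sketch (not MIME's actual protocol or its naming of the immersion levels), the data structure below illustrates the kind of shared state such a tri-collaborative session needs: every connected client, whatever its immersion level, exchanges the same pose, pointing and annotation messages.

```python
# Hypothetical shared-state schema for a mixed-space collaborative session.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class ImmersionLevel(Enum):
    IMMERSIVE = "hmd"          # fully immersed user
    SEMI_IMMERSIVE = "handheld"  # e.g. handheld AR user
    NON_IMMERSIVE = "desktop"    # desktop user

@dataclass
class ClientUpdate:
    user_id: str
    level: ImmersionLevel
    head_pose: Tuple[float, float, float, float, float, float]             # position + Euler angles
    pointing_ray: Optional[Tuple[float, float, float, float, float, float]] = None
    annotations: List[str] = field(default_factory=list)

# Any number of clients of any type can join; a server would simply relay ClientUpdate messages.
session: List[ClientUpdate] = [
    ClientUpdate("alice", ImmersionLevel.IMMERSIVE, (0, 1.7, 0, 0, 0, 0)),
    ClientUpdate("bob", ImmersionLevel.NON_IMMERSIVE, (2, 1.2, -1, 0, 90, 0)),
]
print([(c.user_id, c.level.value) for c in session])
```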

    Multi-Purpose Ontology-Based Visualization of Spatio-Temporal Data: A Case Study on Silk Heritage

    Due to the increasing use of data analytics, information visualization is becoming more and more important. However, as data get more complex, so does visualization, often leading to ad hoc and cumbersome solutions. A recent alternative is the use of so-called knowledge-assisted visualization tools. In this paper, we present STMaps (Spatio-Temporal Maps), a multipurpose, knowledge-assisted, ontology-based visualization tool for spatio-temporal data. STMaps was originally designed to show, by means of an interactive map, the content of the SILKNOW project, a European research project on silk heritage. It is entirely based on ontology support, as it gets the source data from an ontology and also uses another ontology to define how the data should be visualized. STMaps provides some unique features. First, it is a multi-platform application: it can work embedded in an HTML page and also as a standalone application on several computer architectures. Second, it can be used for multiple purposes by simply changing its configuration files and/or the ontologies on which it works. As STMaps relies on visualizing spatio-temporal data provided by an ontology, the tool could be used to visualize results from any domain (in other cultural and non-cultural contexts), provided that its datasets contain spatio-temporal information. The visualization mechanisms can also be changed by changing the visualization ontology. Third, it provides different solutions for showing spatio-temporal data, and also deals with uncertain and missing information. STMaps has been tested to browse silk-related objects, discovering some interesting relationships between different objects and showing the versatility and power of the different visualization tools proposed in this paper. To the best of our knowledge, this is also the first ontology-based visualization tool applied to silk-related heritage.
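    A minimal sketch of the ontology-driven pattern this kind of tool builds on is shown below (with a hypothetical vocabulary, not the SILKNOW ontologies): the objects to display and their space/time attributes are read from a graph with a SPARQL query, and the result is mapped to markers for an interactive map.

```python
# Query spatio-temporal objects from an in-memory RDF graph and turn them into map markers.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/heritage#")
g = Graph()
for name, lat, lon, year in [("silk_fabric_1", 39.47, -0.38, 1780),
                             ("silk_fabric_2", 45.76, 4.84, 1805)]:
    obj = EX[name]
    g.add((obj, RDF.type, EX.SilkObject))
    g.add((obj, EX.latitude, Literal(lat)))
    g.add((obj, EX.longitude, Literal(lon)))
    g.add((obj, EX.productionYear, Literal(year)))

query = """
PREFIX ex: <http://example.org/heritage#>
SELECT ?obj ?lat ?lon ?year WHERE {
    ?obj a ex:SilkObject ;
         ex:latitude ?lat ; ex:longitude ?lon ; ex:productionYear ?year .
}"""

# Each row becomes a map marker; a separate "visualization ontology" would decide the styling.
markers = [{"id": str(row.obj), "lat": float(row.lat), "lon": float(row.lon), "year": int(row.year)}
           for row in g.query(query)]
print(markers)
```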