
    Driving-Style Assessment from a Motion Sickness Perspective Based on Machine Learning Techniques

    Ride comfort improvement in driving scenarios is gaining traction as a research topic. This work presents a direct methodology that applies data processing techniques and machine learning algorithms to measured car signals in order to identify driver actions that worsen passenger motion sickness. The resulting clustering models identify distinct driving patterns and associate them with the motion sickness levels experienced by the passenger, enabling a comfort-based driving recommendation system that reduces it. The designed and validated methodology shows satisfactory results: models trained on a real dataset identify diverse, interpretable clusters while also shedding light on differences between driving patterns. On this basis, a recommendation system to reduce passenger motion sickness is proposed. This research was funded by the Basque Government; partial support for this work was received from the project KK-2021/00123 Autoeval and the University of the Basque Country UPV/EHU, grant GIU21/007.
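The clustering stage described above can be sketched with standard tools. The following is a minimal illustration, not the authors' pipeline: the feature names, the synthetic data, and the choice of k-means with three clusters are all assumptions made here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for measured car signals: per-trip summary features
# (e.g. mean longitudinal accel, std of lateral accel, jerk RMS) -- hypothetical.
features = rng.normal(size=(200, 3)) * [0.5, 0.3, 0.8] + [0.2, 0.4, 1.0]

# Standardize so each signal contributes comparably to the distance metric.
X = StandardScaler().fit_transform(features)

# Cluster trips into driving-style groups; k = 3 is an arbitrary choice here.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = model.labels_
print(np.bincount(labels))  # trips per driving-style cluster
```

In practice the cluster count and the feature set would be derived from the measured car signals and validated against the motion sickness levels reported by passengers.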

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable telepresence and tactile interaction with virtual artefacts in remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can serve as input to autonomous systems for customised training and automated difficulty levels matched to individual requirements. Self-adaptive technology has previously been developed within individual VR training technologies, but an enhanced portable framework that combines the automation of these core technologies does not yet exist; one conclusion of this research is that such a reusable automation framework for VR training would be beneficial.
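The automated difficulty adjustment driven by trainee assessment can be illustrated with a simple staircase rule. This is a hypothetical sketch rather than a system from the overview; the step sizes and the difficulty range are arbitrary.

```python
def adapt_difficulty(level, passed, step_up=1, step_down=1, lo=1, hi=10):
    """One update of a simple staircase rule for adaptive VR training.

    Raise difficulty after a passed assessment, lower it after a failure,
    and clamp to the designed range. A stand-in for the data-driven
    adaptive content the overview describes -- real systems would use
    richer trainee models.
    """
    level += step_up if passed else -step_down
    return max(lo, min(hi, level))

# A trainee passing three tasks and then failing one.
level = 5
for passed in (True, True, True, False):
    level = adapt_difficulty(level, passed)
print(level)  # 7
```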

    Design and Electronic Implementation of Machine Learning-based Advanced Driving Assistance Systems

    This thesis aims to contribute to the development and refinement of advanced driver-assistance systems (ADAS). To this end, drawing on databases of real-world driving, it explores the possibilities of personalizing existing ADAS using machine learning techniques such as neural networks and neuro-fuzzy systems. Characteristic parameters of each driver's style are obtained, which support automated personalization of the ADAS fitted to the vehicle, such as adaptive cruise control. In addition, based on those same driving-style parameters, new ADAS are proposed that advise drivers on modifying their driving style, with the goal of improving fuel consumption and greenhouse gas emissions as well as ride comfort. Moreover, since this personalization aims to have automated systems imitate, to some extent and always within safe parameters, the human driver's style, it is expected to increase acceptance of these systems, encouraging their use and thus contributing positively to safety, energy efficiency, and ride comfort. Finally, these systems must run on a platform suitable for deployment in the car, so HW/SW implementation options on FPGA-type reconfigurable devices are explored. HW/SW solutions are developed that implement the ADAS proposed in this work with a high degree of accuracy and performance, in real time.
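The personalization idea described in the abstract can be illustrated with a toy example: estimating a driver's preferred adaptive-cruise-control time gap from logged car-following data. The function name, the 5 m/s low-speed cutoff, and the safety bounds below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def personalized_time_gap(gap_m, speed_ms, t_min=1.0, t_max=2.5):
    """Estimate a driver's preferred ACC time gap from logged following data.

    gap_m: distances to the lead vehicle (m); speed_ms: own speeds (m/s).
    The clip to [t_min, t_max] keeps the personalized setting within safe
    bounds, in the spirit of the thesis's "always within safe parameters".
    """
    v = np.asarray(speed_ms, dtype=float)
    g = np.asarray(gap_m, dtype=float)
    mask = v > 5.0                      # ignore near-standstill samples
    t_pref = np.median(g[mask] / v[mask])
    return float(np.clip(t_pref, t_min, t_max))

# Example: a driver who tends to keep roughly 1.4 s gaps at 25 m/s.
speeds = np.full(100, 25.0)
gaps = 25.0 * 1.4 + np.random.default_rng(1).normal(0, 2.0, 100)
print(personalized_time_gap(gaps, speeds))
```

The estimated gap could then be written into the ACC configuration, so the automated system mimics the driver's own style within safe limits.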

    Challenges in passenger use of mixed reality headsets in cars and other transportation

    This paper examines key challenges in supporting passenger use of augmented and virtual reality headsets in transit. These headsets will allow passengers to break free from the restraints of physical displays placed in constrained environments such as cars, trains and planes. Moreover, they have the potential to allow passengers to make better use of their time by making travel more productive and enjoyable, supporting both privacy and immersion. However, there are significant barriers to headset usage by passengers in transit contexts. These barriers range from impediments that would entirely prevent safe usage and function (e.g. motion sickness) to those that might impair their adoption (e.g. social acceptability). We identify the key challenges that need to be overcome and discuss the necessary resolutions and research required to facilitate adoption and realize the potential advantages of using mixed reality headsets in transit.

    Virtual Reality Based Simulation Testbed for Evaluation of Autonomous Vehicle Behavior Algorithms

    Validation of autonomous vehicle behavior algorithms requires thorough testing against a wide range of test scenarios. It is not financially or practically feasible to conduct these tests entirely in a real-world setting. We discuss the design and implementation of a VR-based simulation testbed that allows such testing to be conducted virtually, linking a computer-generated environment to the system running the autonomous vehicle's decision-making algorithms and operating in real time. We illustrate the system by further discussing the design and implementation of an application that builds upon the VR simulation testbed to visually evaluate the performance of an Advanced Driver Assistance System (ADAS), namely a Cooperative Adaptive Cruise Control (CACC) controller, against an actor using vehicular navigation data from real traffic within a virtual 3D environment of Clemson University's campus. With this application, our goal is to give the user the spatial awareness and immersion of physically being inside a test car within a realistic traffic scenario in a safe, inexpensive, and repeatable manner in virtual reality. Finally, we evaluate the performance of our simulator application and conduct a user study to assess its usability.
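The CACC controller evaluated in the testbed can be sketched with a textbook constant-time-gap law. The gains, limits, and spacing policy below are illustrative assumptions, not the controller used in the paper.

```python
import numpy as np

def cacc_step(gap, v_ego, v_lead, a_lead,
              t_gap=0.6, standstill=5.0, kp=0.45, kd=0.25, kff=1.0):
    """Acceleration command of a constant-time-gap CACC law (toy gains).

    Spacing error is the current gap minus the desired gap; the relative
    speed term damps the approach; the feed-forward term uses the lead
    vehicle's acceleration, assumed received over V2V communication.
    """
    desired_gap = standstill + t_gap * v_ego
    e = gap - desired_gap
    a_cmd = kp * e + kd * (v_lead - v_ego) + kff * a_lead
    return float(np.clip(a_cmd, -3.0, 2.0))   # comfort/actuator limits

# Simulate a follower closing in on a steady lead vehicle (Euler, dt = 0.1 s).
gap, v_ego, v_lead = 40.0, 20.0, 22.0
for _ in range(600):                           # 60 s
    a = cacc_step(gap, v_ego, v_lead, a_lead=0.0)
    v_ego += a * 0.1
    gap += (v_lead - v_ego) * 0.1
print(round(gap, 1), round(v_ego, 1))          # settles near 5 + 0.6*22 = 18.2 m
```

The feed-forward term on the lead vehicle's acceleration is what distinguishes CACC from plain adaptive cruise control, which must infer lead behavior from range measurements alone.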

    Designing passenger experiences for in-car Mixed Reality

    In day-to-day life, people spend a considerable amount of their time on the road. People seek to invest travel time in work and well-being through interaction with mobile and multimedia applications on personal devices such as smartphones and tablets. However, for new computing paradigms such as mobile mixed reality (MR), usefulness in this everyday transport context, i.e. in-car MR, remains challenging. When future passengers immerse themselves in three-dimensional virtual environments, they become increasingly disconnected from the cabin space, the vehicle's motion, and other people around them. This degraded awareness of the real environment endangers the passenger experience on the road, which motivates this thesis to ask: can immersive technology become useful in the everyday transport context, such as in-car scenarios? If so, how should we design in-car MR technology to foster passenger access and connectedness to both the physical and virtual worlds while ensuring ride safety, comfort, and joy? To this aim, this thesis contributes via three aspects: 1) Understanding passenger use of in-car MR —first, I present a model for in-car MR interaction derived through user research. As interviews with daily commuters reveal, passengers are concerned about their physical integrity when facing spatial conflicts between borderless virtual environments and the confined cabin space. From this, the model aims to help researchers organize information spatially and understand how user interfaces vary with proximity to the user. Additionally, a field experiment reveals contextual feedback about motion sickness when using immersive technology on the road. This helps refine the model and inform the following experiments. 2) Mixing realities in car rides —second, this thesis explores a series of prototypes and experiments to examine how in-car MR technology can enable passengers to feel present in virtual environments while maintaining awareness of the real environment.
The results demonstrate technical solutions for physical integrity and situational awareness by incorporating essential elements of the real environment into virtual reality. The empirical evidence adds a set of dimensions to the in-car MR model, guiding design decisions about mixing realities. 3) Transcending the transport context —third, I extend the model to other everyday contexts beyond transport that share spatial and social constraints, such as the confined and shared living space at home. A literature review consolidates the use of everyday physical objects as haptic feedback for MR interaction across spatial scales. A laboratory experiment shows how context-aware MR systems that consider physical configurations can support social interaction with copresent others in close shared spaces. These results substantiate the scalability of the in-car MR model to other contexts. Finally, I conclude with a holistic model for mobile MR interaction across everyday contexts, from home to on the road. With my user research, prototypes, empirical evaluation, and model, this thesis paves the way for understanding future passenger use of immersive technology, addressing today's technical limitations of MR in mobile interaction, and ultimately fostering mobile users' ubiquitous access and close connectedness to MR anytime and anywhere in their daily lives.

    Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis

    In the past five years, the advent of virtual reality devices has significantly influenced research on immersion in virtual worlds. In addition to visual input, motion cues play a vital role in the sense of presence and the level of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator, a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must produce accurate motion data that matches the visual and audio signals. In this research, two system workflows have been developed: the first creates custom visual, audio, and motion cues, while the second extracts the required motion data from an existing game or simulation. Motion data from the motion generation system are unbounded, whereas the motion simulator's movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues can be achieved through a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm has been developed using models of the semicircular canals and the otoliths, and a procedural implementation of the algorithm is described in this thesis. We have integrated all components to turn this robotic mechanism into a VR motion simulator. In general, the performance of a motion simulator is measured by the quality of the motion perceived on the platform by the user. Accordingly, a novel methodology for systematic subjective evaluation of the SP7 with a panel of jurors was developed to check the quality of motion perception. Based on the results of the evaluation, key issues related to the current configuration of the SP7 were identified.
Minor issues were rectified on the fly, so they are not extensively reported in this thesis. Two major issues have been addressed extensively: parameter tuning of the motion cueing algorithm, and motion compensation of the visual signal in virtual reality devices. The first was resolved by developing a tuning strategy with an abstraction-layer concept derived from a novel technique for objective assessment of the motion cueing algorithm. The second was traced to a calibration problem in the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position-tracking performance of the Vive lighthouse tracking system against an industrial serial robot serving as a ground-truth system. With these issues resolved, a general-purpose virtual reality motion simulator has been developed that can create custom visual, audio, and motion cues and execute motion planning for a robotic manipulator under human motion perception constraints.
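The washout behaviour at the heart of a classical motion cueing algorithm can be sketched with a first-order high-pass filter: transient onset accelerations are rendered on the platform, while sustained components wash out so the simulator drifts back toward its neutral pose within its limited workspace. The time constant and test signal below are illustrative, not SP7 parameters.

```python
import numpy as np

def highpass_washout(accel, dt=0.01, tau=2.0):
    """First-order high-pass 'washout' of a translational acceleration signal.

    alpha close to 1 means slow washout; the filter passes changes in the
    input and exponentially forgets sustained levels with time constant tau.
    """
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel)
    for i in range(1, len(accel)):
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out

# A sustained 1 m/s^2 push from t = 0.5 s to t = 3 s: the filtered cue
# spikes at onset, then decays toward zero while the push continues.
t = np.arange(0, 5, 0.01)
a = np.where((t >= 0.5) & (t < 3.0), 1.0, 0.0)
cue = highpass_washout(a, dt=0.01, tau=2.0)
print(round(float(cue[50]), 3), round(float(cue[250]), 3))
```

In a full classical algorithm this channel is combined with tilt coordination, which uses slow platform tilt to render sustained accelerations below the vestibular perception threshold.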

    On-road virtual reality autonomous vehicle (VRAV) simulator : An empirical study on user experience

    Autonomous-vehicle (AV) technologies are rapidly advancing, but a great deal remains to be learned about how AVs are perceived and interacted with on public roads. Research in this area usually relies on AV trials using naturalistic driving, which are expensive and face various legal and ethical obstacles designed to keep the general public safe. The emerging concept of Wizard-of-Oz simulation is a promising solution to this problem, wherein the driver of a standard vehicle is hidden from the passenger by a physical partition, providing the illusion of riding in an AV. Furthermore, head-mounted-display (HMD) virtual reality (VR) has been proposed as a means of providing a Wizard-of-Oz protocol for on-road simulations of AVs. Such systems have the potential to support a variety of study conditions at low cost, enabling simulation of a variety of vehicles, driving conditions, and circumstances. However, the feasibility of such systems has yet to be shown. This study uses a within-subjects factorial design to examine and evaluate a virtual reality autonomous vehicle (VRAV) system, with the aim of better understanding the differences between stationary and on-road simulations, both with and without HMD VR. More specifically, it examines the effects of these conditions on user experience, including presence, arousal, simulator sickness, and task workload. Participants reported a realistic and immersive driving experience in their subjective evaluations of the VRAV system, indicating that the system is a promising tool for human-automation interaction research and future AV technology development.
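The within-subjects factorial design can be illustrated with hypothetical data: every participant experiences all four combinations of motion (stationary vs. on-road) and display (conventional screen vs. HMD VR), and condition means are then compared. The ratings and column names below are invented for illustration, not data from the study.

```python
import pandas as pd

# Hypothetical presence ratings from a 2x2 within-subjects design like the
# VRAV study: each participant contributes one rating per condition.
data = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "motion":  ["stationary", "stationary", "on-road", "on-road"] * 2,
    "display": ["monitor", "hmd"] * 4,
    "presence": [3.0, 4.5, 3.5, 5.0, 2.5, 4.0, 3.0, 4.5],
})

# Because every participant sees every condition, between-person variance
# cancels out when comparing condition means.
cell_means = data.groupby(["motion", "display"])["presence"].mean()
print(cell_means)
```

A real analysis would follow this with a repeated-measures test rather than raw means, but the grouping shows the structure of the design.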

    Virtual reality web application for automotive data visualization

    Nowadays, the automotive industry is one of the largest and most continuously developing economic sectors in the world. Countless companies need autonomous-driving data to learn from and test against when creating new motor vehicles. Autonomous driving will play a big role in the future, and we can help continue its constant growth with data visualization applications that display the results of automotive experiments. Currently, there is an application at Altran to visualize autonomous driving data online using WebGL. However, the showcasing of the elements and the 3D sequence is confusing and not as comprehensible as it would be with virtual reality. This happens because the elements are displayed as lines according to the distance between the objects and the ground, instead of with their real shapes. Since the data infrastructure can contain an abundant number of elements, such as point clouds and trajectories, this data can be visualized more effectively using virtual reality techniques. This will also complement the current large variety of tools concerning automotive demonstration at Altran, contributing to related public presentations in the company. To strengthen this 3D data visualization, this project aims to create a VR application that displays autonomous driving data. The compatibility between WebGL and WebXR will be tested, and if this proves successful, the current structure of the application mentioned above will be reused. Besides showing the necessary automotive data, this new virtual reality structure should maintain the functionalities of the WebGL application and add extra features, such as supporting different types of point clouds and loading data from multiple sources. The usability and performance of the virtual reality devices will also be evaluated, along with a detailed comparison between both applications.