
    Novel synthetic environment to design and validate future onboard interfaces for self-driving vehicles

    [EN] This thesis presents a novel synthetic environment for supporting advanced explorations of user interfaces and interaction modalities for future transport systems. The main goal of the work is the definition of novel interface solutions designed to increase trust in self-driving vehicles. The basic idea is to provide passengers with insights into the information available to the Artificial Intelligence (AI) modules on board the car, including the driving behaviour of the vehicle and its decision making. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, they have very low degrees of realism with regard to the digital components of the various traffic scenarios, including the visuals of the driving simulator and the behaviours of both other vehicles on the road and pedestrians. High visual testbed fidelity is an important prerequisite for supporting the design and evaluation of future on-board interfaces. An innovative experimental testbed based on the hyper-realistic video game GTA V has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected user studies, presenting novel self-driving interfaces and associated user-experience results, is described. These studies explore the capability of inducing trust in autonomous vehicles through Heads-Up Displays (HUDs), Augmented Reality (AR), and directional audio solutions. The work includes three core phases: development of software for the testbed, definition of relevant interfaces and experiments, and focused testing with panels comprising different user demographics.
Specific investigations focus on the design and exploration of a set of alternative visual feedback mechanisms (adopting AR visualizations) to convey information about the surrounding environment and the AI's decision making. Their performance is assessed with real users with respect to their capability to foster trust in the vehicle and the understandability of the provided signals. Moreover, additional accessory studies explore different designs for triggering driving handover, i.e. the transfer of vehicle control from the AI to a human driver, which is a central problem in current embodiments of self-driving vehicles. The specific experimental testbed comprises: GTA V as the test environment, owing to its complex scenarios and hyper-realistic graphics; a steering wheel and pedals for active driving; DeepGTA as the self-driving framework; and Tobii Eye Tracking as the input device for capturing user intentions.
Mateu Gisbert, C. (2018). Novel synthetic environment to design and validate future onboard interfaces for self-driving vehicles. http://hdl.handle.net/10251/112327

    AoA-aware Probabilistic Indoor Location Fingerprinting using Channel State Information

    With the expeditious development of wireless communications, location fingerprinting (LF) has nurtured considerable indoor location-based services (ILBSs) in the field of the Internet of Things (IoT). Most pattern-matching-based LF solutions either appeal to the simple received signal strength (RSS), which suffers from dramatic performance degradation due to sophisticated environmental dynamics, or rely on the fine-grained physical-layer channel state information (CSI), whose intricate structure leads to increased computational complexity. Meanwhile, the harsh indoor environment can also breed similar radio signatures among certain predefined reference points (RPs), which may be randomly distributed in the area of interest, thus severely impairing location-mapping accuracy. To resolve these dilemmas, during the offline site survey we first adopt the autoregressive (AR) modeling entropy of the CSI amplitude as the location fingerprint, which shares the structural simplicity of RSS while retaining the most location-specific statistical channel information. Moreover, an additional angle-of-arrival (AoA) fingerprint can be accurately retrieved from the CSI phase through an enhanced subspace-based algorithm, which serves to further eliminate error-prone RP candidates. In the online phase, by exploiting both CSI amplitude and phase information, a novel bivariate kernel regression scheme is proposed to precisely infer the target's location. Results from extensive indoor experiments validate the superior localization performance of the proposed system over previous approaches.
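The online-phase idea above can be sketched with a plain Nadaraya-Watson kernel regression: each reference point's coordinates are weighted by a Gaussian kernel on the fingerprint-space distance to the live measurement. This is a minimal illustration, not the paper's bivariate scheme; all names, fingerprint values, and the bandwidth are our own toy assumptions.

```python
import numpy as np

# Hypothetical offline radio map: one fingerprint vector per reference point
# (RP), paired with that RP's 2-D coordinates. Values are illustrative only,
# e.g. (AR-modeling entropy of CSI amplitude, an AoA-derived feature).
rp_coords = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
rp_fingerprints = np.array([
    [0.82, 0.10], [0.35, 0.71], [0.55, 0.40], [0.15, 0.90],
])

def kernel_regression_locate(query, bandwidth=0.15):
    """Nadaraya-Watson estimate: weight each RP's coordinates by a Gaussian
    kernel on the squared fingerprint distance to the online measurement."""
    d2 = np.sum((rp_fingerprints - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w[:, None] * rp_coords).sum(axis=0) / w.sum()

# A query fingerprint close to RP0's signature lands near RP0's coordinates.
est = kernel_regression_locate(np.array([0.80, 0.12]))
```

The bandwidth trades smoothness against sensitivity to ambiguous (similar) radio signatures; the paper's AoA fingerprint serves precisely to prune such error-prone RP candidates before this weighting step.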

    Safe, Remote-Access Swarm Robotics Research on the Robotarium

    This paper describes the development of the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-agent research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which the Robotarium remedies by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and connects these to the particular considerations one must take when making complex hardware remotely accessible. In particular, safety must be built in already at the design phase without overly constraining which coordinated control programs the users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees. Comment: 13 pages, 7 figures, 3 code samples, 72 references.
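A "minimally invasive" safety routine can be illustrated with a toy filter in the spirit of barrier certificates: user-commanded velocities pass through unchanged unless a robot pair is inside a safety radius and closing, in which case only the closing component along the line between them is cancelled. This is our own simplified sketch, not the Robotarium's actual implementation or guarantees.

```python
import numpy as np

def safety_filter(pos, vel_cmd, safety_radius=0.2):
    """Leave each commanded velocity untouched unless a pair of robots is
    within safety_radius and approaching; then cancel only the closing
    component, splitting the correction symmetrically between the pair."""
    vel = vel_cmd.copy()
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            diff = pos[i] - pos[j]                    # vector from j to i
            dist = np.linalg.norm(diff)
            if dist < safety_radius:
                u = diff / dist                       # unit separation axis
                closing = np.dot(vel[j] - vel[i], u)  # > 0: pair approaching
                if closing > 0:
                    vel[i] += 0.5 * closing * u       # correct i away from j
                    vel[j] -= 0.5 * closing * u       # correct j away from i
    return vel

# Two robots on a head-on collision course are stopped; a distant third
# robot keeps its user command untouched (the "minimally invasive" property).
pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
cmd = np.array([[1.0, 0.0], [-1.0, 0.0], [0.3, 0.0]])
safe = safety_filter(pos, cmd)
```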

    ARK: augmented reality kiosk

    This paper presents a very first prototype of an Augmented Reality (AR) system that has been developed in recent months at our research group. The prototype adopts a kiosk format and allows users to directly interact with an AR environment using a conventional data glove. The most relevant feature of this environment is the use of a common monitor to display AR images, instead of employing specific Head-Mounted Displays. By integrating a half-silvered mirror and a black virtual hand, our solution solves the occlusion problem that normally occurs when a user interacts with a virtual environment displayed by a monitor or other projection system.

    ARK multi-user

    This paper presents a monitor-based prototype for the visualisation of, and interaction with, an Augmented Reality (AR) system, which was recently developed at CCG and demonstrated during the SIACG2002 conference held in Guimarães, Portugal. ARK - the Augmented Reality Kiosk - is a set-up based on the prototypes developed in the European Virtual Showcases project, to which direct interaction has been added. A normal monitor and a half-silvered mirror constitute the usual set-up for the kiosk. By integrating a half-silvered mirror and a black virtual hand, the CCG solution solves the occlusion problem that normally occurs when a user interacts with a virtual environment displayed by a monitor or other projection system. Conceived with limited monetary resources, this portable solution can be deployed in different application contexts, for instance cultural heritage. This paper presents an extension of the solution to a multi-user platform for a Portuguese museum.
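The "black virtual hand" trick can be illustrated with a toy compositing step (array names and sizes are ours, not from the ARK implementation): monitor graphics are reflected by the half-silvered mirror over the real scene, and pixels rendered black contribute no reflected light, so the real hand behind the mirror stays visible wherever the tracked hand mask is painted black.

```python
import numpy as np

# Toy monitor frame: uniform AR graphics, with the tracked hand region
# rendered black so that, after reflection in the half-silvered mirror,
# the real hand occludes the virtual content at those pixels.
H, W = 4, 4
virtual_layer = np.full((H, W, 3), 200, dtype=np.uint8)  # AR graphics
hand_mask = np.zeros((H, W), dtype=bool)
hand_mask[1:3, 1:3] = True                               # tracked hand region

monitor_image = virtual_layer.copy()
monitor_image[hand_mask] = 0  # black hand: no reflected light at these pixels
```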

    Visually-Guided Manipulation Techniques for Robotic Autonomous Underwater Panel Interventions

    The long-term goal of this ongoing research is to increase the autonomy levels for underwater intervention missions. Bearing in mind that the specific mission to be faced is an intervention on a panel, this paper presents results at different development stages using the real mechatronics and the panel mockup. Furthermore, details are given of two methodologies implemented for the required visually-guided manipulation algorithms, together with a roadmap explaining the different testbeds used for experimental validation, presented in order of increasing complexity. It is worth mentioning that the aforementioned results would be impossible without the previously generated know-how for both the complete mechatronics developed for the autonomous underwater intervention vehicle and the required 3D simulation tool. In summary, thanks to the implemented approach, the intervention system is able to control the way in which the gripper approaches and manipulates the two panel devices (i.e. a valve and a connector) in an autonomous manner, and results in different scenarios demonstrate the reliability and feasibility of this autonomous intervention system in water-tank and pool conditions. This work was partly supported by Spanish Ministry of Research and Innovation DPI2011-27977-C03 (TRITON Project) and DPI2014-57746-C3 (MERBOTS Project), by Foundation Caixa Castelló-Bancaixa and Universitat Jaume I grant PID2010-12, by Universitat Jaume I PhD grants PREDOC/2012/47 and PREDOC/2013/46, and by Generalitat Valenciana PhD grant ACIF/2014/298. We would also like to acknowledge the support of our partners in the Spanish coordinated projects TRITON and MERBOTS: Universitat de les Illes Balears, UIB (subprojects VISUAL2 and SUPERION) and Universitat de Girona, UdG (subprojects COMAROB and ARCHROV).
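One common way to realise visually-guided manipulation of this kind is image-based visual servoing (IBVS); the paper's own two methodologies are not detailed in the abstract, so the following is a generic, hedged sketch. The camera velocity follows the proportional law v = -gain * pinv(L) @ e, driving the image-feature error e to zero through an interaction matrix L that is assumed known or estimated.

```python
import numpy as np

def ibvs_step(features, desired, L, gain=0.5):
    """One step of image-based visual servoing: map the image-space feature
    error through the pseudo-inverse of the interaction matrix L to a
    commanded camera velocity (proportional control)."""
    e = features - desired                  # image-space feature error
    return -gain * np.linalg.pinv(L) @ e    # commanded camera velocity

# Toy example: two point features stacked as (u1, v1, u2, v2), with an
# identity interaction matrix, so the command is just -gain * error.
L = np.eye(4)
v = ibvs_step(np.array([1.0, 0.0, 0.0, 1.0]),
              np.array([0.0, 0.0, 0.0, 0.0]), L)
```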

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, and the background for estimation of the camera pose, necessary for calibrating the sensor system. Experiments suggest a significant gain of coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
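The underlying deflectometry principle can be sketched with a toy geometry of our own (not the paper's CAVE calibration): at a surface point, the unit vector toward the camera and the unit vector toward the decoded screen pixel make equal angles with the specular surface normal, so the normal is their normalised bisector.

```python
import numpy as np

def specular_normal(p, camera, screen_pt):
    """Recover the mirror-surface normal at point p from the law of
    reflection: the normal bisects the directions toward the camera and
    toward the decoded pattern point on the screen."""
    to_cam = (camera - p) / np.linalg.norm(camera - p)
    to_scr = (screen_pt - p) / np.linalg.norm(screen_pt - p)
    n = to_cam + to_scr          # unnormalised bisector
    return n / np.linalg.norm(n)

# Camera and screen point placed symmetrically about the z-axis: the
# recovered normal is the z-axis itself.
n = specular_normal(np.array([0.0, 0.0, 0.0]),
                    np.array([1.0, 0.0, 1.0]),
                    np.array([-1.0, 0.0, 1.0]))
```

In the actual system, the decoded screen point comes from the optical encoding scheme, and the camera pose from the background-based calibration described above.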

    Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments, to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and the Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensation for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.