11 research outputs found

    SWAG demo : smart watch assisted gesture interaction for mixed reality head-mounted displays

    In this demonstration, we show a prototype system that uses a sensor fusion approach to robustly track the six degrees of freedom of hand movement and to support intuitive hand gesture interaction and 3D object manipulation for Mixed Reality head-mounted displays. Robust tracking of the hand and fingers with an egocentric camera remains a challenging problem, especially under self-occlusion, for example when the user tries to grab a virtual object in midair by closing the palm. Our approach leverages a common smartwatch worn on the wrist to provide more reliable palm and wrist orientation data, fusing it with the camera data to achieve robust hand motion and orientation tracking for interaction.
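    To make the fusion idea concrete, here is a minimal Python sketch that blends a camera-derived palm position with the previous estimate according to the camera's confidence, while taking orientation from the watch IMU, which is unaffected by visual self-occlusion. All names, the confidence weighting, and the blending constant are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def fuse_hand_pose(cam_pos, cam_conf, watch_quat, prev_pos, alpha=0.9):
        """Fuse one frame of hand-tracking data (illustrative sketch).

        cam_pos    -- palm position from the egocentric camera (3-vector)
        cam_conf   -- tracker confidence in [0, 1]; drops under self-occlusion
        watch_quat -- wrist orientation quaternion from the smartwatch IMU
        prev_pos   -- previous fused position estimate (3-vector)
        """
        w = alpha * cam_conf                # trust the camera only when confident
        pos = w * np.asarray(cam_pos) + (1.0 - w) * np.asarray(prev_pos)
        return pos, watch_quat              # 6-DoF pose: position + orientation

    # Example: heavily occluded frame -- the estimate stays close to prev_pos.
    pos, quat = fuse_hand_pose([0.30, 0.10, 0.45], 0.2,
                               [1.0, 0.0, 0.0, 0.0], [0.28, 0.12, 0.44])
    ```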

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking for augmented reality and virtual reality have attracted strong research interest thanks to advances in smartphone technology. By interacting with 3D objects in augmented and virtual reality, users gain a better understanding of the subject matter, provided that the required customized hardware is available and the overall experimental performance is satisfactory. This research surveys current vision-based 3D gestural architectures for augmented and virtual reality. Its core goal is to analyze the methods and frameworks, followed by the experimental performance, for recognizing and tracking hand gestures and interacting with virtual objects on smartphones. Experimental evaluation of existing methods is categorized into three areas: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical use of 3D gesture tracking in augmented and virtual reality. Hardware setup covers types of gloves, fingerprint sensing, and types of sensors. Documentation covers classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. This comprehensive overview of methods, frameworks, and experimental aspects can contribute significantly to 3D gesture recognition and tracking for augmented and virtual reality.

    Enhancing tele-operation - Investigating the effect of sensory feedback on performance

    The decline in the number of healthcare service providers relative to the growing number of service users prompts the development of technologies to improve the efficiency of healthcare services. One such technology is assistive robots, remotely tele-operated to provide care and support for older adults with assistive care needs and for people living with disabilities. Tele-operation makes it possible to provide human-in-the-loop robotic assistance while also addressing safety concerns about the use of autonomous robots around humans. Unlike many other applications of robot tele-operation, safety is particularly significant here because tele-operated assistive robots are used in close proximity to vulnerable human users. It is therefore important to provide tele-operators with as much information about the robot (and the robot workspace) as possible to ensure safety as well as efficiency. Since robot tele-operation is relatively unexplored in the context of assisted living, this thesis explores different feedback modalities that may be employed to communicate sensor information to tele-operators. The thesis presents the research as it transitioned from identifying and evaluating additional feedback modalities that may supplement video feedback to exploring different strategies for communicating those modalities. Because some of the sensors and feedback needed were not readily available, several design iterations were carried out to develop the necessary hardware and software for the studies. The first human study investigated the effect of feedback on tele-operator performance, measured in terms of task completion time, ease of use of the system, number of robot joint movements, and success or failure of the task. The effect of verbal feedback between the tele-operator and service users was also investigated. Feedback modalities have differing effects on performance metrics, so the choice of optimal feedback may vary from task to task. Results show that participants preferred scenarios with verbal feedback over scenarios without it, which was also reflected in their performance. Gaze metrics from the study further showed that it may be possible to understand how tele-operators interact with the system based on their areas of interest as they carry out tasks. These findings suggest that such studies can be used to improve the design of tele-operation systems. The need for social interaction between the tele-operator and the service user implies that the visual and auditory modalities will already be engaged as tasks are carried out, which further reduces the number of sensory modalities available for communicating information to tele-operators. A wrist-worn, Wi-Fi-enabled haptic feedback device was therefore developed, and a study was carried out to investigate haptic sensitivities across the wrist. Results suggest that different locations on the wrist have varying sensitivities to haptic stimulation with and without video distraction, to the duration of stimulation, and to varying amplitudes of stimulation. This suggests that dynamic control of haptic feedback can improve haptic perception across the wrist, and that it may be possible to display more than one type of sensor data to tele-operators during a task.
The final study investigated whether participants can differentiate between different types of sensor data conveyed through different locations on the wrist via haptic feedback. The effect of an increased number of attempts on performance was also investigated. Total task completion time decreased with task repetition. Participants with prior gaming and robot experience showed a more significant reduction in total task completion time than participants without such experience. The reduction in task completion time was noticed at all stages of the task, though it varied across stages, and participants with additional feedback had higher task completion times than participants without supplementary feedback. Even though gripper trajectories shortened with task repetition, participants with supplementary feedback had longer gripper trajectories than participants without it, while participants with prior gaming experience had shorter gripper trajectories than those without. Perceived workload was also found to decrease with task repetition, but participants with feedback reported higher perceived workload than participants without feedback. However, participants without feedback reported higher frustration than participants with feedback. Results show that the effect of feedback may not be significant where participants can obtain the necessary information from video feedback; however, participants depended fully on the supplementary feedback when video feedback could not provide the required information. The findings presented in this thesis have potential applications in healthcare and in other applications of robot tele-operation and feedback. They can be used to improve feedback designs for tele-operation systems to ensure safe and efficient tele-operation. The thesis also shows ways visual feedback can be combined with other feedback modalities. The haptic feedback device designed in this research may also be used to provide situational awareness for the visually impaired.
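    As an illustration of that last point, the following sketch routes two sensor channels to different wrist locations with amplitude- and duration-scaled pulses. The channel names, actuator locations, and value ranges are hypothetical, not the firmware described in the thesis.

    ```python
    # Hypothetical routing of robot sensor channels to vibrotactile actuators
    # at different wrist locations (not the thesis's actual device firmware).
    ACTUATOR_FOR_CHANNEL = {
        "gripper_force": "dorsal",   # top of the wrist
        "proximity": "ventral",      # underside of the wrist
    }

    def haptic_command(channel, value, v_min, v_max):
        """Scale a sensor reading to a pulse for the wrist location reserved
        for that channel; stronger readings pulse harder and longer."""
        level = (value - v_min) / (v_max - v_min)
        level = min(max(level, 0.0), 1.0)          # clamp to the valid range
        return {
            "actuator": ACTUATOR_FOR_CHANNEL[channel],
            "amplitude": level,                    # normalized drive strength
            "duration_ms": int(80 + 120 * level),  # 80-200 ms pulse
        }

    print(haptic_command("gripper_force", 7.5, 0.0, 10.0))
    # -> {'actuator': 'dorsal', 'amplitude': 0.75, 'duration_ms': 170}
    ```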

    Enabling the Development and Implementation of Digital Twins : Proceedings of the 20th International Conference on Construction Applications of Virtual Reality

    Welcome to the 20th International Conference on Construction Applications of Virtual Reality (CONVR 2020). This year we are meeting online due to the current coronavirus pandemic. The overarching theme for CONVR 2020 is "Enabling the development and implementation of Digital Twins". CONVR is one of the world-leading conferences in the areas of virtual reality, augmented reality, and building information modelling. Each year, more than 100 participants from around the globe meet to discuss and exchange the latest developments and applications of virtual technologies in the architecture, engineering, construction, and operation (AECO) industry. The conference is also known for its unique blend of participants from both academia and industry. This year, with all the difficulties of replicating a real face-to-face meeting, we have carefully planned the conference to ensure that all participants have a perfect experience. We have a group of leading keynote speakers from industry and academia who cover current hot topics and are enthusiastic and keen to share their knowledge with you. CONVR participants are very loyal to the conference and have attended most of its previous editions. This year we also welcome numerous first-timers, and we aim to help them make the most of the conference by introducing them to other participants.

    A proposal to improve wearables development time and performance : software and hardware approaches.

    Programa de Pós-Graduação em Ciência da Computação. Departamento de Ciência da Computação, Instituto de Ciências Exatas e Biológicas, Universidade Federal de Ouro Preto. Wearable devices are a trending topic in both commercial and academic areas. Increasing demand for innovation has raised the number of research projects and products, addressing brand-new challenges and creating profitable opportunities. Current wearable devices can be employed to solve problems in a wide variety of areas. Such coverage generates a relevant number of requirements and variables that influence solution performance. It is common to build specific wearable versions to fit each target application niche, which requires time and resources. The related literature does not currently present ways to treat the hardware and software generically enough to allow both parts to be reused. This manuscript proposes two components, one focused on hardware and one on software, that allow different parts of a wearable solution to be reused. A platform for wearables development is outlined as a viable way to recycle an existing organization and architecture. The platform's use was demonstrated through the creation of a wearable device suited to different contexts in the mining industry. On the software side, a development and customization tool for application-specific operating systems is demonstrated. This tool aims not only to reuse standard software components but also to improve performance. A real prototype was designed and built to validate the concepts. In the results, a comparison between the operating system generated by the tool and a conventional operating system quantifies the improvement rate: the generated operating system showed approximate performance gains of 100% in processing tasks, 150% in memory consumption and I/O operations, and a reduction of roughly 20% in energy consumption. The performance analysis allows us to infer that the proposals presented here contribute to the area, easing the development and reuse of wearable solutions as a whole.
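    For clarity on how such improvement rates are typically computed, a small sketch, under the assumption that the reported percentages compare a benchmark cost on the conventional OS against the generated OS (the numbers are illustrative only):

    ```python
    def improvement_rate(generated, conventional):
        """Percentage gain of the generated OS over the conventional one for a
        cost metric (runtime, memory, energy) where lower values are better."""
        return (conventional - generated) / generated * 100.0

    # A task taking 2.0 s on the conventional OS and 1.0 s on the generated
    # one corresponds to the ~100% processing gain reported above.
    print(improvement_rate(1.0, 2.0))  # -> 100.0
    ```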

    Understanding and designing for control in camera operation

    Cinematographers have traditionally used supportive tools to craft desired camera moves. Recent technological advances have added new tools to the palette, such as gimbals, drones, and camera robots. The combination of motor-driven actuation, computer vision, and machine learning in such systems has also made new interaction techniques possible. In particular, a content-based interaction style was introduced in addition to the established axis-based style. On the one hand, content-based co-creation between humans and automated systems made it easier to reach high-level goals; on the other hand, the increased use of automation also introduced negative side effects. Creatives usually want to feel in control while executing the camera motion and, in the end, to feel like the authors of the recorded shots. While automation can assist experts and enable novices, it also takes away desired control from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: How should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control? Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less-researched, domain-specific challenges. To suit the needs of cinematographers, designs need to be crafted with care; in particular, they must adapt to the constraints of recording on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to the understanding of the motives of cinematographers and how they operate on set, and provides a user-centered foundation informing cinematography-specific research and design. The contribution of this thesis is threefold. First, we present ethnographic studies of expert users and their shooting practices on location. These studies highlight the challenges of introducing automation to a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data on explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation, and we examined different content-based interaction styles. Occlusion due to graphical elements was identified and addressed by exploring visual reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.
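    To illustrate the distinction between the two interaction styles named above, a minimal sketch: axis-based control maps operator input directly to pan/tilt rates, while content-based control derives those rates from where a chosen subject sits in the frame. The proportional controller and all parameters are illustrative assumptions, not the toolkit's code.

    ```python
    def axis_based(pan_input, tilt_input, gain=0.5):
        """Axis-based style: the operator drives pan/tilt velocities directly."""
        return gain * pan_input, gain * tilt_input

    def content_based(target_px, frame_px, fov_deg=(60.0, 40.0), gain=1.5):
        """Content-based style: steer pan/tilt so a tracked subject stays centred.

        target_px -- (x, y) pixel position of the subject in the image
        frame_px  -- (width, height) of the image in pixels
        """
        err_x = target_px[0] / frame_px[0] - 0.5   # -0.5 .. 0.5, 0 = centred
        err_y = target_px[1] / frame_px[1] - 0.5
        return gain * err_x * fov_deg[0], gain * err_y * fov_deg[1]  # deg/s

    # Subject right of centre -> positive pan rate toward it.
    print(content_based((1200, 540), (1920, 1080)))  # -> (11.25, 0.0)
    ```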

    Quantified vehicles: data, services, ecosystems

    Advancing digitalization has shown the potential of so-called Quantified Vehicles to gather valuable sensor data about the vehicle itself and its environment. Consequently, vehicle data has become an important resource for the automotive industry, as it paves the way to data-driven services. The data-driven service ecosystem of actors that collaborate to ultimately generate such services has only taken shape in recent years. This cumulative dissertation summarizes the author's contributions and includes a synopsis as well as 14 peer-reviewed publications, which together help answer the three research questions.
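    As a toy illustration of the data-to-service chain described above, a sketch with a hypothetical reading schema and a simple data-driven service; none of this reflects the dissertation's actual models:

    ```python
    from dataclasses import dataclass

    @dataclass
    class VehicleReading:
        """One quantified-vehicle sample: data about the vehicle and its
        environment (fields are hypothetical, not the dissertation's schema)."""
        vehicle_id: str
        timestamp: float      # Unix seconds
        speed_kmh: float
        outside_temp_c: float

    def icy_road_warning(readings, temp_threshold_c=1.0, quorum=0.5):
        """Toy data-driven service: warn when most vehicles on a road segment
        report near-freezing outside temperatures."""
        if not readings:
            return False
        cold = sum(1 for r in readings if r.outside_temp_c <= temp_threshold_c)
        return cold / len(readings) >= quorum

    segment = [VehicleReading("v1", 1.7e9, 80.0, 0.5),
               VehicleReading("v2", 1.7e9, 75.0, 2.5),
               VehicleReading("v3", 1.7e9, 60.0, -0.2)]
    print(icy_road_warning(segment))  # -> True (2 of 3 below threshold)
    ```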

    XLIII Jornadas de Automática: libro de actas: 7, 8 y 9 de septiembre de 2022, Logroño (La Rioja)

    [Abstract] The Jornadas de Automática (JA) are the most important event of the Comité Español de Automática (CEA), a scientific-technical body more than fifty years old devoted to the dissemination and adoption of automatic control in society. This year marks the forty-third edition of the JA, which serve as the meeting point of the automatic control community in our country. This edition will give visibility to the new challenges and results of the field and to their use in a large number of applications, including renewable energy, bioengineering, and assistive robotics. Beyond the scientific component, which is reflected in this book of proceedings, the JA are a meeting point for different generations of professors, researchers, and professionals, with a social component of vital importance. The 2022 edition of the JA is held in Logroño, the capital of La Rioja, a region known worldwide for the quality of its Denominación de Origen wines, which has taken on the challenge of gaining competitiveness through green and digital transformation. It is also the cradle of the Spanish language and promotes the Valle de la Lengua with the help of new technologies, among them intelligent automation. The organizers of these JA, members of the Systems Engineering and Automatic Control area of the Department of Electrical Engineering of the Universidad de La Rioja (UR), are a fundamental pillar of support for the region in studying, implementing, and disseminating these challenges. This edition, the first fully in-person one since the COVID-19 pandemic, has more than 200 attendees and is held between the Edificio Politécnico of the Escuela Técnica Superior de Ingeniería Industrial and the Monasterio de Yuso in San Millán de la Cogolla, two exceptional venues for the JA. As part of the scientific program, two plenary sessions will focus, respectively, on control solutions for the new energy challenges and on data quality for unbiased and trustworthy artificial intelligence (AI). Two round tables will discuss applications of AI and the adoption of digital technology in professional practice. In addition, two master classes on state-of-the-art technology will be given by industry professionals. The JA will also host two competitions: CEABOT, with humanoid robots, and the Control Engineering Competition, focused on UAVs. To all these activities must be added the meetings of the CEA thematic groups, the poster exhibitions of the communications submitted to the JA, and the company exhibitors. Finally, the event will see the presentation of the Premio Nacional de Automática (2022 edition) and the first edition of the Premio CEA al Talento Femenino en Automática, sponsored by the Government of La Rioja, as well as various awards within the activities of the CEA thematic groups. The proceedings of the XLIII Jornadas de Automática comprise a total of 143 communications, organized around the nine thematic groups and the two strategic lines of CEA. The selected papers have undergone a peer-review process.