8 research outputs found

    Interaction avec un picoprojecteur : État de l'art et analyse des attentes des utilisateurs

    4 pages. National audience. A picoprojector is a hand-held video projector of reduced dimensions. A smartphone that integrates such a device is also called a picophone. Still barely widespread, this new mobile interactive device has been on the market for two years. This article surveys the research currently being conducted on the subject, then presents the expectations of a panel of 50 potential users.

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.
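    The haptic navigation idea described in the abstract, feedback that leaves the walker free while roughly on course and grows as they stray, can be sketched minimally as below. This is an illustrative sketch only: the function names, the dead-zone (`slack_deg`), and the linear intensity mapping are assumptions, not the thesis's actual design.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def haptic_intensity(user_heading_deg, goal_bearing_deg, slack_deg=30.0):
    """Map heading error onto a vibration intensity in [0, 1].

    Within +/- slack_deg of the goal bearing the device stays silent,
    supporting free exploration; beyond that, intensity grows linearly
    with the error, gently reassuring the walker of the way to the goal.
    """
    error = abs((goal_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0)
    if error <= slack_deg:
        return 0.0
    return min(1.0, (error - slack_deg) / (180.0 - slack_deg))
```

    Widening `slack_deg` trades guidance precision for autonomy, which is one way to realise the "variations in this feedback" the abstract mentions.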

    Intermediated reality

    Real-time solutions to reducing the gap between virtual and physical worlds for photorealistic interactive Augmented Reality (AR) are presented. First, a method of texture deformation with image inpainting provides a proof of concept to convincingly re-animate fixed physical objects through digital displays with seamless visual appearance. This, in combination with novel methods for image-based retargeting of real shadows to deformed virtual poses and environment illumination estimation using inconspicuous flat Fresnel lenses, brings real-world props to life in compelling, practical ways. Live AR animation capability provides the key basis for interactive facial performance capture driven deformation of real-world physical facial props. Therefore, Intermediated Reality (IR) is enabled: a tele-present AR framework that drives mediated communication and collaboration for multiple users through the remote possession of toys brought to life. This IR framework provides the foundation of prototype applications in physical avatar chat communication, stop-motion animation movie production, and immersive video games. Specifically, a new approach to reduce the number of physical configurations needed for a stop-motion animation movie by generating the in-between frames digitally in AR is demonstrated. The AR-generated frames preserve the props' natural appearance and achieve smooth transitions between real-world keyframes and digitally generated in-betweens. Finally, the methods integrate across the entire Reality-Virtuality Continuum to target new game experiences called Multi-Reality games. This gaming experience makes an evolutionary step toward the convergence of real and virtual game characters for visceral digital experiences.
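    The in-betweening step the abstract describes can be illustrated with a minimal sketch. Note this is not the thesis's actual pipeline (which deforms textures and retargets shadows on photographs of real props); it only shows the core idea of interpolating between two physically posed keyframes, here reduced to scalar joint angles, where a real system would interpolate full 3D transforms (e.g. quaternion slerp).

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def inbetween_poses(key_a, key_b, n_inbetweens):
    """Digitally generate in-between poses for two stop-motion keyframes.

    key_a, key_b: dicts mapping joint name -> angle in degrees, one per
    physically configured keyframe. Returns n_inbetweens interpolated
    poses, evenly spaced between (and excluding) the two keyframes.
    """
    frames = []
    for i in range(1, n_inbetweens + 1):
        t = i / (n_inbetweens + 1)
        frames.append({joint: lerp(key_a[joint], key_b[joint], t) for joint in key_a})
    return frames
```

    With two in-betweens per keyframe pair, the animator configures the physical prop only a third as often, which is the reduction in physical configurations the abstract claims.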

    Design for Child-Robot Play: The Implications of Design Research within the Field of Human-Robot Interaction Studies for Children

    This thesis investigates the intersections of three disciplines: Design Research, Human-Robot Interaction studies, and Child Studies. In particular, this doctoral research is focused on two research questions, namely: what is (or might be) the role of design research in HRI? And how can acceptable and desirable child-robot play applications be designed? The first chapter introduces an overview of the mutual interest between robotics and design that is at the basis of the research. On the one hand, the interest of design toward robotics is documented through some exemplary projects from artists and designers that speculate on the human-robot coexistence condition. Vice versa, the interest of robotics toward design is documented by referring to some tracks of robotics conferences, scientific workshops and robotics journals which focused on the design-robotics relationship. Finally, a brief description of the background conditions that characterized this doctoral research is introduced, such as the fact of being research funded by a company. The second chapter provides an overview of the state of the art at the intersections of the three disciplines. First, a definition of Design Research is provided, together with its main trends and open issues. Then, the review focuses on the contribution of Design Research to the HRI field, which can be summed up in actions focused on three aspects: artefacts, stakeholders, and contexts. This is followed by a focus on the role of Design Research within the context of children's studies, in which it is possible to identify two main design-child relationships: design as a method for developing children's learning experiences; and children as part of the design process for developing novel interactive systems. The third chapter introduces the Research through Design (RtD) approach and its relevance in conducting design research in HRI. The proposed methodology, based on this approach, is particularly characterized by the presence of design explorations as study methods. These, in turn, are developed through a common project methodology, also reported in this chapter. The fourth chapter is dedicated to the analysis of the scenario in which the child-robot interaction takes place. This was aimed at understanding what edutainment robotics for children is, its common features, how it relates to existing children's play types, and where the interaction takes place. The chapter also provides a focus on the relationship between children and technology on a more general level, through which two themes and related design opportunities were identified: physically active play and objects-to-think-with. These were respectively addressed in the two design explorations presented in this thesis: Phygital Play and Shybo. The Phygital Play project consists of an exploration of natural interaction modalities with robots, through mixed reality, for fostering children's active behaviours. To this end, a game platform was developed for allowing children to play with or against a robot, through body movement. Shybo, instead, is a low-anthropomorphic robot for playful learning activities with children that can be carried out in educational contexts. The robot, which reacts to properties of the physical environment, is designed to support different kinds of experiences. The eighth chapter is then dedicated to the research outcomes, which were defined through a process of reflection. The contribution of the research was analysed and documented by focusing on three main levels, namely: artefact, knowledge and theory. The artefact level corresponds to the situated implementations developed through the projects. The knowledge level consists of a set of actionable principles that emerged from the results and lessons learned from the projects. At the theory level, a theoretical framework was proposed with the aim of informing the future design of child-robot play applications. The last chapter provides a final overview of the doctoral research, a series of limitations regarding the research, its process and its outcomes, and some indications for future research.

    Freeform 3D interactions in everyday environments

    PhD thesis. Personal computing is continuously moving away from traditional input using mouse and keyboard, as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal to further reduce the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output as well as seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.

    Augmented reality at the workplace: a context-aware assistive system using in-situ projection

    Augmented Reality has been used for providing assistance during manual assembly tasks for more than 20 years. Thanks to recent improvements in sensor technology, it has become possible to create context-aware Augmented Reality systems that can detect interaction accurately. Additionally, the increasing number of variants of assembled products, and the ability to manufacture ordered products on demand, lead to an increasing complexity of assembly tasks at industrial assembly workplaces. The resulting need for cognitive support at workplaces, and the availability of robust technology, enable us to address real problems by using context-aware Augmented Reality to support workers during assembly tasks. In this thesis, we explore how assistive technology can be used for cognitively supporting workers in manufacturing scenarios. By following a user-centered design process, we identify key requirements for assistive systems, both for continuously supporting workers and for teaching assembly steps to workers. In doing so, we analyzed three different user groups: inexperienced workers, experienced workers, and workers with cognitive impairments. Based on the identified requirements, we design a general concept for providing cognitive assistance at workplaces which can be applied to multiple scenarios. To apply the proposed concept, we present four prototypes using a combination of in-situ projection and cameras to provide feedback to workers and to sense the workers' interaction with the workplace. Two of the prototypes address a manual assembly scenario and two prototypes address an order picking scenario. For the manual assembly scenario, we apply the concept to a single workplace and to an assembly cell, which connects three single assembly workplaces to each other. For the order picking scenario, we present a cart-mounted prototype using in-situ projection to display picking information directly in the warehouse. Further, we present a user-mounted prototype, exploring the design dimension of equipping the worker with technology rather than equipping the environment. Besides the system contribution of this thesis, we explore the benefits of the created prototypes through studies with inexperienced workers, experienced workers, and cognitively impaired workers. We show that a contour visualization of in-situ feedback is the most suitable for cognitively impaired workers. Further, these contour instructions enable cognitively impaired workers to perform assembly tasks with a complexity of up to 96 work steps. For inexperienced workers, we show that a combination of haptic and visual error feedback is appropriate to communicate errors that were made during assembly tasks. For creating interactive instructions, we introduce and evaluate a Programming by Demonstration approach. Investigating the long-term use of in-situ instructions at manual assembly workplaces, we show that instructions which adapt to the workers' cognitive needs are beneficial, as continuously presenting instructions has a negative impact on the performance of both experienced and inexperienced workers. In the order picking scenario, we show that the cart-mounted in-situ instructions have great potential, as they outperform the paper baseline. Finally, the user-mounted prototype results in a lower perceived cognitive load. Over the course of the studies, we recognized the need for a standardized way of evaluating Augmented Reality instructions. To address this issue, we propose the General Assembly Task Model, which provides two standardized baseline tasks and a noise-free way of evaluating Augmented Reality instructions for assembly tasks. Further, based on the experience we gained from applying our assistive system in real-world assembly scenarios, we identify eight guidelines for designing assistive systems for the workplace.
    In conclusion, this thesis provides a basis for understanding how in-situ projection can be used to provide cognitive support at workplaces. It identifies the strengths and weaknesses of in-situ projection for cognitive assistance regarding different user groups. The findings of this thesis thereby contribute to the field of using Augmented Reality at the workplace. Overall, this thesis shows that using Augmented Reality for cognitively supporting workers during manual assembly tasks and order picking tasks creates a benefit for the workers when working on cognitively demanding tasks.