
    An Accessible Approach to Exploring Space through Augmented Reality

    Physically engaging with space is often difficult for people who struggle with mobility. Elderly people and people with disabilities in particular may find it challenging to walk for long periods of time over varied terrain in order to explore their environment. This project is designed to provide an alternative way to physically engage with spaces without requiring the user to walk, focusing specifically on the accessibility of Bard’s campus. My project involves a map of the college that users can tour in an augmented reality environment. Through the use of a projector-camera system, this program projects a map and tracks objects placed on that map, presenting information about the space based on each object’s location. Users are meant to collaboratively trace the map and label buildings as they explore them. Finally, users highlight their favorite locations with colored markers and take a screenshot of the completed map. The colors used are associated with different subjective experiences of the campus and are projected back onto the table in the final step of the project. This experience is meant to operate as an alternative to traditional physical tours while maintaining the communal experience that Bard tours provide.
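    The core interaction loop described above, tracking a tangible object on a projected campus map and reporting information about the space beneath it, reduces to a point-in-region lookup once the camera has located the object. The following is a minimal sketch, not the project's actual code; the building names and coordinates are invented for illustration:

```python
# Minimal sketch of the map-lookup step of a projector-camera system:
# the camera reports an object's (x, y) position in map coordinates,
# and the program looks up which labeled building region contains it.
# Region names and coordinates are hypothetical.

MAP_REGIONS = {
    "Library": (100, 50, 200, 150),        # (x_min, y_min, x_max, y_max)
    "Student Center": (250, 80, 380, 200),
}

def describe_location(x, y, regions=MAP_REGIONS):
    """Return the name of the building whose region contains (x, y)."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # object is on an unlabeled part of the map

print(describe_location(150, 100))  # a point inside the Library region
```

    In a full system this lookup would run once per camera frame, after mapping camera pixels into the projected map's coordinate frame.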

    Augmented reality supported order picking using projected user interfaces

    Order picking is one of the most important tasks in modern warehouses. Since most of the work is still done manually, new methods to improve the efficiency of the task are being researched. While the currently most widely used approaches, Pick-by-Paper and Pick-by-Light, are either prone to error or scalable only at high cost, other methods are being considered. These include Pick-by-Vision systems based on Augmented Reality, although such systems mostly rely on head-mounted displays. In order to evaluate a new method, we developed OrderPickAR, which uses an order picking cart together with projected user interfaces. OrderPickAR is part of the motionEAP project of the University of Stuttgart and relies on in-situ projection and motion recognition to guide the user and present feedback. The intuitive feedback provided by the in-situ projection and the motion recognition gives OrderPickAR the chance to effectively eliminate errors while lowering task completion time. With the use of a mobile workstation, we also address the scalability of OrderPickAR. Since development alone is not sufficient, we also conducted a study in which we compared OrderPickAR to currently used approaches. In addition, we included a Pick-by-Vision approach developed in a related project by Sebastian Pickl. We analysed and compared different error types as well as task completion times.
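    The error analysis described in the study compares what the worker actually picked against what the order required. A hedged sketch of that comparison follows; the bin identifiers and event format are assumptions for illustration, not OrderPickAR's actual data model:

```python
# Compare a worker's detected pick events against the order's expected
# bins and count wrong-bin picks -- the kind of error a motion-sensing,
# projection-based system is meant to catch in situ.
# Bin IDs and the event format are invented for this example.

def count_pick_errors(expected_bins, picked_bins):
    """Pair each pick with the expected bin in order; mismatches are errors."""
    errors = 0
    for expected, picked in zip(expected_bins, picked_bins):
        if expected != picked:
            errors += 1
    # missing or extra picks also count as errors
    errors += abs(len(expected_bins) - len(picked_bins))
    return errors

order = ["A3", "B1", "C7"]
picks = ["A3", "B2", "C7"]          # one wrong-bin pick
print(count_pick_errors(order, picks))  # 1
```

    An in-situ system can run this check per pick rather than per order, projecting a warning onto the wrong bin the moment the error happens.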

    Exploring the design space of programming by demonstration with context aware assistive systems

    Due to recent developments in head-mounted displays and projection systems, the rising capabilities of Augmented Reality, and the availability of the Kinect for depth images, assistive systems have begun to advance in industrial contexts. We propose a system that not only assists industrial workers in their everyday tasks by providing instructions and giving feedback about the executed tasks, but also provides the capability to record, and therefore create, instructions by demonstration. This tackles a key problem of current assistive systems: the complexity of the editors used for creating these instructions. We conducted a study in which experts in the field of manual assembly created instructions under different conditions. Furthermore, we verified these instructions with 51 industrial workers who completed assembly tasks guided by them. Our results indicate that interactive instructions created through Programming by Demonstration are equal to existing approaches. Additional qualitative feedback showed that instructions created through Programming by Demonstration are generally well perceived.
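    The Programming-by-Demonstration idea, where an expert performs the assembly once while the system records it into a replayable instruction sequence, can be sketched as follows. The (action, location) event shape is an assumption for illustration, not the thesis's actual data model:

```python
# Sketch of Programming by Demonstration for assembly instructions:
# record the expert's demonstrated steps (e.g. from motion recognition),
# collapse immediate repeats caused by sensor jitter, and emit an
# ordered instruction list. The event shape is hypothetical.

def record_demonstration(events):
    """Turn a stream of detected work-step events into instructions."""
    instructions = []
    for action, location in events:
        step = f"{action} part at {location}"
        if not instructions or instructions[-1] != step:  # drop repeats
            instructions.append(step)
    return instructions

demo = [("pick", "bin 2"), ("pick", "bin 2"), ("place", "slot A")]
for i, step in enumerate(record_demonstration(demo), start=1):
    print(f"Step {i}: {step}")
```

    The recorded list can then be played back step by step to a novice worker, which is what replaces the complex instruction editor.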

    User-defined interaction using everyday objects for augmented reality first-person action games.

    This thesis covers research into the use of everyday objects as props in first-person augmented reality action games. It aims to answer one main research question and two sub-questions:

    ‱ RQ: Do more commonly chosen everyday objects provide a more immersive experience when used as props in a first-person augmented reality action game?
    – SQ1: Can a consensus be reached on what types of everyday objects are used as props in a first-person augmented reality action game?
    – SQ2: How can everyday objects be used as props in a first-person augmented reality action game?

    An elicitation study was performed to investigate these questions. Participants were offered a range of everyday objects from which they could select a prop to control a virtual sword, a shield, and then a crossbow. Each participant completed a short game task with each virtual object using their selection, filled in a questionnaire to measure their immersion, and completed a short interview after all tasks were done. For the main research question, the results indicate the answer is no: more commonly chosen everyday objects do not necessarily provide a more immersive experience when used as props, since no significant differences were found between the immersion scores for the consensus objects and the remaining objects. For SQ1, the answer is yes: a consensus can be reached for some, but not all, virtual objects. The sword showed medium agreement, with a shoehorn as the most popular choice; the shield showed high agreement, with a pot lid as the most popular choice; and no consensus was found for the crossbow. For SQ2, the qualitative results indicated that everyday objects can serve as props by affording intuitive uses that mimic how players expect the corresponding virtual objects to be used and activated.
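    The abstract does not say which consensus metric the thesis used, but a common choice for elicitation studies is the agreement rate, which is the proportion of participant pairs that made the same choice for a given referent. A self-contained sketch, with invented participant choices:

```python
# Agreement rate for one referent in an elicitation study: the fraction
# of participant pairs that chose the same object. 1.0 means everyone
# agreed; 0.0 means no two participants agreed.
# The example choices below are invented for illustration.

def agreement_rate(choices):
    """choices: list of the objects each participant selected."""
    n = len(choices)
    if n < 2:
        return 1.0
    counts = {}
    for c in choices:
        counts[c] = counts.get(c, 0) + 1
    agreeing_pairs = sum(k * (k - 1) // 2 for k in counts.values())
    return agreeing_pairs / (n * (n - 1) // 2)

# e.g. 10 participants choosing a prop for the virtual shield:
shield_choices = ["pot lid"] * 7 + ["tray", "book", "plate"]
print(round(agreement_rate(shield_choices), 3))
```

    Thresholds like "medium" and "high" agreement are then defined over this score.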

    Augmented reality at the workplace : a context-aware assistive system using in-situ projection

    Augmented Reality has been used for providing assistance during manual assembly tasks for more than 20 years. Due to recent improvements in sensor technology, creating context-aware Augmented Reality systems, which can detect interaction accurately, becomes possible. Additionally, the increasing amount of variants of assembled products and being able to manufacture ordered products on demand, leads to an increasing complexity for assembly tasks at industrial assembly workplaces. The resulting need for cognitive support at workplaces and the availability of robust technology enables us to address real problems by using context-aware Augmented Reality to support workers during assembly tasks. In this thesis, we explore how assistive technology can be used for cognitively supporting workers in manufacturing scenarios. By following a user-centered design process, we identify key requirements for assistive systems for both continuously supporting workers and teaching assembly steps to workers. Thereby, we analyzed three different user groups: inexperienced workers, experienced workers, and workers with cognitive impairments. Based on the identified requirements, we design a general concept for providing cognitive assistance at workplaces which can be applied to multiple scenarios. For applying the proposed concept, we present four prototypes using a combination of in-situ projection and cameras for providing feedback to workers and to sense the workers' interaction with the workplace. Two of the prototypes address a manual assembly scenario and two prototypes address an order picking scenario. For the manual assembly scenario, we apply the concept to a single workplace and an assembly cell, which connects three single assembly workplaces to each other. For the order picking scenario, we present a cart-mounted prototype using in-situ projection to display picking information directly onto the warehouse. 
Further, we present a user-mounted prototype, exploring the design dimension of equipping the worker with technology rather than the environment. Besides the system contribution of this thesis, we explore the benefits of the created prototypes through studies with inexperienced workers, experienced workers, and cognitively impaired workers. We show that a contour visualization of in-situ feedback is the most suitable for cognitively impaired workers. Further, these contour instructions enable cognitively impaired workers to perform assembly tasks with a complexity of up to 96 work steps. For inexperienced workers, we show that a combination of haptic and visual error feedback is appropriate for communicating errors made during assembly tasks. For creating interactive instructions, we introduce and evaluate a Programming by Demonstration approach. Investigating the long-term use of in-situ instructions at manual assembly workplaces, we show that instructions that adapt to the workers' cognitive needs are beneficial, as continuously presenting instructions has a negative impact on the performance of both experienced and inexperienced workers. In the order picking scenario, we show that the cart-mounted in-situ instructions have great potential, as they outperform the paper baseline. Finally, the user-mounted prototype results in a lower perceived cognitive load. Over the course of the studies, we recognized the need for a standardized way of evaluating Augmented Reality instructions. To address this issue, we propose the General Assembly Task Model, which provides two standardized baseline tasks and a noise-free way of evaluating Augmented Reality instructions for assembly tasks. Further, based on the experience we gained from applying our assistive system in real-world assembly scenarios, we identify eight guidelines for designing assistive systems for the workplace.
In conclusion, this thesis provides a basis for understanding how in-situ projection can be used to provide cognitive support at workplaces. It identifies the strengths and weaknesses of in-situ projection for cognitive assistance with regard to different user groups. The findings of this thesis thereby contribute to the field of using Augmented Reality at the workplace. Overall, this thesis shows that using Augmented Reality to cognitively support workers during manual assembly and order picking tasks benefits workers performing cognitively demanding tasks.
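    The context-aware loop this thesis describes, sensing the worker's action, checking it against the current work step, and projecting feedback, can be sketched as a small state machine. Step names and feedback strings here are invented for illustration, not the thesis's actual implementation:

```python
# Sketch of a context-aware assembly check: the system tracks the
# current work step, compares each sensed action against it, advances
# on success, and signals an error otherwise (e.g. a projected
# highlight). Step and action names are hypothetical.

class AssemblyMonitor:
    def __init__(self, steps):
        self.steps = steps
        self.current = 0

    def observe(self, action):
        """Return feedback for one sensed worker action."""
        if self.current >= len(self.steps):
            return "done"
        if action == self.steps[self.current]:
            self.current += 1
            return "ok"      # e.g. project a green contour on the next bin
        return "error"       # e.g. project a red highlight on the workpiece

monitor = AssemblyMonitor(["pick screw", "place bracket", "fasten screw"])
print([monitor.observe(a) for a in
       ["pick screw", "fasten screw", "place bracket", "fasten screw"]])
```

    Adaptive instructions fit naturally into this loop: the system can stop projecting the "ok" guidance once a worker reliably performs a step, which is the behavior the long-term study argues for.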

    Human and Artificial Intelligence

    Although tremendous advances have been made in recent years, many real-world problems still cannot be solved by machines alone. Hence, integration between Human Intelligence and Artificial Intelligence is needed. However, several challenges make this integration complex. The aim of this Special Issue was to provide a large and varied collection of high-level contributions presenting novel approaches and solutions to address the above issues. This Special Issue contains 14 papers (13 research papers and 1 review paper) that deal with various topics related to human–machine interaction and cooperation. Most of these works concern different aspects of recommender systems, which are among the most widespread decision support systems. The domains covered range from healthcare to movies and from biometrics to cultural heritage. There are also contributions on voice assistants and smart interactive technologies. In summary, each paper included in this Special Issue represents a step towards a future of human–machine interaction and cooperation. We hope the readers enjoy these articles and find inspiration for their own research.

    Toolkit support for interactive projected displays

    Interactive projected displays are an emerging class of computer interface with the potential to transform interactions with surfaces in physical environments. They distinguish themselves from other visual output technologies, for instance LCD screens, by overlaying content onto the physical world. They can appear, disappear, and reconfigure themselves to suit a range of application scenarios, physical settings, and user needs. These properties have attracted significant academic research interest, yet the surrounding technical challenges and the lack of application developer tools limit adoption to those with advanced technical skills. These barriers prevent people with different expertise from engaging, iteratively evaluating deployments, and thus building a strong community understanding of the technology in context. We argue that creating and deploying interactive projected displays should take hours, not weeks. This thesis addresses these difficulties through the construction of a toolkit that effectively facilitates user innovation with interactive projected displays. The toolkit's design is informed by a review of related work and a series of in-depth research probes that study different application scenarios. These findings result in toolkit requirements that are then integrated into a cohesive design and implementation. This implementation is evaluated to determine its strengths, limitations, and effectiveness at facilitating the development of applied interactive projected displays. The toolkit is released to support users in the real world and its adoption studied. The findings describe a range of real application scenarios and case studies, and increase academic understanding of applied interactive projected display toolkits. By significantly lowering the complexity, time, and skills required to develop and deploy interactive projected displays, a diverse community of over 2,000 individual users have applied the toolkit to their own projects. Widespread adoption beyond the computer-science academic community will continue to stimulate an exciting new wave of interactive projected display applications that transfer computing functionality into physical spaces.
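    A recurring geometric core of projected-display toolkits is mapping content coordinates onto an arbitrarily placed physical surface, typically via a planar homography. A minimal pure-Python sketch follows; it implies nothing about this toolkit's actual API, and the matrix values are illustrative:

```python
# Apply a 3x3 planar homography to map a point from display (content)
# coordinates into projector coordinates -- the step that lets a toolkit
# place a rectangular interface onto an arbitrary flat surface.
# The matrix here is an illustrative scale-and-translate mapping.

def apply_homography(H, x, y):
    """Map (x, y) through homography H (row-major 3x3 nested lists)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w   # perspective divide

# scale content by 2 and shift it 100 px right in the projector image
H = [[2.0, 0.0, 100.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 10, 20))  # (120.0, 40.0)
```

    In practice the matrix is estimated from four or more corresponding points between the content rectangle and its observed position on the surface, e.g. via a calibration routine.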