7 research outputs found

    Lessons from digital puppetry - Updating a design framework for a perceptual user interface

    Get PDF
    While digital puppeteering is largely used just to augment full-body motion capture in digital production, its technology and traditional concepts could inform a more naturalized multi-modal human-computer interaction than is currently achieved with new perceptual systems such as Kinect. Emerging immersive social media networks, with their fully live virtual or augmented environments and largely inexperienced users, would benefit the most from this strategy. This paper defines digital puppeteering as it is currently understood and summarizes its broad shortcomings based on expert evaluation. Building on this evaluation, it suggests updates and experiments that apply current perceptual technology and concepts from cognitive processing to the existing human-computer interaction taxonomy. This updated framework may be more intuitive and better suited to developing extensions to an emerging perceptual user interface for the general public.

    An Evaluation of the Efficacy of a Perceptually Controlled Immersive Environment for Learning Acupuncture

    Get PDF
    This paper presents a basic but functional Perceptual User Interface (PUI) controlled immersive environment (IE) on an electronic learning platform (e-Learning) in order to deliver educational material relating to the NADA (National Acupuncture Detoxification Association) protocol for acupuncture. The purpose of this study is to set out a proposed process for evaluating the learning efficacy of the PUI IE e-Learning application when compared with a typical Graphical User Interface (GUI) e-Learning IE application. Both are to be compared with a more traditional learning method. This paper evaluates user interface (UI) sentiment toward the systems in advance of this proposed evaluation.

    Immersive Storytelling in 360-Degree Videos: An Analysis of Interplay Between Narrative and Technical Immersion

    Get PDF
    Three-hundred-and-sixty-degree videos are an innovative video format, and due to various narrative and technical aspects, they allow audiences to be deeply immersed in their content. Through an explorative, qualitative content analysis (including elements of narrative analysis), aspects of immersion were explored in various 360-degree videos. Our results give an overview of multiple immersive factors in 360-degree storytelling and the interplay of narrative and technical aspects of immersion. Technical immersion manifests through cues that direct the viewer’s attention and cues that acknowledge the viewer as part of the virtual environment. Narrative immersion, on the other hand, is influenced by the setting, as well as by the interplay of story, characters, and viewer integration. Our findings also indicate that narrative and technical aspects support each other to strengthen immersion.

    Engaging immersive video consumers: Challenges regarding 360-degree gamified video applications

    Get PDF
    360-degree video is a new medium that has gained the attention of the research community, posing challenges for creating more interactive and engaging immersive experiences. The purpose of this study is to introduce a set of technical and design challenges for interactive, gamified 360-degree mixed reality applications that immerse and engage users. The development of gamified applications refers to the incorporation of game elements into the interaction design process to attract and engage the user through playful interaction with the virtual world. The study presents experiments incorporating a series of game elements, such as time-pressure challenges, badges and user levels, storytelling narrative, and immediate visual feedback, into the interaction design logic of a mixed reality mobile gaming application that runs in an environment composed of 360-degree video and 3D computer-generated objects. The architecture and overall process for creating such an application are presented, along with a list of design implications and constraints. The paper concludes with future directions and conclusions on improving the level of immersion and engagement of 360-degree video consumers.

    Tracking and modeling focus of attention in meetings [online]

    Get PDF
    Abstract: This thesis addresses the problem of tracking the focus of attention of people. In particular, a system to track the focus of attention of participants in meetings is developed. Obtaining knowledge about a person's focus of attention is an important step towards a better understanding of what people do, how and with what or whom they interact, or to what they refer. In meetings, focus of attention can be used to disambiguate the addressees of speech acts, to analyze interaction, and for indexing of meeting transcripts. Tracking a user's focus of attention also greatly contributes to the improvement of human-computer interfaces, since it can be used to build interfaces and environments that become aware of what the user is paying attention to or with what or whom he is interacting. The direction in which people look, i.e., their gaze, is closely related to their focus of attention. In this thesis, we estimate a subject's focus of attention based on his or her head orientation. While the direction in which someone looks is determined by head orientation and eye gaze, relevant literature suggests that head orientation alone is a sufficient cue for the detection of someone's direction of attention during social interaction. We present experimental results from a user study and from several recorded meetings that support this hypothesis. We have developed a Bayesian approach to model at whom or what someone is looking based on his or her head orientation. To estimate head orientations in meetings, the participants' faces are automatically tracked in the view of a panoramic camera, and neural networks are used to estimate their head orientations from pre-processed images of their faces. Using this approach, the focus of attention target of subjects could be correctly identified 73% of the time in a number of evaluation meetings with four participants.
In addition, we have investigated whether a person's focus of attention can be predicted from other cues. Our results show that focus of attention is correlated with who is speaking in a meeting, and that it is possible to predict a person's focus of attention based on information about who is talking or was talking before a given moment. We have trained neural networks to predict at whom a person is looking, based on information about who was speaking. Using this approach, we were able to predict who is looking at whom with 63% accuracy on the evaluation meetings using only information about who was speaking. We show that by using both head orientation and speaker information to estimate a person's focus, the accuracy of focus detection can be improved compared to using just one of the modalities. To demonstrate the generality of our approach, we have built a prototype system that demonstrates focus-aware interaction with a household robot and other smart appliances in a room, using the developed components for focus of attention tracking. In the demonstration environment, a subject could interact with a simulated household robot, a speech-enabled VCR, or with other people in the room, and the recipient of the subject's speech was disambiguated based on the user's direction of attention.
Zusammenfassung (German summary, translated): This thesis deals with the automatic determination and tracking of the focus of attention of people in meetings. Determining people's focus of attention is very important for understanding and automatically analyzing meeting transcripts. For example, it can reveal who addressed whom at a given time, or who was listening to whom. Automatic focus-of-attention detection can furthermore be used to improve human-machine interfaces. An important cue for the direction of a person's attention is the person's head orientation. Therefore, a method for determining people's head orientations was developed. Artificial neural networks were used that receive pre-processed images of a person's head as input and compute an estimate of the head orientation as output. With the trained networks, a mean error of nine to ten degrees was achieved for the horizontal and vertical head orientation on images of new persons, i.e., persons whose images were not contained in the training set. Furthermore, a probabilistic approach for determining attention targets is presented. A Bayesian approach is used to determine the a-posteriori probabilities of different attention targets given a person's observed head orientations. The developed approaches were evaluated on several meetings with four to five participants. A further contribution of this work is an investigation of how well the gaze direction of meeting participants can be predicted based on who is currently speaking. A method was developed to estimate a person's focus with neural networks based on a short history of speaker constellations. We show that combining the image-based and the speaker-based estimates of the focus of attention yields a clearly improved estimate. Overall, this work presents, for the first time, a system for automatically tracking the attention of people in a meeting room. The developed approaches and methods can also be used to determine people's attention in other areas, in particular for controlling computerized, interactive environments. This is demonstrated with an example application.
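The Bayesian step described in this abstract — computing a posteriori probabilities of attention targets given an observed head orientation — can be illustrated with a minimal sketch. This is not the thesis's code: the target names, seat angles, and Gaussian noise model below are hypothetical assumptions chosen for illustration.

```python
import math

def gaussian(x, mu, sigma):
    """Likelihood of observing head angle x if the subject looks at a target at angle mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def focus_posterior(observed_angle, targets, sigma=15.0, prior=None):
    """P(target | head angle) proportional to P(head angle | target) * P(target).

    targets: dict mapping target name -> expected head pan angle (degrees).
    sigma: assumed head-orientation noise in degrees (hypothetical value).
    """
    if prior is None:  # uniform prior over targets if none is given
        prior = {name: 1.0 / len(targets) for name in targets}
    scores = {name: gaussian(observed_angle, mu, sigma) * prior[name]
              for name, mu in targets.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Four meeting participants at hypothetical seat angles (degrees, relative to the subject).
targets = {"Alice": -60.0, "Bob": -20.0, "Carol": 20.0, "Dave": 60.0}
posterior = focus_posterior(observed_angle=-25.0, targets=targets)
best = max(posterior, key=posterior.get)  # the most probable attention target
```

In the thesis, the observed head orientations come from neural networks applied to panoramic camera images, and the speaker-based cue is fused in as a second modality; here a single noisy angle observation stands in for that pipeline.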

    Design methodology for 360-degree immersive video applications

    Get PDF
    360-degree immersive video applications for Head Mounted Display (HMD) devices offer great potential in providing engaging forms of experiential media solutions. Design challenges emerge, though, with this new kind of immersive media, due to the 2D form of the resources used for their construction, the lack of depth, the limited interaction, and the need to address the sense of presence. In addition, the use of Virtual Reality (VR) is associated with cybersickness effects, imposing further implications for moderating motion in design tasks. This research project provides a systematic methodological approach to addressing those challenges and implications in the design of 360-degree immersive video applications. By studying and analysing methods and techniques used effectively in VR and games design, a rigorous methodological design process is proposed. This process is introduced through the specification of the iVID (Immersive Video Interaction Design) framework. The efficiency of the iVID framework and the design methods and techniques it proposes is evaluated through two phases of user studies, for which two different 360-degree immersive video prototypes were created. The analysis of the studies' results led to the definition of a set of design guidelines to be followed along with the iVID framework for designing 360-degree video-based experiences that are engaging and immersive.

    Virtual Environments and Advanced Interfaces

    No full text
    Editorial for the Personal and Ubiquitous Computing special issue on Virtual Environments and Advanced Interfaces.