62 research outputs found

    Development platform for elderly-oriented tabletop games

    Get PDF
    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto. Faculdade de Engenharia. 201

    Visualization and interaction in a simulation system for flood emergencies

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Informatics Engineering. This thesis presents an interaction and visualization system for a river flood emergency simulation, together with a detailed study of forms of visual representation of critical elements in emergencies. All these elements are assembled in an application based on geographic information systems and agent simulation. Many of the goals of this thesis are interconnected with project Life-Saver, whose goal is to develop an emergency response simulator, which needs a visualization and interaction system. The main goals of this thesis are to create a visualization system for an emergency, to design an intuitive multimedia interface, and to implement new forms of human-computer interaction. At the application level there is a representation of the simulation scenario with the multiple agents and their actions. Several studies were made to create an intuitive interface. New forms of multimedia interaction, such as interactive touch-sensitive boards and multi-touch panels, are studied and used. It is possible to load and retrieve geographic information on the scenario. The resulting architecture is used to visualize a simulation of an emergency flooding situation in a scenario where the Alqueva dam on the Guadiana river fails.
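    A minimal sketch of the visualization idea described above: simulated agents drawn over a georeferenced scenario extent. Everything here (the bounding box, agent count, and rendering choices) is an illustrative assumption, not taken from the Life-Saver system itself.

        # Hedged sketch, not the Life-Saver system: plot simulated response
        # agents over a hypothetical river-basin bounding box (lon/lat degrees).
        import random
        import matplotlib.pyplot as plt

        # Hypothetical scenario extent; real GIS data would define this.
        MIN_LON, MAX_LON = -7.60, -7.10
        MIN_LAT, MAX_LAT = 38.10, 38.50

        # Fake agent positions standing in for simulator output.
        agents = [(random.uniform(MIN_LON, MAX_LON),
                   random.uniform(MIN_LAT, MAX_LAT)) for _ in range(50)]

        fig, ax = plt.subplots()
        ax.set_xlim(MIN_LON, MAX_LON)
        ax.set_ylim(MIN_LAT, MAX_LAT)
        ax.scatter(*zip(*agents), marker="^", label="response agents")
        ax.set_xlabel("longitude")
        ax.set_ylabel("latitude")
        ax.legend()
        plt.show()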

    Evaluating a tactile and a tangible multi-tablet gamified quiz system for collaborative learning in primary education

    Full text link
    [EN] Gamification has been identified as an interesting technique to foster collaboration in educational contexts. However, few approaches tackle this in primary school learning environments. The most popular technologies in the classroom are still traditional video consoles and desktop computers, which complicate the design of collaborative activities since they are essentially mono-user. The recent popularization of handheld devices such as tablets and smartphones has made it possible to build affordable, scalable, and improvised collaborative gamified activities by creating a multi-tablet environment. In this paper we present Quizbot, a collaborative gamified quiz application to practice different subjects, which can be defined by educators beforehand. Two versions of the system are implemented: a tactile one, for tablets laid on a table, in which all the elements are digital; and a tangible one, in which the tablets are scattered on the floor and the components are both digital and physical objects. Both versions of Quizbot are evaluated and compared in a study with eighty primary school children in terms of user experience and quality of the collaboration supported. Results indicate that both versions of Quizbot are essentially equally fun and easy to use and can effectively support collaboration, with the tangible version outperforming the other with respect to making the children reach consensus after a discussion, splitting and parallelizing work, and treating each other with more respect, but also presenting poorer time management.

    We would like to thank Universitat Politecnica de Valencia's Summer School for their collaboration during the development of this study, as well as Colegio Internacional Ausias March for their support in the development of educational content. This work is supported by the Spanish Ministry of Economy and Competitiveness and funded by the European Development Regional Fund (EDRF-FEDER) with Project TIN2014-60077-R. It is also supported by fellowship ACIF/2014/214 within the VALi+d program from Conselleria d'Educació, Cultura i Esport (Generalitat Valenciana), and by fellowship FPU14/00136 within the FPU program from the Spanish Ministry of Education, Culture, and Sport.

    García Sanjuan, F.; El Jurdi, S.; Jaén Martínez, FJ.; Nácher-Soler, VE. (2018). Evaluating a tactile and a tangible multi-tablet gamified quiz system for collaborative learning in primary education. Computers & Education. 123:65-84. https://doi.org/10.1016/j.compedu.2018.04.011
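    As a rough illustration of a quiz "defined by educators beforehand", the sketch below models questions and answer checking. Quizbot's actual data format and scoring are not described in the abstract, so all names here are hypothetical.

        # Hypothetical data model for an educator-defined quiz; Quizbot's real
        # format is not specified in the abstract.
        from dataclasses import dataclass, field

        @dataclass
        class Question:
            prompt: str
            options: list[str]
            answer: int  # index into options

        @dataclass
        class Quiz:
            subject: str
            questions: list[Question] = field(default_factory=list)

            def check(self, q_index: int, chosen: int) -> bool:
                """Return True if the chosen option is the correct one."""
                return self.questions[q_index].answer == chosen

        history = Quiz("History", [
            Question("Which event is often taken to end the Middle Ages?",
                     ["Fall of Constantinople", "French Revolution"], 0),
        ])
        assert history.check(0, 0)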

    Using natural user interfaces to support synchronous distributed collaborative work

    Get PDF
    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed, and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users in distributed locations to simultaneously view and annotate text documents and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness are maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction. Members can communicate effectively via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and that effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
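    The abstract attributes GroupAware's coordination and awareness to immediate synchronization of document updates, labelled per user. A minimal in-memory sketch of that broadcast pattern follows; the real system's networking and message format are not specified, so this shape is an assumption.

        # Hedged sketch of update broadcasting for workspace synchronization;
        # not GroupAware's actual protocol.
        from dataclasses import dataclass

        @dataclass
        class Update:
            user: str      # label shown so others can track who acted
            doc_id: str
            action: str    # e.g. "annotate", "move", "resize"
            payload: dict

        class SharedWorkspace:
            def __init__(self):
                self.clients = []  # connected group members

            def join(self, client):
                self.clients.append(client)

            def publish(self, update: Update):
                # Every action is pushed to all members immediately,
                # keeping distributed views consistent (awareness).
                for client in self.clients:
                    client.apply(update)

        class Client:
            def __init__(self, name):
                self.name, self.log = name, []
            def apply(self, u: Update):
                self.log.append(u)

        ws = SharedWorkspace()
        alice, bob = Client("alice"), Client("bob")
        ws.join(alice); ws.join(bob)
        ws.publish(Update("alice", "doc-1", "annotate", {"text": "check this"}))
        assert bob.log[0].user == "alice"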

    Bridging Private and Shared Interaction Surfaces in Collocated Groupware

    Get PDF
    Multi-display environments (such as the pairing of a digital tabletop computer with a set of handheld tablet computers) can support collocated interaction in groups by providing individuals with private workspaces that can be used alongside shared interaction surfaces. However, such a configuration necessitates intuitive and seamless interactions to move digital objects between displays. While existing research has suggested numerous methods to bridge devices in this manner, these methods often require highly specialized equipment and are seldom examined using real-world tasks. This thesis investigates the use of two cross-device object transfer methods, adapted for commonly available hardware and applied to a realistic task: a familiar tabletop card game. A digital tabletop and tablet implementation of the card game Dominion is developed to support each of the two cross-device object transfer methods (as well as two different turn-taking methods to support user identification). An observational user study is then performed to examine the effect of the transfer methods on groups' behaviour, examining player preferences and the strategies players applied to pursue their varied goals within the game. The study reveals that players' choices and use of the methods are shaped greatly by the way in which each player personally defines the Dominion task, not simply by the objectives outlined in its rulebook. Considerations for the design of cross-device object transfer methods, and lessons learned for system and experimental design in the gaming domain, are also offered.
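    The abstract does not name the two cross-device transfer methods, so the sketch below only illustrates the general abstraction: interchangeable transfer strategies behind one interface, with placeholder "drop zone" and "flick" behaviours standing in for the studied methods.

        # Hedged sketch of pluggable cross-device object transfer; both
        # concrete strategies here are invented placeholders.
        from abc import ABC, abstractmethod

        class TransferMethod(ABC):
            """Moves a digital object (e.g. a card) between displays."""
            @abstractmethod
            def transfer(self, obj: str, source: str, target: str) -> None: ...

        class DropZoneTransfer(TransferMethod):
            # Placeholder: object dragged onto a per-player zone.
            def transfer(self, obj, source, target):
                print(f"{obj}: {source} -> drop zone -> {target}")

        class FlickTransfer(TransferMethod):
            # Placeholder: object flicked toward the target device.
            def transfer(self, obj, source, target):
                print(f"{obj}: {source} -> flick gesture -> {target}")

        def play_card(method: TransferMethod):
            method.transfer("Copper", "tablet-hand", "tabletop-play-area")

        play_card(DropZoneTransfer())
        play_card(FlickTransfer())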

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    Get PDF
    With steadily increasing display resolution, more accurate tracking, and falling prices, virtual reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction, and the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. Moreover, collaborating with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance, while in remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content within virtual worlds, e.g. size, orientation, color, or contrast. Strictly replicating real environments in VR wastes potential and will not make it possible to address users' individual needs. To tackle these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aiming to increase the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and functionality of existing applications in VR. Virtual proxies of physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of a full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that make it possible to increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from these personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
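    The abstract does not give the calibration algorithm itself. As a stand-in, the following sketch shows the standard Kabsch-style rigid alignment that calibrating two co-located tracking systems from paired point samples commonly reduces to; it is a generic textbook method, not the thesis' contribution.

        # Generic rigid alignment of two trackers' coordinate frames from
        # corresponding point samples (Kabsch algorithm); illustrative only.
        import numpy as np

        def rigid_align(P: np.ndarray, Q: np.ndarray):
            """Find rotation R and translation t with R @ P[i] + t ~ Q[i].
            P, Q: (N, 3) arrays of corresponding points from the two trackers."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cq - R @ cp
            return R, t

        # Self-check against a known synthetic transform.
        rng = np.random.default_rng(0)
        P = rng.normal(size=(20, 3))
        a = 0.5
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0,          0,         1]])
        Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
        R, t = rigid_align(P, Q)
        assert np.allclose(P @ R.T + t, Q, atol=1e-8)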

    Applications of multi-touch tabletop displays and their challenging issues: An overview

    Full text link

    Flexible learning itinerary vs. linear learning itinerary

    Full text link
    The latest video game and entertainment technology and other technologies are facilitating the development of new and powerful e-Learning systems. In this paper, we present a computer-based game for learning about five historical ages. The objective of the game is to reinforce the events that mark the transition from one historical age to another and the order of the historical ages. Our game incorporates natural human-computer interaction based on video game technology, Frontal Projection, and personalized learning. For personalized learning, a Flexible Learning Itinerary has been included, where the children can decide how to direct the flow of their own learning process. For comparison, a Linear Learning Itinerary has also been included, where the children follow a predetermined learning flow. A study to compare the two learning itineraries was carried out. Twenty-nine children from 8 to 9 years old participated in the study. The analysis of the pre-tests and post-tests determined that the children learned the contents of a game about historical ages. The results show that there were no statistically significant differences between the two learning itineraries. Therefore, our study reveals the potential of computer-based learning games as a tool in the learning process for both flexible and linear itineraries.

    This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02-01). Martín San José, JF.; Juan Lizandra, MC.; Gil Gómez, JA.; Rando, N. (2014). Flexible learning itinerary vs. linear learning itinerary. Science of Computer Programming. 88:3-21. https://doi.org/10.1016/j.scico.2013.12.009
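    Assuming an itinerary is simply a policy for choosing the next historical age (the paper does not describe the implementation at this level), the difference between the two conditions can be sketched as:

        # Hedged sketch: linear vs. flexible learning itineraries over the
        # five historical ages; the game's real flow control is an assumption.
        AGES = ["Prehistory", "Ancient", "Middle Ages", "Modern", "Contemporary"]

        def linear_itinerary():
            """Fixed order: the game determines the learning flow."""
            for age in AGES:
                yield age

        def flexible_itinerary(choose):
            """Child-driven order: `choose` picks any age not yet visited."""
            remaining = list(AGES)
            while remaining:
                age = choose(remaining)
                remaining.remove(age)
                yield age

        print(list(linear_itinerary()))
        print(list(flexible_itinerary(lambda options: options[-1])))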

    3-D Interfaces for Spatial Construction

    Get PDF
    It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer. This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural. Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated. A crucial issue with this interface approach is the tension between specific and generic input devices. Generic devices are the tradition in computing -- versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend -- customized and culturally rich, but to date tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain's data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Beyond 3d interfaces and beyond spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios. In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple platonic solids, erase portions of objects, and draw double-helices in space. The contributions of this work lie both in a theoretical model of spatial interaction and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interface to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to physically connect with digital form.
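    The quick-remapping idea, one physical tool rebound to many digital tasks, can be sketched as follows; the behaviour names mirror the examples in the abstract (cutting, placing solids, erasing), but the code structure itself is an assumption.

        # Hedged sketch of remapping a single physical handle to different
        # digital behaviours within one functional class.
        class Handle:
            def __init__(self):
                self.behaviour = lambda stroke: None

            def remap(self, behaviour):
                """Quickly rebind the physical tool to a new digital task."""
                self.behaviour = behaviour

            def actuate(self, stroke):
                return self.behaviour(stroke)

        def cut(stroke):   return f"cut shape along {stroke}"
        def place(stroke): return f"place platonic solid at {stroke}"
        def erase(stroke): return f"erase object portions near {stroke}"

        handle = Handle()
        handle.remap(cut)
        print(handle.actuate("sweep A"))   # lightsaber-style cutting
        handle.remap(place)
        print(handle.actuate("point B"))   # dropping simple solids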

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Get PDF
    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.