
    Mobile learning: benefits of augmented reality in geometry teaching

    As a consequence of technological advances and the widespread use of mobile devices to access information and communication in recent decades, mobile learning has become a spontaneous learning model, providing more flexible and collaborative technology-based learning. Mobile technologies can thus create new opportunities for enhancing pupils’ learning experiences. This paper presents the development of a game to assist teaching and learning, aiming to help students acquire knowledge in the field of geometry. The game was intended to develop the following competences in primary school learners (8-10 years): better visualization of geometric objects on a plane and in space; understanding of the properties of geometric solids; and familiarization with the vocabulary of geometry. Findings show that by using the game, students improved their rate of correct responses by around 35% when classifying and differentiating between edge, vertex and face in 3D solids. This research was supported by the Arts and Humanities Research Council Design Star CDT (AH/L503770/1), the Portuguese Foundation for Science and Technology (FCT) projects LARSyS (UID/EEA/50009/2013) and CIAC-Research Centre for Arts and Communication.

    Smart Geographic object: Toward a new understanding of GIS Technology in Ubiquitous Computing

    One of the fundamental aspects of ubiquitous computing is the instrumentation of the real world by smart devices. This instrumentation constitutes an opportunity to rethink the interactions between human beings and their environment on the one hand, and between the components of this environment on the other. In this paper we discuss what this understanding of ubiquitous computing can bring to geographic science, and particularly to GIS technology. Our main idea is to instrument the geographic environment by instrumenting the geographic objects composing it, and then to investigate how this instrumentation can address the current limitations of GIS technology and offer a new stage of rapprochement between the earth and its abstraction. As a result, the current research work proposes a new concept we name the Smart Geographic Object (SGO). The latter is a convergence point between smart objects and geographic objects, two concepts appertaining respectively to

    Parametric Surfaces for Augmented Architecture representation

    Augmented Reality (AR) represents a growing communication channel, responding to the need to expand reality with additional information and offering easy and engaging access to digital data. AR for architectural representation allows simple interaction with 3D models, facilitating spatial understanding of complex volumes and of topological relationships between parts, overcoming some limitations of Virtual Reality. In the last decade, developments in the pipeline process have brought significant advances in technological and algorithmic aspects, while paying less attention to 3D model generation. For this reason, the article explores the construction of basic geometries for 3D model generation, highlighting the relationship between geometry and topology, which is fundamental for a consistent normal distribution. Moreover, a critical evaluation of corrective paths for existing 3D models is presented, analysing a complex architectural case study, the virtual model of Villa del Verginese, an emblematic example of the topological problems that emerged. The final aim of the paper is to refocus attention on 3D model construction, suggesting some "good practices" useful for preventing, minimizing or correcting topological problems and extending the accessibility of AR to people engaged in architectural representation.

    Mapping, sensing and visualising the digital co-presence in the public arena

    This paper reports on work carried out within the Cityware project using mobile technologies to map, visualise and project digital co-presence in the city. The paper focuses on two pilot studies exploring the Bluetooth landscape in the city of Bath. Here we apply adapted and ‘digitally augmented’ methods for spatial observation and analysis, based on established methods used extensively in the space syntax approach to urban design. We map the physical and digital flows at a macro level and observe static space use at the micro level. In addition, we look at social and mobile behaviour from an individual’s point of view. We apply a method based on intervention through ‘sensing and projecting’ Bluetooth names and digital identity in the public arena. We present early findings in terms of patterns of Bluetooth flow and presence, and outline initial observations about how people react to the projection of their Bluetooth naming practices in public. In particular, we note the importance of constructing socially meaningful relations between people mediated by these technologies. We discuss initial results and outline the issues raised in detail before finally describing ongoing work.

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    Pervasive Displays Research: What's Next?

    Reports on the 7th ACM International Symposium on Pervasive Displays, which took place June 6-8 in Munich, Germany.

    Understanding Immersive Environments for Visual Data Analysis

    Augmented Reality enables combining virtual data spaces with real-world environments through visual augmentations, transforming everyday environments into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven, making head-mounted Mixed Reality devices more common in research, industrial, and personal use cases. However, such devices are always human-centered, making it increasingly important to closely investigate and understand human factors within such applications and environments. Augmented Reality usage can range from a simple information display to a dedicated device for presenting and analyzing information visualizations. The growing availability, amount, and complexity of data have amplified the need and wish to generate insights through such visualizations. These, in turn, can utilize human visual perception and Augmented Reality’s natural interactions, its potential to display three-dimensional data, and its stereoscopic display. In my thesis, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally adhere to human factors and ergonomics, especially in the area of visual data analysis. To address this challenge, I ground my thesis on three research questions: (1) How can we design such applications in a human-centered way? (2) What influence does the real-world environment have within such applications? (3) How can AR applications be combined with existing systems and devices? To answer those research questions, I explore different human properties and real-world environments that can affect the same environment’s augmentations. For human factors, I investigate competence in working with visualizations (visualization literacy), the visual perception of visualizations, and physical ergonomics such as head movement.
Regarding the environment, I examine two main factors: the visual background’s influence on reading and working with immersive visualizations, and the possibility of using alternative placement areas in Augmented Reality. Lastly, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the different projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to understanding how developers can apply human and environmental parameters for designing and creating future AR applications, especially for visual data analysis.