
    The Effect of Augmented Reality Treatment on Learning, Cognitive Load, and Spatial Visualization Abilities

    This study investigated the effects of Augmented Reality (AR) on learning, cognitive load, and spatial abilities. More specifically, it measured learning gains, perceived cognitive load, and the role spatial abilities play for students engaged in an astronomy lesson about lunar phases. Participants were 182 students from a public university in the southeastern United States, recruited from a psychology research pool. They were randomly assigned to two groups: (a) Augmented Reality and Text Astronomy Treatment (ARTAT) and (b) Images and Text Astronomy Treatment (ITAT). Upon entering the experimental classroom, participants were given (a) the Paper Folding Test to measure their spatial abilities; (b) the Lunar Phases Concept Inventory (LPCI) pre-test; (c) a lesson on lunar phases; (d) the NASA-TLX to measure cognitive load; and (e) the LPCI post-test. Statistical analysis found (a) no statistically significant difference in learning gains between the ARTAT and ITAT groups; (b) a statistically significant difference in cognitive load; and (c) no significant difference in spatial ability scores.
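
    As a rough illustration of the group comparison reported above, the sketch below runs independent-samples t-tests on learning gains (LPCI post minus pre) and NASA-TLX scores for the two treatment groups. The file name and column names are assumptions for illustration, not the study's actual materials or analysis code.

        # Hypothetical sketch: comparing the ARTAT and ITAT groups on learning
        # gains and cognitive load with Welch's t-tests. File and column names
        # are illustrative assumptions.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("lunar_phases_study.csv")      # hypothetical data file
        df["gain"] = df["lpci_post"] - df["lpci_pre"]   # learning gain per participant

        artat = df[df["group"] == "ARTAT"]
        itat = df[df["group"] == "ITAT"]

        for measure in ("gain", "nasa_tlx"):
            t, p = stats.ttest_ind(artat[measure], itat[measure], equal_var=False)
            print(f"{measure}: t = {t:.2f}, p = {p:.3f}")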

    Exploring the potential of physical visualizations

    The goal of an external representation of abstract data is to provide insights and convey information about the structure of the underlying data, thereby helping people execute tasks and solve problems more effectively. Apart from the popular and well-studied digital visualization of abstract data, there are other, scarcely studied perceptual channels for representing data, such as taste, sound, or touch. My thesis focuses on the latter and explores the ways in which human knowledge of, and the ability to sense and interact with, the physical non-digital world can be used to enhance how people analyze and explore abstract data. Technological progress in digital fabrication allows easy, fast, and inexpensive production of physical objects. Machines such as laser cutters and 3D printers enable the accurate fabrication of physical visualizations with different form factors and materials. This creates, for the first time, the opportunity to study the potential of physical visualizations on a broad scale. The thesis starts with descriptions of six prototypes of physical visualizations, ranging from static examples to digitally augmented variations to interactive artifacts. Based on these explorations, three promising areas of potential for physical visualizations were identified and investigated in more detail: perception & memorability, communication & collaboration, and motivation & self-reflection. The results of two studies in the area of information recall showed that participants who used a physical bar chart retained more information than those who used the digital counterpart. In particular, facts about maximum and minimum values were remembered more efficiently when they were perceived from a physical visualization. Two explorative studies dealt with the potential of physical visualizations for communication and collaboration. The observations revealed the importance of the design and aesthetics of physical visualizations and indicated great potential for their use by audiences with little interest in technology. The results also exposed the current limitations of physical visualizations, especially in contrast to their well-researched digital counterparts. In the area of motivation, we present the design and evaluation of the Activity Sculptures project. We conducted a three-week field study in which we investigated physical visualizations of personal running activity. These sculptures generated curiosity and experimentation regarding participants' personal running behavior and evoked social dynamics such as discussions and competition. Based on the findings of the aforementioned studies, this thesis concludes with two theoretical contributions on the design and potential of physical visualizations. On the one hand, it proposes a conceptual framework for material representations of personal data, described through a production and a consumption lens. The goal is to encourage artists and designers working in the field of personal informatics to harness the interactive capabilities afforded by digital fabrication and the potential of material representations. On the other hand, we give a first classification and performance rating of physical variables, comprising 14 dimensions grouped into four categories.
    This complements the undertaking of providing researchers and designers with guidance and inspiration to uncover alternative strategies for representing data physically and to build effective physical visualizations.
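
    As a concrete illustration of the digital-fabrication theme, not code from the thesis, the sketch below writes a minimal OpenSCAD model of a physical bar chart that a 3D printing or laser cutting workflow could turn into an object. The data values and dimensions are arbitrary assumptions.

        # Illustrative sketch: emit an OpenSCAD model of a physical bar chart.
        # Data values, bar footprint, and scaling are arbitrary assumptions.
        data = [3.2, 7.5, 1.8, 5.0, 6.1]    # hypothetical data values
        bar_width, gap, scale = 10, 2, 4    # millimetres, chosen for illustration

        with open("bar_chart.scad", "w") as f:
            for i, value in enumerate(data):
                x = i * (bar_width + gap)
                height = value * scale
                # Each bar is a cuboid placed along the x axis.
                f.write(f"translate([{x}, 0, 0]) cube([{bar_width}, {bar_width}, {height}]);\n")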

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Toward this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1 mm should be maintained for accurate line detection (Exp-1), (2) a minimum interline gap of 4 mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4 mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4 mm should be used for tasks that require tracing vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4 mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line-tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps according to these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence that learning from vision and touch leads to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
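
    The parameters above read naturally as rendering constraints. The sketch below encodes them as a simple guideline check for a vibrotactile line renderer; the function and parameter names are illustrative assumptions, not part of the dissertation.

        # Hypothetical sketch: the dissertation's perceptual guidelines expressed
        # as constraints on a proposed vibrotactile line style.
        MIN_LINE_WIDTH_MM = 1.0       # Exp-1: reliable line detection
        MIN_TRACING_WIDTH_MM = 4.0    # Exp-4/5: line tracing and path learning
        MIN_INTERLINE_GAP_MM = 4.0    # Exp-2: discriminating parallel lines

        def check_line_style(width_mm: float, gap_mm: float, for_tracing: bool) -> list[str]:
            """Return a list of guideline violations for a proposed line style."""
            issues = []
            if width_mm < MIN_LINE_WIDTH_MM:
                issues.append("line width below the 1 mm detection threshold")
            if for_tracing and width_mm < MIN_TRACING_WIDTH_MM:
                issues.append("line width below the 4 mm tracing threshold")
            if gap_mm < MIN_INTERLINE_GAP_MM:
                issues.append("interline gap below the 4 mm discrimination threshold")
            return issues

        print(check_line_style(width_mm=2.0, gap_mm=5.0, for_tracing=True))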

    Multimodal perception of histological images for persons blind or visually impaired

    Currently there is no suitable substitute technology that enables blind or visually impaired (BVI) people to interpret, in real time, the visual scientific data commonly generated during lab experimentation, such as performing light microscopy, spectrometry, and observing chemical reactions. This reliance upon visual interpretation of scientific data impedes students and scientists who are BVI from advancing in careers in medicine, biology, chemistry, and other scientific fields. To address this challenge, a real-time multimodal image perception system was developed to transform standard laboratory blood smear images into forms that persons who are BVI can perceive, employing a combination of auditory, haptic, and vibrotactile feedback. These sensory channels convey visual information through alternative perceptual pathways, creating a palette of multimodal, sensorial information. A Bayesian network was developed to characterize images through two groups of features of interest: primary and peripheral features. Causal links were established between these two groups of features. A method was then conceived for optimally matching primary features to sensory modalities. Experimental results confirmed that this real-time approach achieves higher accuracy in recognizing and analyzing objects within images compared to tactile images.
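
    To make the feature-to-modality matching step concrete, the sketch below exhaustively assigns a few primary features to sensory modalities so that a total suitability score is maximized. The feature names, modalities, and scores are assumptions for illustration; the paper's actual matching method is not reproduced here.

        # Illustrative sketch: brute-force assignment of primary features to
        # sensory modalities maximizing a hypothetical suitability score.
        from itertools import permutations

        features = ["cell_boundary", "nucleus_size", "cell_count"]    # hypothetical
        modalities = ["auditory", "haptic", "vibrotactile"]

        suitability = {                                               # assumed scores
            ("cell_boundary", "auditory"): 0.4, ("cell_boundary", "haptic"): 0.9,
            ("cell_boundary", "vibrotactile"): 0.6, ("nucleus_size", "auditory"): 0.8,
            ("nucleus_size", "haptic"): 0.5, ("nucleus_size", "vibrotactile"): 0.7,
            ("cell_count", "auditory"): 0.6, ("cell_count", "haptic"): 0.3,
            ("cell_count", "vibrotactile"): 0.9,
        }

        best = max(permutations(modalities),
                   key=lambda p: sum(suitability[(f, m)] for f, m in zip(features, p)))
        print(dict(zip(features, best)))    # e.g. {'cell_boundary': 'haptic', ...}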

    Pseudo-haptics survey: Human-computer interaction in extended reality & teleoperation

    Pseudo-haptic techniques are becoming increasingly popular in human-computer interaction. They replicate haptic sensations by leveraging primarily visual feedback rather than mechanical actuators. These techniques bridge the gap between the real and virtual worlds by exploiting the brain's ability to integrate visual and haptic information. Among their many advantages, pseudo-haptic techniques are cost-effective, portable, and flexible. They eliminate the need to attach haptic devices directly to the body, which can be heavy and bulky and require substantial power and maintenance. Recent research has focused on applying these techniques to extended reality and mid-air interactions. To better understand the potential of pseudo-haptic techniques, the authors developed a novel taxonomy encompassing tactile feedback, kinesthetic feedback, and combined categories in multimodal approaches, covering ground not addressed by previous surveys. This survey highlights multimodal strategies and potential avenues for future studies, particularly regarding the integration of these techniques into extended reality and collaborative virtual environments.
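
    One classic pseudo-haptic technique, manipulation of the control-display (C/D) ratio, illustrates how visual feedback alone can suggest resistance. The sketch below slows the displayed cursor inside a "sticky" region so it feels heavier than the surrounding area; the region bounds and gain values are arbitrary assumptions, not taken from the survey.

        # Minimal sketch of C/D-ratio manipulation: the cursor visually "drags"
        # over a region, suggesting friction without any haptic actuator.
        def displayed_motion(input_dx: float, cursor_x: float) -> float:
            """Map physical input motion to displayed cursor motion."""
            in_sticky_region = 300.0 <= cursor_x <= 400.0   # hypothetical region (px)
            cd_gain = 0.3 if in_sticky_region else 1.0      # < 1.0 feels resistive
            return input_dx * cd_gain

        x = 0.0
        for _ in range(50):                 # simulate constant 10 px hand movements
            x += displayed_motion(10.0, x)
        print(f"final cursor position: {x:.1f} px")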

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology for performing complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the supporting literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
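
    As a simplified illustration of tool-to-organ collision detection, the sketch below flags a potential collision when a tracked tool tip comes within a safety margin of any point on an organ surface. The point cloud, tool pose, and margin are arbitrary assumptions and do not represent any of the reviewed systems.

        # Illustrative sketch: proximity-based collision check between a tool tip
        # and an organ surface represented as a point cloud.
        import numpy as np

        organ_points = np.random.rand(1000, 3) * 0.1    # hypothetical surface points (m)
        tool_tip = np.array([0.05, 0.05, 0.02])         # hypothetical tracked tip position
        SAFETY_MARGIN_M = 0.005                         # 5 mm clearance

        distances = np.linalg.norm(organ_points - tool_tip, axis=1)
        if distances.min() < SAFETY_MARGIN_M:
            print(f"collision risk: nearest point at {distances.min() * 1000:.1f} mm")
        else:
            print("tool tip within safe clearance")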

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With the continuous advancement and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, surgical simulation has been increasingly embraced for training and skill transfer. Some systems use haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic the primary components of a corresponding procedure. Surgical simulation development for some minimally invasive procedures is still, however, suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE), a congenital chest wall deformity. This work addresses this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform of the Nuss procedure and their implications in a training context. It then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, together with an augmented haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for a training platform. Finally, this work presents a user study investigating the system's face, content, and construct validity to establish its faithfulness as a training platform.
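
    Haptic interfaces in surgical simulators are commonly driven by penalty-based force rendering. The sketch below shows that generic approach with a simple spring model against a planar tissue surface; it is not taken from the Nuss trainer described above, and the stiffness and geometry values are assumptions.

        # Generic penalty-based force rendering: a spring-like reaction force
        # pushes the tool back out of the tissue once it penetrates the surface.
        import numpy as np

        STIFFNESS_N_PER_M = 800.0     # hypothetical tissue stiffness
        surface_z = 0.0               # simplified planar tissue surface

        def reaction_force(tool_pos: np.ndarray) -> np.ndarray:
            """Return the force fed to the haptic device for a given tool position."""
            penetration = surface_z - tool_pos[2]    # depth below the surface
            if penetration <= 0.0:
                return np.zeros(3)                   # no contact, no force
            return np.array([0.0, 0.0, STIFFNESS_N_PER_M * penetration])

        print(reaction_force(np.array([0.0, 0.0, -0.002])))   # 2 mm deep -> ~1.6 N upward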

    From passive tool holders to microsurgeons: safer, smaller, smarter surgical robots
