Construct-A-Vis : exploring the free-form visualization processes of children
Funding: UK EPSRC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161 (Project C01). Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises that highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding for the visual mapping process and can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings, we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.
Relaxed forced choice improves performance of visual quality assessment methods
In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side by side. Subjects are asked to select the one of better quality, and when uncertain, they are required to guess. The relaxed forced choice format aims to reduce the cognitive load and the noise in the responses due to guessing by providing a third response option, namely, “not sure”. This work presents a large and comprehensive crowdsourcing experiment to compare these two response formats: the one with the “not sure” option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant was asked to respond to 40 pair comparisons with and without the “not sure” response option and completed a questionnaire to evaluate their cognitive load for each testing condition. The experimental results show that the inclusion of the “not sure” response option in the forced choice method reduced mental load and led to models with better data fit and correspondence to ground truth. We also tested for the equivalence of the two models and found that they were different. The dataset is available at http://database.mmsp-kn.de/cogvqa-database.html
Tris(pyrazolyl)borate-containing complexes of ruthenium and tungsten as covalent labels for biomolecules
In this work, the synthesis and characterization of (pyrazolyl)borate-containing (Tp) complexes of ruthenium and tungsten, as well as their use in peptide bioconjugates, were investigated. To gain access to functionalizable Tp' ligands, the ligand salts -BrCHTpM (R=R'=H, M=Na, K, Rb, Cs, Tl; R=CH, R'=H, M=K, Tl; R=R'=CH, M=K, Tl) and (-BrCHTp)Mg were synthesized. The ruthenocene analogues (LX)Ru(-BrCHTp) (LX=Cp, Tp) and [TpmRu(-BrCHTp)]Cl were prepared starting from ruthenium half-sandwich complexes and functionalized by subsequent introduction of a carboxylic acid or an azide group, respectively. The tungsten-containing building blocks Tp*WI(CO)(-alkyne) (-alkyne = pentynoic acid, propargylglycine) were synthesized starting from Tp*WI(CO). The functionalized ruthenium and tungsten complexes were used in the synthesis of bioconjugates of the peptides enkephalin and -neurotensin.
Measuring Cognitive Load using Eye Tracking Technology in Visual Computing
In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. Then, we discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered as a valuable instrument to analyze cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.
Studying Eye Movements as a Basis for Measuring Cognitive Load
Users' cognitive load while interacting with a system is a valuable metric for evaluations in HCI. We encourage the analysis of eye movements as an unobtrusive and widely available way to measure cognitive load. In this paper, we report initial findings from a user study with 26 participants working on three visual search tasks that represent different levels of difficulty. In addition, we linearly increased the cognitive demand within each task. This allowed us to analyze how individual eye movement measures react to different levels of task difficulty. Our results show how pupil dilation, blink rate, and the number of fixations and saccades per second individually react to changes in cognitive activity. We discuss how these measurements could be combined in future work to allow for a comprehensive investigation of cognitive load in interactive settings.
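As an illustration of how such measures can be derived from raw gaze recordings, the following is a minimal Python sketch. It assumes a gaze stream with timestamps in seconds, gaze positions in degrees of visual angle, and pupil diameter, with blinks appearing as NaN positions; the velocity threshold and function names are placeholder assumptions, not the study's actual processing pipeline.

```python
# Minimal sketch (illustrative, not the study's pipeline): per-second eye
# movement metrics from a raw gaze stream. NaN positions are treated as blinks;
# the saccade velocity threshold is a placeholder value.
import numpy as np

def gaze_metrics(t, x, y, pupil, velocity_threshold=30.0):
    """Return mean pupil diameter, blink rate, and saccade rate (per second)."""
    duration = t[-1] - t[0]
    valid = ~np.isnan(x)

    # Blinks: count runs of invalid samples (valid -> invalid transitions).
    blink_starts = np.sum(np.diff(valid.astype(int)) == -1)
    blink_rate = blink_starts / duration

    # Saccades: samples whose angular velocity exceeds the threshold (I-VT idea).
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    speed = np.nan_to_num(np.hypot(vx, vy))        # NaN speeds count as non-saccadic
    is_saccade = (speed > velocity_threshold) & valid
    saccade_onsets = np.sum(np.diff(is_saccade.astype(int)) == 1)
    saccade_rate = saccade_onsets / duration

    return {
        "mean_pupil": np.nanmean(pupil),
        "blink_rate": blink_rate,
        "saccade_rate": saccade_rate,
    }
```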
Employing Tangible Visualisations in Augmented Reality with Mobile Devices
Recent research has demonstrated the benefits of mixed realities for information visualisation. Often the focus lies on the visualisation itself, leaving interaction opportunities through different modalities largely unexplored. Yet, mixed reality in particular can benefit from a combination of different modalities. This work examines an existing mixed reality visualisation which is combined with a large tabletop for touch interaction. Although this allows for familiar operation, the approach comes with some limitations which we address by employing mobile devices, thus adding tangibility and proxemics as input modalities.
Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace
In recent years, research on cross-device interaction has become a popular topic in HCI, leading to novel interaction techniques that mutually influence evolving theoretical paradigms. Building on previous research, we implemented an individual multi-device work environment for creative activities. In a study with 20 participants, we compared a traditional toolbar-based condition with two conditions providing spatially distributed tools on digital panels and on physical devices. We analyze participants’ interactions with the tools, encountered problems and corresponding solutions, as well as subjective task load and user experience. Our findings show that the spatial distribution of tools indeed offers advantages, but also elicits new problems, some of which can be alleviated by the physical affordances of mobile devices.
Separation, Composition, or Hybrid? : Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality
Augmented Reality (AR) supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: either separate or compose users’ inputs. However, an experimental comparison of the two using handheld AR displays is still missing. We therefore conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i.e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.
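To make the idea of a merge policy concrete, the following is a minimal Python sketch of one possible policy: per-frame translation deltas from both users are summed, and rotation deltas, given as quaternions, are composed by multiplication. The function names and data layout are illustrative assumptions, not necessarily the policy used in the study.

```python
# Minimal sketch (an illustrative merge policy, not necessarily the study's):
# combining two users' simultaneous per-frame manipulation deltas. Translations
# are summed; rotation deltas given as quaternions (w, x, y, z) are composed.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def merge_inputs(delta_a, delta_b):
    """Each delta is a dict with 'translation' (3-vector) and 'rotation' (quaternion)."""
    translation = np.asarray(delta_a["translation"]) + np.asarray(delta_b["translation"])
    rotation = quat_mul(delta_a["rotation"], delta_b["rotation"])
    rotation /= np.linalg.norm(rotation)           # re-normalize to a unit quaternion
    return {"translation": translation, "rotation": rotation}
```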
IDIAR : Augmented Reality Dashboards to Supervise Mobile Intervention Studies
Mobile intervention studies employ mobile devices to observe participants’ behavior change over several weeks. Researchers regularly monitor high-dimensional data streams to ensure data quality and prevent data loss (e.g., missing engagement or malfunctions). The multitude of problem sources hampers possible automated detection of such irregularities – providing a use case for interactive dashboards. With the advent of untethered head-mounted AR devices, these dashboards can be placed anywhere in the user's physical environment, leveraging the available space and allowing for flexible information arrangement and natural navigation. In this work, we present the user-centered design and the evaluation of IDIAR: Interactive Dashboards in AR, combining a head-mounted display with the familiar interaction of a smartphone. A user study with 15 domain experts for mobile intervention studies shows that participants appreciated the multimodal interaction approach. Based on our findings, we provide implications for research and design of interactive dashboards in AR.