33 research outputs found

    A student-facing dashboard for supporting sensemaking about the brainstorm process at a multi-surface space

    Full text link
    © 2017 Association for Computing Machinery. All rights reserved. We developed a student-facing dashboard tuned to support post-hoc sensemaking about participation and group effects in the context of collocated brainstorming. Grounded in foundations of small-group collaboration, open learner modelling and brainstorming at large interactive displays, we designed a set of models from behavioural data that can be visually presented to students. We validated the effectiveness of our dashboard in provoking group reflection by addressing two questions: (1) What do group members gain from studying measures of egalitarian contribution? and (2) What do group members gain from modelling how they sparked ideas off each other? We report on outcomes from a study with higher education students performing brainstorming, presenting evidence from i) descriptive quantitative usage patterns and ii) qualitative experiential descriptions reported by the students. We conclude the paper with a discussion that can inform the community's design of collective reflection systems.
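    The abstract does not specify which measure of egalitarian contribution the dashboard models use; a minimal sketch of one common choice, assuming per-member idea counts as the behavioural data, is the Gini coefficient inverted into an equality score (all names and thresholds here are illustrative, not the paper's):

    ```python
    def gini(counts):
        """Gini coefficient of non-negative contribution counts.

        0 means perfectly equal participation; values near 1 mean one
        member dominates the session.
        """
        counts = sorted(counts)
        n = len(counts)
        total = sum(counts)
        if n == 0 or total == 0:
            return 0.0
        # Standard formulation via the cumulative weighted sum of sorted counts.
        cum = sum((i + 1) * c for i, c in enumerate(counts))
        return (2 * cum) / (n * total) - (n + 1) / n

    def equality_score(counts):
        """Turn inequality into an 'egalitarian contribution' score in [0, 1]."""
        return 1.0 - gini(counts)

    # Example: four group members with idea counts from one brainstorm session.
    print(equality_score([5, 5, 5, 5]))   # perfectly even participation -> 1.0
    print(equality_score([17, 1, 1, 1]))  # one member dominates -> low score
    ```

    A dashboard could then visualise this single score per session, alongside the raw per-member counts that produced it.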

    Applications of Multi-Touch Tabletop Displays and Their Challenging Issues: An Overview

    Full text link

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    Get PDF
    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, techniques for manipulating 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research first enhanced the existing device-based 3D object rotation technique with a control structure that uses the handheld device's tilting and skewing amplitudes to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold amplitudes, the 3D object rotates continuously at a pre-defined angular speed, preventing over-rotation of the handheld device itself. Such over-rotation is common with the existing technique when performing large-range 3D object rotations, and it must be avoided because it causes 3D object registration errors and display issues in which the 3D object no longer appears consistently within the user's field of view. Second, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, preventing the position and orientation deviations caused by integrating both tasks into the same control structure. The result is an improved device-based interaction technique with better task completion times for 3D object rotation specifically and 3D object manipulation overall within handheld mobile AR interfaces.
A pilot test was carried out before the main experiments to determine several pre-defined values used in the control structure of the proposed 3D object rotation technique. A series of 3D object rotation and manipulation tasks was designed as separate experiments to benchmark the proposed rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19-24 were recruited, one per experiment, each consisting of sixteen participants. Each participant completed twelve trials, for a total of 192 trials per experiment. Repeated-measures analysis was used to analyse the data. The results show that the developed 3D object rotation technique significantly outperformed the existing technique, with task completion times 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks when comparing mean times over all successful trials. For the failed trials, the proposed rotation technique was 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results extended to the 3D object manipulation tasks, with an overall 9.529 s significantly shorter task completion time for the proposed manipulation technique compared to the existing one. Based on these findings, an improved device-based interaction technique was successfully developed to address the shortcomings of the current technique.
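The abstract describes a threshold-based control structure: tilting or skewing the device beyond a threshold triggers continuous rotation at a fixed angular speed. A minimal sketch of that idea follows; the threshold and speed constants are placeholders, not the thesis's calibrated values determined in its pilot test:

```python
# Illustrative sketch of a threshold-triggered rotation control structure.
# The device's tilt/skew amplitude only selects the axis and direction;
# the rotation magnitude comes from a fixed angular speed, so the user
# never has to over-rotate the device itself.
TILT_THRESHOLD_DEG = 15.0  # placeholder amplitude threshold
ANGULAR_SPEED_DPS = 45.0   # placeholder object rotation speed, deg/s

def rotation_step(tilt_deg, skew_deg, dt):
    """Return (delta_rx, delta_ry) object rotation for one frame of length dt.

    tilt_deg: forward/backward device tilt; skew_deg: sideways device skew.
    Amplitudes below the threshold produce no rotation at all.
    """
    drx = dry = 0.0
    if abs(tilt_deg) > TILT_THRESHOLD_DEG:
        drx = ANGULAR_SPEED_DPS * dt * (1 if tilt_deg > 0 else -1)
    if abs(skew_deg) > TILT_THRESHOLD_DEG:
        dry = ANGULAR_SPEED_DPS * dt * (1 if skew_deg > 0 else -1)
    return drx, dry

# A 20° forward tilt held for a 1/60 s frame rotates the object 0.75° about x;
# a 5° skew stays below the threshold and produces no y rotation.
print(rotation_step(20.0, 5.0, 1 / 60))
```

Because the per-frame step is constant, holding the tilt produces a steady, predictable rotation rate regardless of how far the device is actually turned.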

    Brainstorming and Mind-Mapping in a Multi-Device Context: Concept and Prototype Implementation for a Multi-Touch Tabletop and Smartphone

    Get PDF
    This thesis describes the concept and prototype implementation of an application for electronically supporting brainstorming and mind-mapping sessions on a multi-touch tabletop combined with smartphones. While the tabletop's large, horizontally oriented surface supports the collaborative creation and structuring of ideas in groups, the smartphones provide an additional input channel and foster individual work. The thesis first explains the motivation for realising a brainstorming and mind-mapping application on a tabletop, as well as the potential of complementing such a system with additional private devices in the sense of multi-device interaction. To establish a suitable theoretical basis, it then gives a detailed overview of the development of tabletop technology in recent years and of the central research and problem areas in designing user interfaces for tabletop systems. Relevant aspects of multi-device interaction are also outlined. This theoretical part is followed by a description of the development of the application concept and the requirements and design goals formulated along the way, which are visualised in a first, paper-based prototype. Building on these conceptual considerations, the concrete technical implementation of the application Multi/Touch/Device MindMapper is presented as a high-level prototype based on the MT4j framework (tabletop) and the Android operating system (smartphone). A discussion of this final prototype and of first practical experiences, together with an outlook on possible extensions of the system and on questions beyond the scope of this work, concludes the thesis.
In conclusion, the requirements defined in the application concept were, with few exceptions, successfully implemented using the chosen frameworks. Moreover, the insights gained from test operation attest to the application's basic practicability, and several starting points for further improvements and tests were identified.

    Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning

    Get PDF
    An important aspect of Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually by placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective, since a few microphones can cover the robot's entire shell: a single microphone suffices for each solid part of the robot. Besides, it is easy to install and configure, as it only requires attaching a contact microphone to the robot's shell and plugging it into the robot's computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and cofunded by Structural Funds of the EU.
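    The abstract describes a two-stage pipeline: detect that a touch occurred from contact-microphone audio, then classify its type. A minimal sketch of that structure is below; the paper's best model was a trained Logistic Model Tree, so the hand-written threshold rules here are only a stand-in for the learned classifier, and all feature names and thresholds are illustrative:

    ```python
    def extract_features(samples):
        """Two simple features from one audio window: energy and zero-crossing rate."""
        n = len(samples)
        energy = sum(s * s for s in samples) / n
        zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / n
        return {"energy": energy, "zcr": zcr}

    def detect_touch(features, energy_threshold=0.01):
        """Stage 1: a touch is present when the window's energy exceeds a threshold."""
        return features["energy"] > energy_threshold

    def classify_touch(features):
        """Stage 2: stand-in rules for the {stroke, tap, slap, tickle} classes."""
        if features["energy"] > 0.3:
            return "slap"       # loud, high-energy impact
        if features["zcr"] > 0.2:
            return "tickle"     # rapid, oscillatory contact
        if features["energy"] > 0.05:
            return "tap"        # short, moderate impact
        return "stroke"         # quiet, sustained contact

    # Example: a loud, impulsive window is detected and labelled as a slap.
    window = [0.9, -0.8, 0.7, -0.6, 0.5, -0.4, 0.3, -0.2]
    f = extract_features(window)
    assert detect_touch(f)
    print(classify_touch(f))  # "slap"
    ```

    In the paper's actual system, the rule-based stage 2 would be replaced by a classifier trained on the 1981 labelled gestures.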

    Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study

    Get PDF
    Textiles are a vital and indispensable part of the clothing we use daily. They are flexible, often lightweight, and have a wide variety of uses. Today, with rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are suitable for in-vehicle controls: they can combine various modalities to enable users to perform simple, natural, and efficient interactions while minimizing any negative effect on their driving. Research on clothing-based wearable in-vehicle interfaces is still underexplored, so there is a lack of understanding of how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. We distilled a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions.

    tCAD: a 3D modeling application on a depth enhanced tabletop computer

    Get PDF
    Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above the tabletop. This dissertation contributes tCAD, a 3D modeling tool combining fiducial marker tracking, finger tracking and depth sensing in a single system. It presents the technical details of how these features were integrated, attesting to their viability through the design, development and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above a table surface.

    Touching 3D data: interactive visualization of cosmological simulations

    Get PDF
    Visualization aims to augment the senses and imagination of scientists so that they can better understand their data. This is an interactive and iterative process in which information representation, interactive exploration, and decision making play a major role. The goal is to use this iterative process to gain insight into the problem and the underlying data until sufficient understanding is reached. In this visual exploration, a high degree of interactivity is essential for reaching that goal efficiently, because it allows the user to try out new ideas, obtain feedback, and steer the exploration accordingly. In this thesis we report on our research into the challenges of natural interaction and the exploration of data represented in three dimensions. We take the visualization of astronomical data as our central example of an application domain centred on three-dimensional point-cloud data from numerical simulations, such as simulations of galactic dynamics or high-dimensional information from particle systems. We introduce two intuitive and efficient interaction techniques for exploring data in three dimensions: the first helps users navigate in three dimensions, and the second lets users easily select a subset of particles. In addition, we integrate these two techniques into a visual-analytics application to help scientists extract useful information and gain insight.