86 research outputs found

    Software support for multitouch interaction: the end-user programming perspective

    Empowering users with tools for developing multitouch interaction is a promising step toward the materialization of ubiquitous computing. This survey frames the state of the art of existing multitouch software development tools from an end-user programming perspective. This research has been partially funded by the EU FP7 project meSch (grant agreement 600851) and the CREAx grant (Spanish Ministry of Economy and Competitiveness, TIN2014-56534-R).

    A design pattern for multimodal and multidevice user interfaces

    In this paper, we introduce the MVIC pattern for creating multidevice and multimodal interfaces. We discuss the advantages of adding a new component to the MVC pattern for interfaces that must adapt to different devices and modalities. The proposed solution is based on an input model defining equivalent and complementary sequences of inputs for the same interaction. In addition, we discuss Djestit, a JavaScript library that allows creating multidevice and multimodal input models for web applications by applying the aforementioned pattern. The library supports the integration of multiple devices (Kinect 2, Leap Motion, touchscreens) and different modalities (gestural, vocal and touch).
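
    As a reading aid only, the following TypeScript sketch illustrates the input-model idea described above: equivalent input sequences, possibly coming from different devices and modalities, trigger the same interaction. All names are invented for this sketch; it is not Djestit's actual API.

```typescript
// Hypothetical input model in the spirit of the abstract above;
// names and structure are illustrative, not Djestit's real API.

type Modality = "touch" | "gesture" | "voice";

interface InputEvent {
  device: string;      // e.g. "kinect2", "leapmotion", "touchscreen"
  modality: Modality;
  symbol: string;      // abstract token, e.g. "swipe-left", "say:next"
}

// A sequence of symbols that must occur in order.
type InputSequence = string[];

interface InteractionModel {
  name: string;
  // Equivalent sequences: any one of them completes the interaction.
  equivalent: InputSequence[];
  onRecognized: () => void;
}

// Very small recognizer: keeps a history of recent symbols and fires
// when any equivalent sequence appears as its suffix.
class Recognizer {
  private history: string[] = [];
  constructor(private model: InteractionModel) {}

  feed(event: InputEvent): void {
    this.history.push(event.symbol);
    for (const seq of this.model.equivalent) {
      const tail = this.history.slice(-seq.length);
      if (seq.length > 0 && tail.join("|") === seq.join("|")) {
        this.model.onRecognized();
        this.history = []; // reset after recognition
        return;
      }
    }
  }
}

// Usage: "next page" can be triggered by a touch swipe or a vocal command.
const nextPage = new Recognizer({
  name: "next-page",
  equivalent: [["swipe-left"], ["say:next"]],
  onRecognized: () => console.log("next page"),
});

nextPage.feed({ device: "touchscreen", modality: "touch", symbol: "swipe-left" });
```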

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. By taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interaction in smart environments, bringing the interaction with computers beyond the screen and back into the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it supports theoretical reflection on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice between four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each approach. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three application domains: interacting with the In-Vehicle Infotainment System (IVIS) of the car, emotional and interpersonal communication, and interaction in a smart home. For the first application domain, four different systems that use gestures on the steering wheel as the interaction means with the IVIS have been designed, developed and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication has been conceived and developed; a second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smartwatch for recognizing gestures performed with objects held in the hand in the context of the smart home has been investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
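
    As an illustration of the move/hold/touch taxonomy mentioned above, here is a small TypeScript sketch of how a tangible gesture could be encoded as data; the field names are assumptions made for this sketch, not the framework's own vocabulary.

```typescript
// Illustrative encoding of the move/hold/touch taxonomy; all field names
// are assumptions for this sketch, not the framework's terminology.

interface TangibleGesture {
  object: string;                 // the physical object involved, e.g. "steering wheel"
  components: {
    move: boolean;                // the object is moved
    hold: boolean;                // the object is held
    touch: boolean;               // the object is touched
  };
  attributes?: string[];          // additional physical attributes of the gesture
  meaning: string;                // the semantic construct associated with the gesture
}

// Example: tapping the steering wheel rim to skip a track on the IVIS.
const skipTrack: TangibleGesture = {
  object: "steering wheel",
  components: { move: false, hold: true, touch: true },
  attributes: ["tap on rim"],
  meaning: "next track",
};
```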

    Recognize multi-touch gestures by graph modeling and matching

    Extracting features from a multi-touch gesture is difficult due to the complex temporal and motion relations between multiple trajectories. In this paper we present a new generic graph model to quantify the shape, temporal and motion information of a multi-touch gesture. To compare graphs, we also propose a specific graph matching method based on graph edit distance. Results show that our graph model can be fruitfully used for multi-touch gesture pattern recognition with a graph-embedding and SVM classifier.
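
    The paper's graph construction and matching are not reproduced here; the TypeScript sketch below only illustrates the general idea of turning multi-touch trajectories into a labelled graph (shape, temporal and motion features) and comparing two graphs with a crude edit-distance-style cost. The features and costs are assumptions chosen for brevity.

```typescript
// Sketch only: one node per touch trajectory, labelled with simple shape and
// motion features; edges carry temporal overlap. The matching cost below is a
// naive stand-in for the paper's graph edit distance.

interface Point { x: number; y: number; t: number; }

interface TrajectoryNode {
  id: number;
  length: number;        // total path length of the trajectory (shape feature)
  direction: number;     // overall direction in radians (motion feature)
  start: number;         // first timestamp
  end: number;           // last timestamp
}

interface Edge { a: number; b: number; overlap: number; } // temporal overlap

interface GestureGraph { nodes: TrajectoryNode[]; edges: Edge[]; }

function buildGraph(trajectories: Point[][]): GestureGraph {
  const nodes = trajectories.map((traj, id) => {
    let length = 0;
    for (let i = 1; i < traj.length; i++) {
      length += Math.hypot(traj[i].x - traj[i - 1].x, traj[i].y - traj[i - 1].y);
    }
    const first = traj[0], last = traj[traj.length - 1];
    return {
      id,
      length,
      direction: Math.atan2(last.y - first.y, last.x - first.x),
      start: first.t,
      end: last.t,
    };
  });
  const edges: Edge[] = [];
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const overlap =
        Math.min(nodes[i].end, nodes[j].end) - Math.max(nodes[i].start, nodes[j].start);
      if (overlap > 0) edges.push({ a: i, b: j, overlap });
    }
  }
  return { nodes, edges };
}

// Naive dissimilarity: compare nodes pairwise in order and charge for size mismatch.
function graphDistance(g1: GestureGraph, g2: GestureGraph): number {
  const n = Math.min(g1.nodes.length, g2.nodes.length);
  let cost = Math.abs(g1.nodes.length - g2.nodes.length); // insert/delete cost
  for (let i = 0; i < n; i++) {
    cost += Math.abs(g1.nodes[i].length - g2.nodes[i].length) / 100
          + Math.abs(g1.nodes[i].direction - g2.nodes[i].direction);
  }
  return cost;
}
```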

    A Model-Based Approach for Gesture Interfaces

    The description of a gesture requires temporal analysis of values generated by input sensors, and it does not fit well with the observer pattern traditionally used by frameworks to handle the user’s input. The current solution is to embed particular gesture-based interactions into frameworks, notifying only when a gesture has been completely detected. This approach suffers from a lack of flexibility, unless the programmer performs explicit temporal analysis of raw sensor data. This thesis proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by the composition of other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity in event notification. The meta-model can be instantiated for different gesture recognition supports and its definition has been validated through a proof-of-concept library. Sample applications have been developed for supporting multi-touch gestures in iOS and full-body gestures with Microsoft Kinect. In addition to the solution for the event granularity problem, this thesis discusses how to separate the definition of the gesture from the user interface behaviour using the proposed compositional approach. The gesture description meta-model has been integrated into MARIA, a model-based user interface description language, extending it with the description of full-body gesture interfaces.
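
    To make the compositional idea concrete, here is a minimal TypeScript sketch in which basic traits are composed with a sequence operator and callbacks can be attached at sub-gesture granularity. The thesis realises this with Petri Nets; the sketch only mimics the notification behaviour with plain callbacks and invented names.

```typescript
// Illustrative composition of gesture "traits"; the real meta-model uses
// Petri Nets, this sketch only mimics the notification granularity.

type FeatureEvent = { feature: string; value: number };

interface GestureNode {
  // Returns true when this (sub-)gesture completes on the given event.
  feed(e: FeatureEvent): boolean;
  onComplete(cb: () => void): GestureNode;
}

// Basic trait: completes when a named feature changes (crosses a threshold).
function trait(feature: string, threshold = 0): GestureNode {
  const cbs: (() => void)[] = [];
  return {
    feed(e) {
      const done = e.feature === feature && e.value > threshold;
      if (done) cbs.forEach(cb => cb());
      return done;
    },
    onComplete(cb) { cbs.push(cb); return this; },
  };
}

// Sequence operator: the second sub-gesture is only considered after the first.
function sequence(first: GestureNode, second: GestureNode): GestureNode {
  let firstDone = false;
  const cbs: (() => void)[] = [];
  return {
    feed(e) {
      if (!firstDone) { firstDone = first.feed(e); return false; }
      const done = second.feed(e);
      if (done) { cbs.forEach(cb => cb()); firstDone = false; }
      return done;
    },
    onComplete(cb) { cbs.push(cb); return this; },
  };
}

// UI behaviour can be attached to the whole gesture or to a sub-gesture.
const press = trait("touch-down").onComplete(() => console.log("highlight"));
const release = trait("touch-up");
const tap = sequence(press, release).onComplete(() => console.log("activate"));

tap.feed({ feature: "touch-down", value: 1 });
tap.feed({ feature: "touch-up", value: 1 });
```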

    Multi-touch interaction for interface prototyping

    Integrated master’s dissertation. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Formalisierung gestischer Eingabe fĂŒr Multitouch-Systeme

    Thanks to new input options beyond keyboard and mouse, human-computer interaction is becoming richer, more versatile and more intuitive. However, dispensing with additional devices when working with computers comes at the cost of increased complexity in input processing: in current frameworks, programming gestural input for multitouch systems involves considerable effort beyond the standard gestures they provide. The gesture formalization for multitouch developed here (GeForMT) defines a domain-specific language for describing multitouch gestures. Instead of defining detailed filters on the raw data, as related formalization approaches do, GeForMT describes gestures with a pictorial approach. Gesture design is supported, for example, by uncovering conflicts between similar gestures at an early stage of development. The formalized gestures can be embedded directly in code, which simplifies programming. The underlying framework takes care of the connection to the gesture recognition algorithms. Finally, the transfer of the semiotic formalization approach to other forms of gestural input is discussed.
    Contents:
    1 Introduction: motivation; objectives and scope; structure of the thesis
    2 Interdisciplinary foundations: semiotics (terms and sign classes, linguistics, graphic semiology, form design and product language, interface design); gesture research (Kendon's continuum for gestures, taxonomies, classification); gestural input in human-computer interaction (historical development of input and output technologies, tangible interaction, domain-specific modeling); summary
    3 Related formalization approaches: spatial gestures (XML description with the Behaviour Markup Language, detector nets in multimodal environments, gesture vectors for video annotation, comparison); gestures in sketching (gesture functions for proofreading marks, Sketch Language for describing sketches, domain-specific sketches with LADDER, comparison); surface gestures (rule-based definition with Midas, Gesture Definition Language as a description language, regular expressions in Proton, Gesture Interface Specification Language, logical formulas with Framous, Gesture Definition Markup Language, comparison); summary
    4 Semiotic model for formalization: phases, syntax, semantics and pragmatics of gestural input; summary
    5 Gesture formalization for multitouch: starting point for the design (iconographic classification of surface gestures, preliminary study on programming surface gestures, requirements catalogue); semiotic analysis of surface gestures (syntax, semantics, pragmatics); precedents for the formalization (dexterity in multitouch interaction, precision of surface gestures, cooperation in multitouch applications); evaluation and discussion (comparison of character counts, evaluation of descriptive capability, limitations and extensions)
    6 Reference architecture: analysis of existing multitouch frameworks; basic architecture components (parser, data model, gesture recognition and matching, programming interface); reference implementation for JavaScript (library components, practical use, gesture editor for pictorial programming)
    7 Practical examples: analysis of prototype applications (workshop on creative destruction, workshop on semantic dimensions, comparison); mapping mouse interaction to surface gestures in DelViz (data basis and search concept, Silverlight implementation of GeForMT); surface gestures in the 3D framework Bildsprache LiveLab (component architecture, implementation of GeForMT in C++); statistics and summary
    8 Extending the formalization: spatial gestures (related work, prototype setup, formalization approach); everyday substances (related work, experiments with the Explore Table, formalization approach); elastic surfaces (related work, the DepthTouch prototype, formalization approach)
    9 Summary: chapter summaries and contributions; discussion and assessment; outlook and future work
    Appendix: comparison material for the formalization approaches; follow-up questionnaire; schedule of the student workshops; grammar definitions; statistical evaluation of the gesture sets; bibliography; web references; own publications; supervised student theses; list of figures; list of tables; list of code examples
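
    The actual GeForMT notation is defined in the thesis; the TypeScript snippet below is only a schematic sketch of the embedding idea, registering a textual gesture description together with a handler. The expression string is an invented placeholder, not real GeForMT syntax.

```typescript
// Schematic only: the gesture expression below is an invented placeholder,
// not actual GeForMT notation; it merely shows how a textual gesture
// description could be registered directly in application code.

interface GestureBinding {
  expression: string;                 // textual gesture description
  handler: (target: Element) => void; // UI behaviour on recognition
}

const bindings: GestureBinding[] = [];

function registerGesture(expression: string, handler: (target: Element) => void): void {
  // A real implementation would parse the expression and wire it to the
  // gesture recognition back end; here the binding is only stored.
  bindings.push({ expression, handler });
}

// Example registration: a hypothetical "two-finger pinch on an image" gesture.
registerGesture("PINCH(2F) ON img", el => el.classList.toggle("zoomed"));
```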

    Extending domain-specific modeling editors with multi-touch interactions

    Model-driven engineering (MDE) is a software engineering methodology that enables engineers to define conceptual models for a specific domain. Modeling is supported by modeling language workbenches, which act as editors to create and manipulate domain-specific models. However, the current state of practice of these modeling editors offers very limited user interactions, often restricted to drag-and-drop with mouse movements and keystrokes. Recently, a novel framework was proposed to explicitly specify the user interactions of modeling editors. In this thesis, we extend this framework to support multi-touch interactions when modeling. We propose an initial catalog of multi-touch gestures offering a variety of useful touch gestures. We demonstrate how our approach is applicable for generating modeling editors. Our approach yields more natural user interactions when performing typical modeling tasks.
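
    As an illustration of the kind of gesture-to-command mapping such an editor needs, here is a short TypeScript sketch with assumed gesture names and editor commands; the thesis' actual catalogue and generation framework are not shown here.

```typescript
// Hypothetical mapping from multi-touch gestures to typical modeling tasks;
// gesture names and editor commands are assumptions for this sketch.

type Gesture = "tap" | "double-tap" | "two-finger-pinch" | "two-finger-rotate" | "long-press";

interface ModelElement { id: string; }

interface EditorCommand {
  run(selection: ModelElement[]): void;
}

const gestureCatalog = new Map<Gesture, EditorCommand>([
  ["double-tap",       { run: sel => console.log("open properties of", sel[0]?.id) }],
  ["two-finger-pinch", { run: () => console.log("zoom diagram") }],
  ["long-press",       { run: sel => console.log("start edge creation from", sel[0]?.id) }],
]);

function onGesture(g: Gesture, selection: ModelElement[]): void {
  gestureCatalog.get(g)?.run(selection);
}

onGesture("double-tap", [{ id: "ClassA" }]);
```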

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: ‘What must be considered when designing multi-touch software for use on different shaped displays?’ The results of two structured literature surveys highlighted the lack of support for multi-touch software that utilises more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes without leaving visual content items unfit for purpose. The majority of the results conformed to this hypothesis, despite minor deviations from the solution designs discovered in the implementation. This work highlights how developers producing multi-touch software intended for more than one display shape must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions to this issue that conform to the criteria outlined in this research. This research shows that it is possible for multi-touch software to be made display-shape independent.
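
    As an illustration of the occlusion issue named above, here is a minimal TypeScript sketch that checks whether a rectangular content item falls partly outside a circular display and nudges it back inside; the geometry and the correction strategy are assumptions made for this sketch, not the thesis' criteria.

```typescript
// Minimal sketch: keep a rectangular content item fully inside a circular
// display so it is not clipped by the non-rectangular screen edge.

interface Rect { x: number; y: number; w: number; h: number; } // top-left origin
interface CircleDisplay { cx: number; cy: number; r: number; }

function isInside(item: Rect, d: CircleDisplay): boolean {
  // All four corners must lie within the circle.
  const corners = [
    [item.x, item.y], [item.x + item.w, item.y],
    [item.x, item.y + item.h], [item.x + item.w, item.y + item.h],
  ];
  return corners.every(([px, py]) => Math.hypot(px - d.cx, py - d.cy) <= d.r);
}

function nudgeInside(item: Rect, d: CircleDisplay): Rect {
  // Naive correction: pull the item's centre toward the display centre
  // until it fits (good enough for a sketch, not an optimal layout).
  const moved = { ...item };
  for (let i = 0; i < 100 && !isInside(moved, d); i++) {
    moved.x += (d.cx - (moved.x + moved.w / 2)) * 0.1;
    moved.y += (d.cy - (moved.y + moved.h / 2)) * 0.1;
  }
  return moved;
}

const display = { cx: 400, cy: 400, r: 400 };
console.log(nudgeInside({ x: 700, y: 700, w: 120, h: 80 }, display));
```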