10 research outputs found

    Defining CARE Properties Through Temporal Input Models

    Get PDF
    In this paper we show how the CARE properties (complementarity, assignment, redundancy, equivalence) can be represented by modelling the temporal relationships among inputs provided through different modalities. For this purpose we extended GestIT, which provides a declarative and compositional model for gestures, in order to support other modalities. The generic models for the CARE properties can be used for input model design, and also for analysing the relationships between the different modalities included in an existing input model.
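    The temporal flavour of the CARE properties can be sketched in a few lines. This is a minimal illustration, not GestIT's model: the event type, the `care_relation` helper, and the one-second fusion window are all hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # e.g. "voice", "gesture"
    command: str    # interpreted meaning, e.g. "zoom"
    t: float        # timestamp in seconds

def care_relation(a, b, window=1.0):
    """Classify the relation between two input events (hypothetical
    simplification): redundancy = same command on different modalities
    within the time window; complementarity = different partial inputs
    fused within the window; assignment = a single modality carries the
    task; equivalence is approximated as 'either event alone suffices'."""
    close_in_time = abs(a.t - b.t) <= window
    if a.modality == b.modality:
        return "assignment"
    if a.command == b.command and close_in_time:
        return "redundancy"
    if close_in_time:
        return "complementarity"
    return "equivalence"

print(care_relation(InputEvent("voice", "zoom", 0.2),
                    InputEvent("gesture", "zoom", 0.5)))
```

    A real input model would of course fuse more than two events and attach these relations to the dialogue structure; the point is only that each CARE property reduces to a constraint over modalities and timestamps.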

    GestUI: A Model-driven Method and Tool for Including Gesture-based Interaction in User Interfaces

    Get PDF
    [EN] Among the technological advances in touch-based devices, gesture-based interaction have become a prevalent feature in many application domains. Information systems are starting to explore this type of interaction. As a result, gesture specifications are now being hard-coded by developers at the source code level that hinders their reusability and portability. Similarly, defining new gestures that reflect user requirements is a complex process. This paper describes a model-driven approach to include gesture-based interaction in desktop information systems. It incorporates a tool prototype that captures user-sketched multi-stroke gestures and transforms them into a model by automatically generating the gesture catalogue for gesture-based interaction technologies and gesture-based user interface source codes. We demonstrated our approach in several applications ranging from case tools to form-based information systems.This work was supported by SENESCYT and Universidad de Cuenca from Ecuador, and received financial support from Generalitat Valenciana under Project IDEO (PROMETEOII/2014/039).Parra-González, LO.; España Cubillo, S.; Pastor López, O. (2016). GestUI: A Model-driven Method and Tool for Including Gesture-based Interaction in User Interfaces. Complex Systems Informatics and Modeling Quarterly. 6:73-92. https://doi.org/10.7250/csimq.2016-6.05S7392

    Polyphony: Programming Interfaces and Interactions with the Entity-Component-System Model

    Get PDF
    This paper introduces a new Graphical User Interface (GUI) and interaction framework based on the Entity-Component-System model (ECS). In this model, interactive elements (Entities) are characterized only by their data (Components). Behaviors are managed by continuously running processes (Systems), which select entities by the Components they possess. This model facilitates the handling of behaviors and promotes their reuse. It provides developers with a simple yet powerful composition pattern for building new interactive elements with Components. It materializes interaction devices as Entities and interaction techniques as sequences of Systems operating on them. We present Polyphony, an experimental toolkit implementing this approach, and discuss our interpretation of the ECS model in the context of GUI programming.
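    The ECS pattern the abstract describes can be sketched compactly. This is a generic illustration, not Polyphony's API: entities are plain ids, components are dicts, and a system is a function that selects entities by the components they possess.

```python
# Minimal Entity-Component-System sketch (illustrative, not Polyphony's API).
entities = {}          # entity id -> {component_name: data}
next_id = 0

def create_entity(**components):
    global next_id
    next_id += 1
    entities[next_id] = dict(components)
    return next_id

def query(*required):
    """Yield (id, components) for entities owning all required components."""
    for eid, comps in entities.items():
        if all(name in comps for name in required):
            yield eid, comps

def move_system(dt):
    """A continuously running System: moves every entity that has both
    a 'position' and a 'velocity' component; others are ignored."""
    for _, c in query("position", "velocity"):
        c["position"][0] += c["velocity"][0] * dt
        c["position"][1] += c["velocity"][1] * dt

cursor = create_entity(position=[0.0, 0.0], velocity=[10.0, 5.0])
label = create_entity(position=[3.0, 4.0])   # no velocity: untouched
move_system(dt=0.1)
print(entities[cursor]["position"])  # [1.0, 0.5]
```

    Adding a behavior to an element amounts to attaching a component; no subclassing is involved, which is the reuse benefit the paper claims for GUI elements.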

    A review of temporal aspects of hand gesture analysis applied to discourse analysis and natural conversation

    Get PDF
    Lately, there has been an increasing interest in hand gesture analysis systems. Recent works have employed pattern recognition techniques and have focused on the development of systems with more natural user interfaces. These systems may use gestures to control interfaces or recognize sign language gestures, which can provide systems with multimodal interaction; or they consist in multimodal tools that help psycholinguists understand new aspects of discourse analysis and automate laborious tasks. Gestures are characterized by several aspects, mainly by movements and sequences of postures. Since data referring to movements or sequences carry temporal information, this paper presents a literature review of temporal aspects of hand gesture analysis, focusing on applications related to natural conversation and psycholinguistic analysis, using the Systematic Literature Review methodology. In our results, we organized works according to type of analysis, methods (highlighting the use of Machine Learning techniques), and applications. FAPESP 2011/04608-

    A Framework for Temporal Analysis of Sensor Data in Gesture Recognition

    Get PDF
    This work presents a framework for analysing measurements from sensors over a sliding time window. It allows the user to integrate the events already recognized by the sensor, by defining and creating new events related to the properties of the time series coming from the sensor.
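    The sliding-window idea can be sketched as follows. This is a hypothetical minimal version, not the framework's actual API: a buffer keeps the last second of samples, and a derived event fires when a property of the buffered time series (here, the mean) holds.

```python
from collections import deque

class SlidingWindow:
    """Keep (timestamp, value) sensor samples from the last `span` seconds
    and expose properties of the buffered time series (names hypothetical)."""
    def __init__(self, span):
        self.span = span
        self.samples = deque()

    def push(self, t, value):
        self.samples.append((t, value))
        # drop samples that have fallen out of the window
        while self.samples and t - self.samples[0][0] > self.span:
            self.samples.popleft()

    def mean(self):
        return sum(v for _, v in self.samples) / len(self.samples)

# Derive a new "raised" event whenever the 1-second average of the raw
# sensor value exceeds a threshold.
win = SlidingWindow(span=1.0)
events = []
for t, v in [(0.0, 0.1), (0.4, 0.2), (0.9, 0.9), (1.2, 1.0), (1.6, 1.1)]:
    win.push(t, v)
    if win.mean() > 0.8:
        events.append(("raised", t))
print(events)
```

    Events already emitted by the sensor could be pushed into the same window alongside raw values, which is the integration the abstract refers to.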

    A Model-Based Approach for Gesture Interfaces

    Get PDF
    The description of a gesture requires temporal analysis of the values generated by input sensors, and it does not fit well the observer pattern traditionally used by frameworks to handle the user's input. The current solution is to embed particular gesture-based interactions into frameworks by notifying when a gesture is detected completely. This approach suffers from a lack of flexibility, unless the programmer performs explicit temporal analysis of raw sensor data. This thesis proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by the composition of other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity in the notification of events. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed for supporting multi-touch gestures on iOS and full-body gestures with Microsoft Kinect. In addition to the solution for the event granularity problem, this thesis discusses how to separate the definition of the gesture from the user interface behaviour using the proposed compositional approach. The gesture description meta-model has been integrated into MARIA, a model-based user interface description language, extending it with the description of full-body gesture interfaces.
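    The compositional idea — basic traits as building blocks, operators combining sub-gestures, and handlers attachable at any level of the composition — can be illustrated with a toy sequence operator. This sketch omits the Petri-Net machinery entirely; the class names and event strings are invented for illustration.

```python
class Trait:
    """Building block: matches one feature-change event, e.g. 'touch.down'."""
    def __init__(self, name):
        self.name = name
        self.handlers = []

    def on(self, fn):
        self.handlers.append(fn)
        return self

    def match(self, events, i):
        """Return the next index if events[i] matches, else None."""
        if i < len(events) and events[i] == self.name:
            for fn in self.handlers:
                fn(self.name)
            return i + 1
        return None

class Seq(Trait):
    """Sequence operator: sub-gestures must occur one after the other.
    Handlers fire only once every part has matched."""
    def __init__(self, *parts):
        super().__init__("seq")
        self.parts = parts

    def match(self, events, i):
        for p in self.parts:
            i = p.match(events, i)
            if i is None:
                return None
        for fn in self.handlers:
            fn(self.name)
        return i

log = []
# Behaviour attached both to a sub-component (the touch-down trait)
# and to the whole composed gesture — the granularity point.
tap = Seq(Trait("touch.down").on(lambda n: log.append(n)),
          Trait("touch.up"))
tap.on(lambda n: log.append("tap recognised"))
tap.match(["touch.down", "touch.up"], 0)
print(log)  # ['touch.down', 'tap recognised']
```

    Further operators (parallel, choice, iteration) would follow the same shape: each consumes events and reports progress, so feedback can be bound to partial recognition rather than only to the completed gesture.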

    Formalisierung gestischer Eingabe für Multitouch-Systeme (Formalization of Gestural Input for Multitouch Systems)

    Get PDF
    Thanks to new input modalities beyond keyboard and mouse, human-computer interaction is becoming richer, more versatile, and more intuitive. Dispensing with additional devices when interacting with computers, however, entails increased complexity on the input-processing side: in current frameworks, programming gestural input for multitouch systems involves considerable effort beyond the available standard gestures. The Gesture Formalization for Multitouch (GeForMT) developed here defines a domain-specific language for describing multitouch gestures. Instead of defining detailed filters over the raw data, as related formalization approaches do, GeForMT takes a pictorial approach to describing gestures. Gesture design is supported by, for example, uncovering conflicts between similar gestures at an early stage of development. The formalized gestures can be embedded directly in code, which simplifies programming. The underlying framework provides the connection to the gesture recognition algorithms. Finally, the transfer of the semiotic formalization approach to other forms of gestural input is discussed.
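    One benefit claimed for a textual gesture DSL is early conflict detection between similar gestures. The mini-notation below is invented for illustration and is not GeForMT's actual grammar: a gesture is a finger count plus a sequence of atomic strokes, and two gestures conflict when one description is a prefix of the other, so a recognizer could not tell them apart while the shorter one is still in progress.

```python
def parse(desc):
    """Split a hypothetical description like '1F:LINE_EAST;LINE_SOUTH'
    into (finger spec, list of strokes)."""
    head, strokes = desc.split(":")
    return head, strokes.split(";")

def conflicts(a, b):
    """True if the two gesture descriptions are ambiguous: same finger
    count and one stroke sequence is a prefix of the other."""
    fa, sa = parse(a)
    fb, sb = parse(b)
    if fa != fb:
        return False
    shorter, longer = sorted((sa, sb), key=len)
    return longer[:len(shorter)] == shorter

print(conflicts("1F:LINE_EAST", "1F:LINE_EAST;LINE_SOUTH"))  # True
print(conflicts("1F:LINE_EAST", "2F:LINE_EAST"))             # False
```

    Running such a check over a whole gesture set at design time is the kind of early feedback the abstract describes.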

    Self-adaptive structure semi-supervised methods for streamed emblematic gestures

    Get PDF
    Although many researchers try to improve the level of machine intelligence, there is still a long way to go before achieving intelligence similar to what humans have. Scientists and engineers are continuously trying to increase the smartness of modern technology, e.g. smartphones and robots. Humans communicate with each other using voice and gestures; gestures are thus essential for transferring information to a partner. To reach a higher level of intelligence, the machine should learn from and react to human gestures, which means learning from continuously streamed gestures. This task faces serious challenges, since processing streamed data suffers from several problems. Besides being unlabelled, the stream is long, and "concept drift" and "concept evolution" are its main problems. The data in data streams have several other problems worth mentioning here: they are dynamically changing, presented only once, arriving at high speed, and non-linearly distributed. In addition to the general problems of data streams, gestures pose additional ones; for example, different techniques are required to handle the varieties of gesture types. The available methods solve some of these problems individually, while we present a technique that solves them altogether. Unlabelled data may carry additional information that describes the labelled data more precisely; hence, semi-supervised learning is used to handle labelled and unlabelled data together. However, the data size increases continuously, which makes training classifiers very hard. We therefore integrate incremental learning with semi-supervised learning, which enables the model to update itself on new data without needing the old data. Additionally, we integrate incremental class learning within the semi-supervised learning, since there is a high probability of new concepts arriving in the streamed gestures.
    Moreover, the system should be able to distinguish among different concepts and to identify random movements. Hence, we integrate novelty detection to distinguish between gestures that belong to known concepts and those that belong to unknown ones. Extreme value theory is used for this purpose; it obviates the need for additional labelled data to set the novelty threshold and has several other supportive features. Clustering algorithms are used to distinguish among different new concepts and to identify random movements. Furthermore, the system should update itself only on trusty assignments, since updating the classifier on wrongly assigned gestures degrades its performance; we therefore propose confidence measures for the assigned labels. We propose six types of semi-supervised algorithms that rely on different techniques to handle different types of gestures. The proposed classifiers are based on the Parzen window classifier, the support vector machine classifier, a neural network (extreme learning machine), the polynomial classifier, the Mahalanobis classifier, and the nearest class mean classifier. All of these classifiers are provided with the mentioned features. Additionally, we propose a wrapper method that uses one of the proposed classifiers, or an ensemble of them, to autonomously issue labels for new concepts and update the classifiers on newly incoming information, depending on whether it belongs to known or new classes. It can recognise the different novel concepts and also identify random movements. To evaluate the system, we acquired gesture data with nine different gesture classes, each representing a different command to the machine, e.g. come, go, etc. The data were collected using the Microsoft Kinect sensor and contain 2878 gestures performed by ten volunteers. Different sets of features are computed and used in the evaluation of the system.
    Additionally, we used real data, synthetic data, and public data to support the evaluation process. All the features, incremental learning, incremental class learning, and novelty detection are evaluated individually. The outputs of the classifiers are compared with the original classifier or with benchmark classifiers. The results show high performance for the proposed algorithms.
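    Two of the ingredients above — incremental updates that never revisit old data, and confidence-gated use of pseudo-labels — can be sketched with a nearest class mean classifier, one of the base classifiers the abstract lists. This is a generic illustration, not the thesis' algorithm: the margin-based confidence and the 0.7 threshold are invented for the example.

```python
import numpy as np

class IncrementalNCM:
    """Nearest-class-mean classifier with incremental mean updates."""
    def __init__(self):
        self.means = {}    # label -> running mean vector
        self.counts = {}   # label -> number of samples absorbed

    def partial_fit(self, x, label):
        """Absorb one labelled sample; old samples are never revisited."""
        x = np.asarray(x, dtype=float)
        if label not in self.means:
            self.means[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, x):
        """Return (label, confidence). Confidence is a hypothetical margin
        between the two nearest class means, in [0, 1]."""
        x = np.asarray(x, dtype=float)
        d = sorted((np.linalg.norm(x - m), lbl)
                   for lbl, m in self.means.items())
        if len(d) == 1:
            return d[0][1], 1.0
        conf = 1.0 - d[0][0] / (d[0][0] + d[1][0] + 1e-12)
        return d[0][1], conf

clf = IncrementalNCM()
clf.partial_fit([0.0, 0.0], "come")
clf.partial_fit([5.0, 5.0], "go")
label, conf = clf.predict([0.5, 0.2])   # unlabelled stream sample
if conf > 0.7:                          # update only on trusty assignments
    clf.partial_fit([0.5, 0.2], label)
print(label, round(conf, 2))
```

    A low-confidence sample would be held back (or routed to novelty detection) instead of being used for the update, which is the role of the proposed confidence measures.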

    A compositional model for gesture definition

    No full text
    The description of a gesture requires temporal analysis of values generated by input sensors and does not fit well the observer pattern traditionally used by frameworks to handle user input. The current solution is to embed particular gesture-based interactions, such as pinch-to-zoom, into frameworks by notifying when a whole gesture is detected. This approach suffers from a lack of flexibility unless the programmer performs explicit temporal analysis of raw sensor data. This paper proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by the composition of other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity in the notification of events. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed for supporting multitouch gestures on iOS and full-body gestures with Microsoft Kinect.