    Touch or Touchless? Evaluating Usability of Interactive Displays for Persons with Autistic Spectrum Disorders

    Interactive public displays have been studied as a means of engaging interaction in several previous works. In this context, applications have focused on supporting learning or entertainment activities specifically designed for people with special needs, including, for example, those with Autism Spectrum Disorders (ASD). In this paper, we present a comparison study aimed at understanding the differences in usability, effectiveness, and enjoyment perceived by users with ASD between two interaction modalities usually supported by interactive displays: touch-based and touchless gestural interaction. We present the outcomes of a within-subject study involving 8 users with ASD (aged 18-25, IQ 40-60), based on two similar user interfaces differing only in interaction modality. We show that touch interaction provides a higher usability level and results in more effective actions, whereas touchless interaction scores higher in terms of enjoyment and engagement.

    User Interfaces based on Touchless Hand Gestures for the Classroom: A survey

    (Received: 2014/10/31 - Accepted: 2014/12/15) The proliferation of new devices that detect human movements has produced an increase in the use of interfaces based on touchless hand gestures. These kinds of applications may also be used in classrooms. Although many studies have been carried out, most of them do not focus on classrooms. Therefore, this paper presents a bibliographic review of related studies with the aim of organizing them and relating them to interface design for this type of scenario. The review discusses some related applications, how the gestures performed by users are recognized, design aspects to consider, and some evaluation methods for this interaction style. Thus, this work may serve as a reference guide for both researchers and software designers who develop and use such applications in classrooms.

    Body gestures recognition for human robot interaction

    In this project, a solution for human gesture classification is proposed. The solution uses a Deep Learning model and is meant to be useful for non-verbal communication between humans and robots. The state of the art is reviewed in an effort to achieve a model ready to work with natural gestures without restrictions. The research focuses on the creation of a temPoral bOdy geSTUre REcognition model (POSTURE) that can recognise continuous gestures performed in real-life situations. The suggested model takes into account spatial and temporal components so as to achieve the recognition of more natural and intuitive gestures. In a first step, a framework extracts from all the images the landmarks corresponding to each of the body joints. Next, data filtering techniques are applied to mitigate problems related to the data. Afterwards, the filtered data is fed into a state-of-the-art neural network. Finally, different neural network configurations and approaches are tested to find the optimal performance. The outcome shows that the research is on the right track and that, despite the dataset problems encountered, even better results can be achieved. (Sustainable Development Goals: 9 - Industry, Innovation and Infrastructure)
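
    The abstract outlines a pipeline of per-joint landmark extraction, data filtering, and temporal classification. The following is a minimal, illustrative sketch of that kind of pipeline, assuming MediaPipe-style pose landmarks and a small LSTM classifier in PyTorch; the names, tensor shapes, and moving-average filter are assumptions, not the authors' POSTURE implementation.

        # Illustrative sketch only: landmark-based temporal gesture classification.
        # Assumes per-frame pose landmarks (e.g. 33 joints x (x, y, z)) are already
        # extracted by a tracker such as MediaPipe Pose; all names are hypothetical.
        import torch
        import torch.nn as nn

        NUM_JOINTS, COORDS, NUM_GESTURES = 33, 3, 8  # assumed sizes

        def smooth_landmarks(seq: torch.Tensor, window: int = 5) -> torch.Tensor:
            """Moving-average filter over time to reduce tracker jitter.
            seq: (frames, NUM_JOINTS * COORDS)"""
            kernel = torch.ones(1, 1, window) / window
            x = seq.t().unsqueeze(1)                            # (features, 1, frames)
            x = nn.functional.conv1d(x, kernel, padding=window // 2)
            return x.squeeze(1).t()                             # (frames, features)

        class GestureLSTM(nn.Module):
            """Recurrent classifier over a sequence of per-frame landmark vectors."""
            def __init__(self, hidden: int = 128):
                super().__init__()
                self.lstm = nn.LSTM(NUM_JOINTS * COORDS, hidden, batch_first=True)
                self.head = nn.Linear(hidden, NUM_GESTURES)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, frames, NUM_JOINTS * COORDS) -> gesture logits
                _, (h_n, _) = self.lstm(x)
                return self.head(h_n[-1])

        # Usage sketch: one 60-frame clip of landmarks -> gesture logits.
        clip = smooth_landmarks(torch.randn(60, NUM_JOINTS * COORDS))
        logits = GestureLSTM()(clip.unsqueeze(0))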

    Enhancing touchless interaction with the Leap Motion using a haptic glove


    Designing Touchless Gestural Interfaces for Public Displays

    In the last decade, many authors have investigated and studied touchless and gestural interactions as a novel tool for interacting with computers. Moreover, technological innovations have allowed interactive displays to be installed in private and public places. However, interactivity is usually implemented through touchscreens, whereas technologies able to recognize body gestures are adopted far more rarely, especially in integration with commercial public displays. Nowadays, the opportunity to investigate touchless interfaces for such systems has become concrete and is being studied by many researchers. Indeed, this interaction modality offers the possibility to overcome several issues that cannot be solved by touch-based solutions, e.g. keeping a high level of hygiene on the screen surface, as well as providing large displays with interactive capabilities. The main goal of this thesis is to describe the design process for implementing touchless gestural interfaces for public displays. This implies the need to overcome several typical issues of both public displays (e.g. interaction blindness, immediate usability) and touchless interfaces (e.g. communicating touchless interactivity). To this end, a novel Avatar-based Touchless Gestural Interface (ABaToGI) has been developed, and its design process is described in the thesis, along with the user studies conducted for its evaluation. Moreover, the thesis analyzes how the presence of the Avatar may affect user interactions in terms of perceived cognitive workload, and whether it can foster bimanual interactions. Then, as ABaToGI was designed for public displays, it has been installed in an actual deployment in order to be evaluated in the wild (i.e. not in a lab setting). The resulting outcomes, along with the previously described studies, have been used to introduce a set of design guidelines for developing future touchless gestural interfaces, with a particular focus on Avatar-based ones. The results of this thesis also provide a basis for future research, which concludes the work.
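
    The thesis centres on an on-screen Avatar that mirrors the user's tracked body to communicate touchless interactivity and support bimanual input. The following is a minimal, illustrative sketch of that kind of body-to-screen mapping, assuming a Kinect-like skeleton stream; the joint names, scaling factor, and screen size are assumptions, not ABaToGI's actual implementation.

        # Illustrative sketch only: map tracked hand joints to on-screen cursor
        # positions so an avatar can mirror the user (not ABaToGI's actual code).
        from dataclasses import dataclass

        SCREEN_W, SCREEN_H = 1920, 1080          # assumed display resolution

        @dataclass
        class Joint:
            x: float  # metres, right of the sensor
            y: float  # metres, above the sensor
            z: float  # metres, away from the sensor

        def hand_to_screen(hand: Joint, shoulder: Joint,
                           reach: float = 0.45) -> tuple[int, int]:
            """Map a hand position, taken relative to the shoulder, to pixels.
            `reach` is the assumed arm extent (metres) mapped to half the screen."""
            dx = (hand.x - shoulder.x) / reach     # roughly -1 .. 1
            dy = (hand.y - shoulder.y) / reach
            px = int((0.5 + 0.5 * dx) * SCREEN_W)
            py = int((0.5 - 0.5 * dy) * SCREEN_H)  # screen y grows downward
            return (max(0, min(SCREEN_W - 1, px)),
                    max(0, min(SCREEN_H - 1, py)))

        # Usage sketch: drive two cursors (bimanual interaction) from one frame.
        left = hand_to_screen(Joint(-0.30, 0.10, 1.8), Joint(-0.20, 0.35, 1.9))
        right = hand_to_screen(Joint(0.35, 0.20, 1.8), Joint(0.20, 0.35, 1.9))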

    A Model-Based Approach for Gesture Interfaces

    The description of a gesture requires temporal analysis of values generated by input sensors, and it does not fit well with the observer pattern traditionally used by frameworks to handle the user's input. The current solution is to embed particular gesture-based interactions into frameworks by notifying only when a gesture has been detected completely. This approach suffers from a lack of flexibility, unless the programmer performs explicit temporal analysis of raw sensor data. This thesis proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by the composition of other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity for the notification of events. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed for supporting multi-touch gestures in iOS and full-body gestures with Microsoft Kinect. In addition to the solution for the event granularity problem, this thesis discusses how to separate the definition of the gesture from the user interface behaviour using the proposed compositional approach. The gesture description meta-model has been integrated into MARIA, a model-based user interface description language, extending it with the description of full-body gesture interfaces.
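
    The abstract describes gestures built compositionally from basic traits with operators, where user interface behaviour can be attached to the whole gesture or to any sub-gesture. The following is a minimal, illustrative sketch of that compositional idea in Python; it is not the thesis's Petri Net meta-model nor its MARIA extension, and all class, operator, and event names are assumptions.

        # Illustrative sketch only: compositional gesture definitions in which
        # handlers can be attached to sub-gestures as well as to the whole
        # composition (the "event granularity" idea, not the Petri Net model).
        from typing import Callable, List

        class Gesture:
            def __init__(self) -> None:
                self.handlers: List[Callable[[], None]] = []

            def on_complete(self, fn: Callable[[], None]) -> "Gesture":
                self.handlers.append(fn)   # UI behaviour for this (sub)gesture
                return self

            def _fire(self) -> None:
                for fn in self.handlers:
                    fn()

        class Trait(Gesture):
            """Basic building block: completes when a named feature changes."""
            def __init__(self, feature: str) -> None:
                super().__init__()
                self.feature = feature

            def feed(self, event: str) -> bool:
                if event == self.feature:
                    self._fire()
                    return True
                return False

        class Sequence(Gesture):
            """Composition operator: sub-gestures must complete in order."""
            def __init__(self, *steps: Gesture) -> None:
                super().__init__()
                self.steps, self.index = list(steps), 0

            def feed(self, event: str) -> bool:
                if self.steps[self.index].feed(event):
                    self.index += 1
                    if self.index == len(self.steps):
                        self._fire()
                        self.index = 0
                        return True
                return False

        # Usage sketch: a "swipe" made of two traits; feedback fires both when
        # the first sub-gesture completes and when the whole sequence completes.
        start = Trait("hand_enters_left").on_complete(lambda: print("highlight"))
        swipe = Sequence(start, Trait("hand_exits_right"))
        swipe.on_complete(lambda: print("next page"))
        for event in ["hand_enters_left", "hand_exits_right"]:
            swipe.feed(event)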

    Prototyping tools for hybrid interactions

    In using the term 'hybrid interactions', we refer to interaction forms that comprise both tangible and intangible interactions as well as a close coupling of the physical or embodied representation with digital output. Until now, there has been no description of a formal design process for this emerging research domain that can be followed during the creation of these types of interactions. As a result, designers face limitations in prototyping these systems. In this thesis, we share our systematic approach to envisioning, prototyping, and iteratively developing these interaction forms by following an extended interaction design process. We share our experiences with process extensions in the form of toolkits, which we built for this research and used to aid designers in the development of hybrid interactive systems. The proposed tools incorporate different characteristics and are intended to be used at different points in the design process. In Sketching with Objects, we describe a low-fidelity toolkit intended to be used in the very early phases of the process, such as ideation and user research. By introducing Paperbox, we present an implementation to be used in the mid-process phases for finding the appropriate mapping between physical representation and digital content during the creation of tangible user interfaces (TUIs) atop interactive surfaces. In a follow-up project, we extended this toolkit so that it can also be used in conjunction with capacitive sensing devices. To do this, we implemented Sketch-a-TUI. This approach allows designers to create TUIs on capacitive sensing devices rapidly and at low cost. To lower the barriers for designers using the toolkit, we created the Sketch-a-TUIApp, an application that allows even novice users (users without previous coding experience) to create early instantiations of TUIs. In order to prototype intangible interactions, we used open software and hardware components and proposed an approach for investigating interactivity in correlation with intangible interaction forms at a higher fidelity. With our final design process extension, Lightbox, we assisted a design team in systematically developing a remote interaction system connected to a media façade covering a building. All of the above-mentioned toolkits were explored both in real-life contexts and in projects with industrial partners. The evaluation was therefore mainly performed in the wild, which led to the adaptation of metrics suitable to the individual cases and contexts.
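
    Sketch-a-TUI places low-cost conductive objects on capacitive sensing devices; a common way such toolkits distinguish objects is by the geometric footprint of the simultaneous touch points their conductive feet produce. The following is a minimal, illustrative sketch of that footprint-matching idea; the object signatures and tolerance are assumptions, not the toolkit's actual implementation.

        # Illustrative sketch only: identify a tangible on a capacitive screen by
        # the pairwise distances of the touch points its conductive feet create.
        # Signatures and tolerances are assumed, not taken from Sketch-a-TUI.
        from itertools import combinations
        from math import dist
        from typing import Dict, List, Optional, Tuple

        Point = Tuple[float, float]

        # Assumed object library: sorted pairwise foot distances in millimetres.
        SIGNATURES: Dict[str, List[float]] = {
            "slider_knob": [20.0, 20.0, 28.3],   # three feet in a right triangle
            "card_token": [15.0, 30.0, 33.5],
        }

        def footprint(points: List[Point]) -> List[float]:
            """Sorted pairwise distances between simultaneous touch points."""
            return sorted(dist(a, b) for a, b in combinations(points, 2))

        def identify(points: List[Point], tol: float = 2.0) -> Optional[str]:
            """Return the object whose signature matches within `tol` millimetres."""
            fp = footprint(points)
            for name, sig in SIGNATURES.items():
                if len(sig) == len(fp) and all(
                        abs(a - b) <= tol for a, b in zip(sig, fp)):
                    return name
            return None

        # Usage sketch: three touches reported by the screen (in mm coordinates).
        print(identify([(10.0, 10.0), (30.0, 10.0), (10.0, 30.0)]))  # slider_knob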

    A experiência mediada por interfaces gestuais touchless em contexto turístico

    Doctoral thesis in Information and Communication in Digital Platforms. The evolution of information and communication technologies has driven new approaches in the tourism industry. This stimulus, combined with a new tourist behaviour, aware of Web 2.0 dynamics and participating in the social web culture, has provided opportunities for new tourism services with ubiquitous and personalized access throughout the entire cycle of the tourist experience. In parallel, the emergence of new human-computer interaction paradigms, such as touchless gestural interfaces, brings challenges and opportunities regarding usability and user experience (UX). Furthermore, when integrated into the tourist experience, these interfaces may enhance information sharing and manipulation, adding a new dimension to how tourism is experienced. This research arose alongside the launch of the Kinect sensor, which broadened the application and development of touchless interfaces in different contexts. In tourism, the application of this paradigm had not yet been explored in detail. It was also relevant to contribute to the definition of standards and strategies for researching and evaluating UX with touchless interfaces. Thus, this study focused on the possible applications, potentialities, and UX resulting from using interactive solutions with touchless gestural interaction in tourism. The empirical study had two main stages: first, interviews with experts, and second, an evaluation in a controlled setting using a prototype of an interactive touchless gestural interface. This evaluation, which involved 51 participants, required the development of suitable instruments and an evaluation protocol. As a result, the first stage enabled the identification of a set of advantages, disadvantages, possibilities, and features of this type of interactive solution. The second stage focused on issues related to the usability and user experience of touchless gestural interfaces, in order to establish a set of guidelines, methodologies, and approaches. It also collected opinions from users about the application of touchless gestural interfaces in tourism.