
    Mega - mobile multimodal extended games

    Master's thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. Mobile entertainment applications have an important and significant role in the software market, covering a diverse group of users. Much of this is due to the sudden success of innovative interaction devices such as Nintendo's Wiimote, Sony's Move and Microsoft's Kinect, and these multimodal interaction techniques have since been explored for mobile games. The latest generation of mobile devices is equipped with a wide variety of sensors beyond the obvious touch screen and microphone, including a digital compass, accelerometers and optical sensors. Mobile devices are also used as digital cameras and personal organizers, to watch videos and listen to music, and, of course, to play games. Looking at new user groups and new ways to play, and building new forms of interaction into games using the attributes and capabilities of new platforms and new technologies, is therefore a pressing and challenging subject. This work studies and proposes new dimensions of play and interaction on mobile platforms, whether smartphones or tablets, suited to distinct communities of players. It primarily explores alternative modalities such as those based on touch and vibration, as well as audio, combined or not with more traditional visual ones. It also explores group games, both distributed and co-located, finding and studying new forms of expression in classic and innovative games involving small groups of individuals.
The ubiquity of mobile devices also means that group games must support game flows in which players can drop in and out, quickly or otherwise, without losing the interest and motivation to play. This work began with an intensive survey of related work on mobile games and their multimodalities, covering the inherent accessibility issues, group games and their forms of communication and connection, and finally puzzle games, the genre this work focuses on. Requirements were then gathered and the game and interaction options for multimodal mobile puzzle games were explored. Within this study, three small games were built around a common concept: the puzzle game. The first application offers three game modalities: a visual one, a picture puzzle based on the traditional jigsaw; an auditory one, which recreates the puzzle concept through music, turning the pieces into short song segments of equal length; and a haptic one, a puzzle whose pieces carry distinct vibration patterns. The second application recreated the same puzzle concept in audio mode but removed all visual information, offering simple forms of interaction. The third application addresses group games, allowing visual and audio puzzles to be played in two distinct modes: cooperative, where players must work as a team to complete the puzzle; and competitive, where players must be faster than their opponent in order to win. All of these applications let the user set the puzzle size and difficulty level, as well as choose the images and songs to be solved as puzzles.
Several user tests were conducted, one per developed application. With the first application, twenty-four participants played visual and auditory puzzles, split evenly between the modalities; each participant solved nine distinct image puzzles or nine distinct audio puzzles. This first study sought to uncover puzzle-solving strategies, looking mainly for similarities and differences between the modes. The second study used the second application with another twenty-four users, twelve of them blind; each participant solved three different audio puzzles. This study allowed a comparison with the previously studied modes, especially the audio mode, since the same procedure was used. For the blind users, the goal was to show that a fun, challenging and, above all, accessible game could be created from a classic game concept. In the last study, twenty-four participants, organized in pairs, played visual and audio puzzles in cooperative and competitive modes. Each pair solved four puzzles, one per game mode per puzzle type: two visual puzzles, one competitive and one cooperative, and two audio puzzles, likewise one cooperative and one competitive. The goal, once again, was to identify solving strategies and to allow comparison with the modes studied earlier. Every game was logged as data recording every action each player took while solving the puzzle. These logs were then reduced to specific measures for analysis and discussion, divided into three main groups: piece-placement attempts, number of hints, and puzzle completion time.
For placement attempts, the corresponding order can be identified in three distinct ways: by piece-type classification, by the arrangement of the pieces on the strip, and by the sequential order of the puzzle. The results show that the same puzzle-solving strategy is used across all the modes studied: players choose to solve the most salient areas of the puzzle first, leaving the most abstract and confusable parts for the end. In the new game modalities, however, small percentages of users showed different solving strategies. Users' opinions also make it possible to state that all the developed applications are playable, fun and challenging. Finally, a set of reusable components and a set of guidelines for creating new games were produced. As future work, several objectives were proposed that can extend and reuse the work developed. A puzzle game based on the first application, keeping the visual and audio modes, was built so it could be released on the mobile application market, enabling a large-scale study of the same concepts studied in this work. A centralized server was also envisioned to hold all players' results and build a global ranking, encouraging players to improve their performance and helping promote the game itself. Another direction is to refine and perfect the haptic mode into one more viable modality over the same game concept, so it can be studied in the same way. The puzzle for blind users can also be improved to offer more challenges through the inclusion of a haptic mode.
Last but not least, new group-game dimensions could allow the cooperative and competitive modes to be played simultaneously, for example with two teams of two players each, cooperating within a team to complete the puzzle while competing against the other team to finish first and with better results. The goal would be, once again, to study the strategies used.
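The studies above reduce each game's action log to three measures: piece-placement attempts, number of hints, and completion time. The abstract does not specify the log format; a minimal sketch assuming a hypothetical (timestamp, action, detail…) tuple per entry:

```python
def summarize_game(log):
    """Reduce one game's action log to the three measures used in the studies.

    `log` is a hypothetical list of (timestamp_seconds, action, *detail)
    tuples, e.g. (3.2, "place", piece_id) for a placement attempt and
    (5.0, "hint") for a hint request; format assumed, not from the thesis.
    """
    attempts = sum(1 for _, action, *_ in log if action == "place")
    hints = sum(1 for _, action, *_ in log if action == "hint")
    # completion time: span from the first to the last logged action
    completion_time = log[-1][0] - log[0][0] if log else 0.0
    return {"attempts": attempts, "hints": hints, "time": completion_time}
```

The same per-game summaries could then be grouped by modality (visual, audio, haptic) for the comparisons the studies describe.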

    Understanding interaction mechanics in touchless target selection

    Indiana University-Purdue University Indianapolis (IUPUI). We use gestures frequently in daily life, to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics, and argue for a novel approach to generating design guidelines for touchless systems: using fundamental interaction principles instead of a reactive adaptation to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering-targeting tasks. The results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a holistic oblique gesture instead of several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting transfer of training between dominant and nondominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems. 
Practical applications of this work include the use of touchless interaction techniques in various domains, such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration
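The touchless circular menus above select commands with directional strokes, though the abstract does not spell out the mapping. A minimal sketch of how a stroke direction could be resolved to one of N commands; the function name, dead-zone threshold, and clockwise-from-top layout are all assumptions, not the dissertation's implementation:

```python
import math

def select_command(start, end, commands, dead_zone=30.0):
    """Map a directional mid-air stroke to a command on a circular menu.

    start, end: (x, y) hand positions in screen coordinates (y grows downward);
    commands: list laid out clockwise starting at the top (12 o'clock).
    Strokes shorter than dead_zone are ignored to filter tracking jitter.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < dead_zone:
        return None  # too short: likely sensor noise, not a deliberate stroke
    # angle in degrees, clockwise from "up", normalized to [0, 360)
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    sector = 360.0 / len(commands)
    # shift by half a sector so each command is centred on its direction
    index = int(((angle + sector / 2.0) % 360.0) // sector)
    return commands[index]
```

With four commands, an upward stroke selects the first, a rightward stroke the second, and so on; the dead zone is one plausible way to keep hand tremor from triggering selections.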

    Designing Accessible Nonvisual Maps

    Access to nonvisual maps has long required special equipment and training; Google Maps, ESRI, and other commonly used digital maps are completely visual and thus inaccessible to people with visual impairments. This project presents the design and evaluation of an easy-to-use digital auditory map and an interactive 3D-model map. A co-design study was also undertaken to discover tools for an ideal nonvisual navigational experience. Baseline results of both studies are presented so future work can improve on the designs. The user evaluation revealed that both prototypes were moderately easy to use. An ideal nonvisual navigational experience, according to these participants, consists of both an accurate turn-by-turn navigation system and an interactive map. Future work needs to focus on developing appropriate tools to enable this ideal experience
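The abstract does not describe how the auditory map renders locations; one common sonification scheme maps a point's position relative to the listener to stereo pan and distance-based volume. A minimal sketch under that assumption (function name, coordinate convention, and range are all hypothetical):

```python
import math

def sonify_landmark(listener, landmark, max_range=500.0):
    """Map a landmark's position relative to the listener to audio cues.

    Coordinates are metres in a local east/north frame (assumed convention).
    Returns pan in [-1, 1] (left to right) and volume in [0, 1] falling off
    linearly with distance, or None when the landmark is out of range.
    """
    dx = landmark[0] - listener[0]  # east offset
    dy = landmark[1] - listener[1]  # north offset
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return None  # beyond audible range: render nothing
    bearing = math.atan2(dx, dy)     # radians clockwise from north
    pan = math.sin(bearing)          # -1 = hard left, +1 = hard right
    volume = 1.0 - dist / max_range  # nearer landmarks sound louder
    return {"pan": pan, "volume": volume}
```

A landmark due east of the listener would be panned fully right; one due north sits in the centre of the stereo field.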

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we communicate information that might reflect a desire to speak a physically based language that has never been explored before. Due to constant improvement in haptics technology and increasing levels of research into and development of haptics-related algorithms, protocols and devices, there is a belief that haptics technology has a promising future

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch, as well as systems combining tangible and virtual reality interaction. For each of them, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception aspects

    Blending the Material and Digital World for Hybrid Interfaces

    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones and smartwatches are finding their way into our everyday lives. However, this development also poses problems, as the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities, and therefore require full attention from their users. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how the material and digital worlds can be blended to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. Therefore, this doctoral thesis rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching research questions about their design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials. 
As a main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords, over visual watch-strap extensions, to novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, this thesis's extensive engineering of versatile research platforms is accompanied by overarching conceptual work, user evaluations and technical experiments, as well as literature reviews

    Designing 3D scenarios and interaction tasks for immersive environments

    In the world of today, immersive reality, such as virtual and mixed reality, is one of the most attractive research fields. Virtual Reality, also called VR, has huge potential for use in scientific and educational domains by providing users with real-time interaction and manipulation. The key concept in immersive technologies is to provide a high level of immersive sensation to the user, which is one of the main challenges in this field. Wearable technologies play a key role in enhancing the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. This project report presents an application study in which the user interacts with virtual objects, such as grabbing objects and opening or closing doors and drawers, while wearing a sensory cyberglove developed in our lab (Cyberglove-HT). It also presents the development of a methodology for inertial measurement unit (IMU)-based gesture recognition. The interaction tasks and 3D immersive scenarios were designed in Unity 3D. Additionally, we developed inertial sensor-based gesture recognition employing a Long Short-Term Memory (LSTM) network. To distinguish the effect of wearable technologies on the user experience in immersive environments, we conducted an experimental study comparing the Cyberglove-HT to standard VR controllers (HTC Vive Controller). The quantitative and subjective results indicate that the Cyberglove-HT enhanced the immersive sensation and self-embodiment. A publication resulted from this work [1], developed in the framework of the R&D project Human Tracking and Perception in Dynamic Immersive Rooms (HTPDI)
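The report's gesture recognizer runs an LSTM over IMU data, but its architecture is not given in the abstract. A minimal single-layer LSTM classifier sketched in NumPy; the layer sizes, gate packing, and final linear classifier are all assumptions, and a real system would train the parameters (e.g. with backpropagation through time in a deep-learning framework) rather than use them untrained:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step. x: one IMU sample (d,); h, c: hidden/cell state (n,).
    # W (4n, d), U (4n, n) and b (4n,) pack the input, forget, candidate and
    # output gate parameters, in that order (packing convention assumed).
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_window(window, W, U, b, W_out, b_out):
    # Run a window of IMU samples (T, d) through the LSTM, then classify the
    # final hidden state with a linear layer; returns the predicted class index.
    n = U.shape[1]
    h, c = np.zeros(n), np.zeros(n)
    for x in window:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out
    return int(np.argmax(logits))
```

With d = 6 inputs (tri-axial accelerometer and gyroscope), n hidden units, and one output row per gesture class, `classify_window` maps a fixed-length sensor window to a gesture label.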

    Multimodal flexibility in a mobile text-entry task

    The mobile usability of an interface depends on the amount of information a user is able to retrieve or transmit while on the move. The information transmission capacity and successful transmission, in turn, depend on how flexibly the interface can be used across varying real-world contexts. Research on multimodal flexibility has focused mainly on the facilitation of modalities in the interface, and most evaluative studies have measured the effects that concurrent interactions cause to each other. However, assessing these effects under a limited number of conditions does not generalize to other possible conditions in the real world. Moreover, studies have often compared single-task conditions to dual-tasking, measuring the trade-off between the tasks rather than the actual effects the interactions cause. To contribute to the paradigm of measuring multimodal flexibility, this thesis isolates the effect of modality utilization in the interaction with the interface: instead of using a secondary task, modalities are withdrawn from the interaction entirely. The multimodal flexibility method [1] was applied in this study to assess the utilization of three sensory modalities (vision, audition and tactition) in a text-input task with three mobile interfaces: a 12-key (ITU-12) keypad, a physical Qwerty keyboard and a touch-screen virtual Qwerty keyboard. 
The goal of the study was to compare the multimodal flexibility of these interfaces, assess the value of each utilized sensory modality to the interaction, and examine the cooperation of modalities in a text-input task. The results imply that the alphabetical 12-key keypad is the most multimodally flexible of the three compared interfaces. Although the 12-key keypad is relatively inefficient for typing when all modalities are free to be allocated to the interaction, it is the most flexible under the constraints that the real world may set on sensory modalities. In addition, all the interfaces are shown to be highly dependent on vision: the performance of both Qwerty keyboards dropped by approximately 80% when vision was withdrawn from the interaction, and the performance of the ITU-12 keypad suffered by approximately 50%. Examining the cooperation of the modalities in the text-input task, vision was shown to work in synergy with tactition, but audition did not provide any extra value for the interaction