
    Interaction Design for Digital Musical Instruments

    The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model. Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon: (1) the accepted design conventions of the hardware in use, (2) established musical systems, acoustic or digital, and (3) the physical configuration of the hardware devices and the grouping of controls that such configuration suggests. This thesis proposes an alternative way to approach the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
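
    The control/sound separation described here is, in practice, a routing problem: normalised sensor values on one side, synthesis parameters on the other, and interchangeable mapping strategies in between. Below is a minimal Python sketch of such a mapping layer; all names (SensorEvent, MappingLayer, the strategy functions) are illustrative assumptions, not the system built in the thesis.

```python
# Minimal sketch of a DMI mapping layer: sensor events are decoupled from
# the physical device and routed to sound-engine parameters through
# interchangeable mapping strategies. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SensorEvent:
    sensor_id: str    # e.g. "touch_x", "pressure"
    value: float      # normalised to 0.0..1.0 by the driver layer


# A mapping strategy turns a normalised sensor value into a parameter value.
Strategy = Callable[[float], float]

linear: Strategy = lambda v: v
inverted: Strategy = lambda v: 1.0 - v
quadratic: Strategy = lambda v: v * v  # finer control near zero


class MappingLayer:
    """Routes sensor streams to synthesis parameters, many-to-many."""

    def __init__(self) -> None:
        self._routes: Dict[str, List[tuple]] = {}

    def connect(self, sensor_id: str, param: str, strategy: Strategy) -> None:
        self._routes.setdefault(sensor_id, []).append((param, strategy))

    def process(self, event: SensorEvent) -> Dict[str, float]:
        """Return the parameter updates an event produces."""
        return {param: strategy(event.value)
                for param, strategy in self._routes.get(event.sensor_id, [])}


# One physical control can drive several parameters at once:
layer = MappingLayer()
layer.connect("touch_x", "filter_cutoff", quadratic)
layer.connect("touch_x", "reverb_mix", inverted)
print(layer.process(SensorEvent("touch_x", 0.5)))
# {'filter_cutoff': 0.25, 'reverb_mix': 0.5}
```

    Because routes are many-to-many, one physical control can drive several parameters at once through different strategies, which is precisely the kind of behaviour that is not immediately suggested by the physical properties of the hardware.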

    Multi-Touch for General-Purpose Computing: An Examination of Text Entry

    In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization: features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as everyday computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing? We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms, and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design and rationale.
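
    Text-entry benchmarks of the kind described here are conventionally reported with two metrics from the HCI literature: words per minute, where a word is defined as five characters, and the minimum-string-distance (MSD) error rate based on Levenshtein distance. The sketch below shows these standard formulas; the function names and example values are illustrative, not results from the thesis.

```python
# Standard text-entry metrics used to benchmark input techniques.
# WPM treats a "word" as five characters; the MSD error rate is the
# Levenshtein distance between the presented and transcribed strings,
# normalised by the length of the longer string.

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute: ((|T| - 1) / S) * (60 / 5), with S in seconds."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0


def msd_error_rate(presented: str, transcribed: str) -> float:
    """Minimum string distance error rate via Levenshtein distance."""
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, n)


# Example: a phrase transcribed in 30 s, and one substitution error.
print(round(wpm("the quick brown fox jumps over a lazy dog", 30.0), 1))
print(round(msd_error_rate("the quick", "thw quick"), 3))  # 1 error in 9 chars
```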

    Source Code Interaction on Touchscreens

    Direct interaction with touchscreens has become a primary way of using a device. This work seeks to devise interaction methods for editing textual source code on touch-enabled devices. With the advent of the “Post-PC Era”, touch-centric interaction has received considerable attention in both research and development. However, various limitations have impeded widespread adoption of programming environments on modern platforms. Previous attempts have succeeded mainly by simplifying or constraining conventional programming, but have provided insufficient support for source code written in mainstream programming languages. This work includes the design, development, and evaluation of techniques for editing, selecting, and creating source code on touchscreens. The results contribute to text editing and entry methods by taking the syntax and structure of programming languages into account while exploiting the advantages of gesture-driven control. Furthermore, this work presents the design and software architecture of a mobile development environment incorporating touch-enabled modules for typical software development tasks.
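
    To make the idea of syntax- and structure-aware editing concrete, the sketch below shows a tap-to-select routine that snaps a touch position to token boundaries instead of single characters, a staple advantage of structure-aware code editors over plain text fields. Python's tokenize module stands in for a real lexer purely for illustration, and token_at is a hypothetical helper, not the mechanism the thesis actually implements.

```python
# Sketch of syntax-aware tap selection: instead of placing a caret at a
# single character (fiddly on a touchscreen), a tap selects the whole
# token under the finger. Python's own tokenizer is used here purely
# for illustration; the thesis targets mainstream languages generally.
import io
import tokenize


def token_at(source_line: str, column: int) -> str:
    """Return the token covering `column` in a single line of code."""
    tokens = tokenize.generate_tokens(io.StringIO(source_line).readline)
    for tok in tokens:
        (_, start), (_, end) = tok.start, tok.end
        if start <= column < end and tok.string.strip():
            return tok.string
    return ""


# A tap anywhere inside an identifier selects the whole identifier:
line = "velocity = distance / elapsed_time"
print(token_at(line, 3))   # -> "velocity"
print(token_at(line, 22))  # -> "elapsed_time"
```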

    Teaching Introductory Programming Concepts through a Gesture-Based Interface

    Computer programming is an integral part of a technology-driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted at university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming. The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on feedback from students attending the College of Engineering summer camps at the University of Arkansas. Our system uses the Microsoft Xbox Kinect to capture the movements of new programmers as they use our system. Our software then tracks and interprets student hand movements in order to recognize specific gestures which correspond to different programming constructs, and uses this information to create and execute programs using the Google Blockly visual programming framework. We focus on various gesture recognition algorithms to interpret user data as specific gestures, including template matching, sector quantization, and supervised machine learning clustering algorithms.
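
    Of the recognition algorithms named above, template matching is the most direct to sketch: a candidate hand path is resampled to a fixed number of points and compared to stored templates by average point-to-point distance, in the spirit of $1-recognizer-style methods. The code below is an illustrative assumption, not the project's Kinect pipeline, and it omits the scale and rotation normalisation a practical recognizer would add.

```python
# Minimal template-matching gesture recognizer: a candidate hand path is
# resampled to a fixed number of points and compared to stored templates
# by average point-to-point distance. Illustrative only.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def resample(path: List[Point], n: int = 32) -> List[Point]:
    """Resample a path to n evenly spaced points along its length."""
    total = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    interval, out, acc = total / (n - 1), [path[0]], 0.0
    for a, b in zip(path, path[1:]):
        seg = math.dist(a, b)
        while acc + seg >= interval and seg > 0:
            t = (interval - acc) / seg
            a = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            out.append(a)
            seg = math.dist(a, b)
            acc = 0.0
        acc += seg
    while len(out) < n:          # pad against floating-point shortfall
        out.append(path[-1])
    return out[:n]


def distance(a: List[Point], b: List[Point]) -> float:
    """Average point-to-point distance between two resampled paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)


def classify(candidate: List[Point], templates: dict) -> str:
    """Return the label of the nearest stored template."""
    sampled = resample(candidate)
    return min(templates,
               key=lambda k: distance(sampled, resample(templates[k])))


templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2)],
}
print(classify([(0, 0), (0.9, 0.1), (2.1, 0.0)], templates))  # swipe_right
```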

    Interactive Spaces: Natural interfaces supporting gestures and manipulations in interactive spaces

    This doctoral dissertation focuses on the development of interactive spaces through the use of natural interfaces based on gestures and manipulative actions. In the real world people use their senses to perceive the external environment, and they use manipulations and gestures to explore the world around them, communicate and interact with other individuals. From this perspective, the use of natural interfaces that exploit human sensorial and explorative abilities helps fill the gap between the physical and digital worlds. In the first part of this thesis we describe the work done to improve interfaces and devices for tangible, multi-touch and free-hand interaction. The idea is to design devices that also work in uncontrolled environments and in situations where control is mostly physical, so that even the least experienced users can express their manipulative exploration and gesture communication abilities. We also analyze how these techniques can be combined to create an interactive space specifically designed for teamwork, where the natural interfaces are distributed in order to encourage collaboration. We then give some examples of how these interactive scenarios can host various types of applications facilitating, for instance, the exploration of 3D models, the enjoyment of multimedia content and social interaction. Finally, we discuss our results and put them in a wider context, focusing in particular on how the proposed interfaces actually improve people’s lives and activities, and on how the interactive spaces become places of aggregation where we can pursue objectives that are both personal and shared with others.

    Developing a Complex User Interface for Mobile Data Collection Applications

    The use of paper-based questionnaires for collecting data has several downsides, including logistical, cost-related and data quality issues. Despite increasing digitization and the possibilities it brings, paper-based questionnaires have remained ubiquitous in many application domains. Reasons for this may be insufficient IT knowledge among domain experts, the high development costs of dedicated digital solutions, or the lack of domain-specific functionality and ease of use in existing software. To solve these issues, the QuestionSys framework pursues a digital, easy-to-use approach for collecting data in large-scale scenarios. By providing software solutions for configuring digital questionnaires, executing those questionnaires on mobile devices and evaluating the collected results, the framework supports the entire data collection life cycle. In the context of this thesis, a sophisticated user interface for the QuestionSys mobile application was developed. The thesis takes an in-depth look at common usability and user interface guidelines for mobile operating systems, presents potential use case scenarios for such an application along with their requirements, and discusses and explains the user interface alongside various screenshots of the developed mobile application.

    Designing wearable interfaces for blind people

    Master's thesis, Engenharia Informática (Arquitectura, Sistemas e Redes de Computadores), Universidade de Lisboa, Faculdade de Ciências, 2015.

    Nowadays, touchscreen devices are increasingly ubiquitous. Until recently, most touchscreens provided few accessibility features for visually impaired people, leaving the devices largely unusable for them, even though the technology is ever-present in daily life, in mobile phones and tablets. These devices are increasingly essential, since they hold a great deal of personal information, for example for payment through electronic wallets. The lack of accessibility of this type of screen stems from the fact that the interfaces are based on what users see on the screen and on touching the content presented there, which becomes a serious problem when a visually impaired person tries to use them. Some solutions exist on the market, but almost all are based on audio feedback. This is not the best solution when it comes to personal information the person wishes to keep private: when a user on a bus receives a message, it is read by a screen reader through the device's speakers, and everyone around will hear its content. One solution may be the use of vibration and physical keys, which removes the need for screen readers; for menu navigation, however, the problem remains. One way to address it is a gesture-based interface, a flexible and intuitive form of interaction with these devices.

    To date, many approaches have been proposed, yet none resolves all the points above, and one way or another they must be complemented with other devices. Guerreiro and colleagues (2012) presented a prototype that enables reading text through vibration, but the full impact of everyday use is not taken into account. Myung-Chul Cho (2002) presents a pair of gloves for writing encoded in the Braille alphabet, but it is not tested together with a reading component other than audio feedback. Two further studies stand out regarding gesture-based device navigation: Ruiz (2011) carried out an elicitation of mid-air gestures but did not include blind people in the study, which may lead to the exclusion of such users; Kane (2011) includes blind people and targets gesture interaction but requires physical contact with the touchscreen.

    The approach presented in this study integrates the best of these solutions in a single device. Our main goal is to make mobile phones more accessible to blind people, so that the devices can be integrated into their daily lives. To this end, we developed an interface based on a pair of gloves: wearing them, the user can read and write messages and make gestures for other tasks. The gloves build on the users' knowledge of Braille for reading and writing textual information. For reading, we installed six vibration motors on the fingers of the gloves, on the index, middle and ring fingers of both hands; these motors simulate the key configuration of a Braille typewriter such as the Perkins Brailler. For writing, we installed push buttons on the tips of the same fingers, each representing one dot of a Braille cell. For gesture detection we adopted an accelerometer-based approach, with the accelerometer placed on the back of the glove. Each glove consists of two layers, so all components are installed between the two layers of fabric, allowing the user to put the gloves on and take them off without worrying about the electronics.

    The construction of the gloves, and all of the tests, involved a group of blind people, students and teachers, from Fundação Raquel e Martin Sain. To evaluate the device's performance we ran tests of receiving (reading) and sending (writing) messages. The reading test, conducted only with blind participants, consisted of receiving letters in Braille and replicating the felt vibrations with the gloves' buttons; character recognition rates averaged 31%, although results were highly dependent on the users' abilities. In the writing test, participants were asked to write a given letter in Braille using the gloves; performance averaged a 74% accuracy rate, and most errors were cases where the letter written differed from the target by a single finger. These tests were quite revealing about the possible use of the gloves by blind people: users should be trained beforehand to maximise results, and some experience with the device may be necessary.

    Gesture recognition lets the user perform various smartphone tasks, such as answering or rejecting a call and navigating menus. To assess which gestures blind and sighted users suggest for smartphone tasks, we conducted an elicitation study, asking users to propose gestures for a set of tasks. We found that most gestures invented by participants tend to be physical, in-context, discrete and simple, and use only a single spatial axis, and that there is consensus among users for all proposed tasks. The study also revealed that blind people prefer simpler gestures, in contrast to a preference for more complex gestures among sighted people. Since the device requires training for gesture recognition, we investigated which type of training suits it best: comparing training on individual users, training within each population (blind and sighted) and training across both populations (global), we found that personalised training, performed by the user themselves, is far more effective than population or global training.

    Because the user can send and receive messages without depending on several devices and/or applications, the much-raised privacy issues are circumvented, and with the same device the user can also navigate the smartphone's menus through simple, intuitive gestures. Our results suggest that a wearable device of this kind can find use within the blind community. With the exponential growth of the wearable market and the effort the academic community is putting into accessibility technologies, there is still ample room for improvement. With this project, we hope that wearable assistive devices will play an important role in the social integration of people with disabilities, helping create a more egalitarian and fair society.

    Nowadays touch screens are ubiquitous, present in almost all modern devices. Most touch screens provide few accessibility features for blind people, leaving them partly unusable. There are some solutions, based on audio feedback, that help blind people use touch screens in their daily tasks, but they raise privacy issues, since the content on screen is transmitted through the device speakers; these screen readers also make the interaction slow and are not easy to use. The main goal of this project is to develop a new wearable interface that allows blind people to interact with smartphones. We developed a pair of gloves that recognises mid-air gestures and also supports the input and output of text. To evaluate the usability of input and output, we conducted a user study to assess character recognition and writing performance. Character recognition rates were highly user-dependent, and writing performance showed some problems, mostly related to one-finger errors. We then conducted an elicitation study to assess what type of gestures blind and sighted people suggest. Sighted people suggested more complex gestures than blind people, but all gestures tend to be physical, in-context, discrete and simple, and to use only a single axis. We also found that training based on the user's own gestures yields better recognition accuracy. The input and output text components still require new approaches to improve user performance, but this wearable interface seems promising for simple actions that do not require cognitive load. Overall, our results suggest that we are on track to making it possible for blind people to interact with mobile devices in daily life.
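
    Since the glove maps six push buttons to the six dots of a Braille cell (dots 1-3 under the left index, middle and ring fingers, dots 4-6 under the right, as on a Perkins Brailler), the text-input path reduces to decoding a chord of simultaneously pressed buttons. The sketch below uses the standard Braille dot patterns for a few letters; the table, names and API are illustrative, not the project's firmware.

```python
# Sketch of the glove's Braille decoding: the six finger buttons correspond
# to the six dots of a Braille cell (dots 1-3 on the left hand, 4-6 on the
# right, as on a Perkins Brailler). A chord of pressed buttons is decoded
# to a character via the standard dot patterns. Table is illustrative.
from typing import FrozenSet

# Standard Braille dot numbers for a few letters (dot 1 = top-left, etc.).
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}


def decode_chord(pressed_dots: FrozenSet[int]) -> str:
    """Map a set of simultaneously pressed dot buttons to a character."""
    return BRAILLE.get(pressed_dots, "?")


# Left index (dot 1) + left middle (dot 2) pressed together -> "b"
print(decode_chord(frozenset({1, 2})))  # b
# For output, the same table is inverted: driving the vibration motors
# at dots 1 and 4 renders "c" on the reading side.
```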

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: Doctorate by coursework; Degree: Doctor of Engineering; Diploma number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Proceedings of the 4th Workshop on Interacting with Smart Objects 2015

    These are the proceedings of the 4th IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.