8 research outputs found

    Pen and paper techniques for physical customisation of tabletop interfaces


    A cuttable multi-touch sensor

    We propose cutting as a novel paradigm for ad-hoc customization of printed electronic components. As a first instantiation, we contribute a printed capacitive multi-touch sensor, which can be cut by the end-user to modify its size and shape. This very direct manipulation allows the end-user to easily make real-world objects and surfaces touch-interactive, to augment physical prototypes and to enhance paper craft. We contribute a set of technical principles for the design of printable circuitry that makes the sensor more robust against cuts, damage and removed areas. This includes novel physical topologies and printed forward error correction. A technical evaluation compares different topologies and shows that the sensor remains functional when cut to a different shape. Deutsche Forschungsgemeinschaft (Cluster of Excellence Multimodal Computing and Interaction), German Federal Excellence Initiative.
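
    The abstract does not spell out the error-correction scheme, so the following is only a minimal sketch of the general idea behind printed forward error correction, assuming a simple repetition code with majority voting; the function names and the three-copy redundancy are illustrative, not taken from the paper.

        # Minimal sketch (not the paper's actual scheme): forward error correction by
        # repetition coding with majority voting, illustrating how redundant printed
        # traces could let an electrode ID survive partial cuts.

        def encode_repetition(bits, copies=3):
            """Replicate each ID bit across several hypothetical trace routes."""
            return [b for b in bits for _ in range(copies)]

        def decode_repetition(received, copies=3):
            """Majority-vote each group of copies; None marks a cut (missing) trace."""
            decoded = []
            for i in range(0, len(received), copies):
                group = [b for b in received[i:i + copies] if b is not None]
                decoded.append(1 if sum(group) * 2 >= len(group) else 0)
            return decoded

        electrode_id = [1, 0, 1, 1]                # 4-bit electrode address
        sent = encode_repetition(electrode_id)     # 12 redundant trace bits
        sent[2] = None                             # simulate one cut trace
        sent[7] = None                             # and another
        assert decode_repetition(sent) == electrode_id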

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    A malleable control structure for softwired user interfaces

    Rather than existing as a computer input device with a rigid shape, a predetermined selection of controls and a fixed layout, a malleable control structure is made up of a set of controls that can be freely arranged on control areas. The structure is physically adaptable by users during operation: control areas and controls can be introduced, organized and removed to suit interaction requirements and personal preference. We present an implementation of a malleable control structure called VoodooIO. Our design contributes a novel material, the network substrate, that can be used to transform everyday surfaces into control areas, and the concept of implementing basic control units (such as buttons, sliders or dials) as ad hoc network nodes. VoodooIO does not constitute an application interface in itself. Like any input device, it only becomes concrete as an interface component in the context of a particular application. We introduce the concept of softwiring as a collection of techniques and practices that allow users to benefit from malleable control interfaces in a number of concrete scenarios of use.
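
    The abstract describes controls joining and leaving as ad hoc network nodes and being "softwired" to applications; the sketch below illustrates that binding idea only. It is not the actual VoodooIO API: the class, method names and node identifiers are hypothetical.

        # Minimal sketch (hypothetical API, not the actual VoodooIO library): a
        # "softwiring" registry that binds ad hoc control nodes to application
        # parameters and tolerates nodes being added or removed at run time.

        class Softwiring:
            def __init__(self):
                self.bindings = {}          # node_id -> callback

            def node_attached(self, node_id, kind):
                print(f"{kind} node {node_id} joined the control area")

            def bind(self, node_id, callback):
                self.bindings[node_id] = callback

            def node_removed(self, node_id):
                self.bindings.pop(node_id, None)   # unbinding is as easy as unplugging

            def node_event(self, node_id, value):
                handler = self.bindings.get(node_id)
                if handler:
                    handler(value)

        # Usage: wire a physical slider to an application's volume parameter.
        ui = Softwiring()
        ui.node_attached("slider-7", "slider")
        ui.bind("slider-7", lambda v: print(f"volume set to {v:.2f}"))
        ui.node_event("slider-7", 0.8)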

    Autonomic ubiquitous computing: a home environment management system

    Doctoral thesis in Industrial Electronics (field of knowledge: Industrial Informatics). Ubiquitous computing and autonomic computing have reached a point of convergence in which pervasive technology in the environment meets people's ability to interact with it, making use of all the possibilities this technology provides. Ubiquitous computing envisions a habitat where the abundance of devices, services and applications allows the physical and virtual worlds to become seamlessly merged. The promise of ubiquitous computing environments is not feasible unless these systems can effectively "disappear". To achieve this goal, they need to become autonomic, managing their own evolution and configuration with minimal user intervention. It is in this context that aspects such as self-configuration and self-healing from the autonomic computing concept were adopted in this project. Context awareness and the creation of applications that use that context are the core concerns of ubiquitous computing systems and represent the foundation for autonomic actions in such systems. This research raises questions about context acquisition, distribution and manipulation, as well as about the artificial intelligence algorithms that decide on autonomic actions in the environment, with implications for human interaction with autonomic ubiquitous systems. The research presented in this thesis concentrates on some of those issues. During this project, an experimental setup was developed to capture, in an effortless way, the context of some activities of a small group of users. The setup was installed in a real home where a young family (a couple and a small child) was actually living. It was mainly responsible for controlling the lighting system of the house through a network of several interconnected, resource-limited devices scattered around the home. This prototype installation allowed validation of the system's ability to capture the inhabitants' daily behaviour patterns. The system architecture was designed around the concept of a high-level and a low-level autonomic management system, taking the human reflex arc as its model: reflexive behaviour is managed locally by the small, resource-limited devices, while high-level management, running centrally on a PC, is responsible for processing and analysing the events broadcast by the group of small devices. Broadcasting device information onto the communication medium as events was used to interconnect future systems, monitor the correct operation of the system's devices, capture raw data for context estimation, and allow visualization of system feedback on user interface devices. Finally, an algorithm using artificial neural networks in combination with simple statistics was developed that allowed the house to learn the routines of its inhabitants, making it truly intelligent by embedding knowledge about the users' activity patterns in the devices scattered in the environment, increasing their comfort and, at the same time, improving energy efficiency.
    Analysis of the data captured over two complete years shows that the reduction in power consumption could be as high as 50%, depending on the profile of light usage.
    Fundação para a Ciência e a Tecnologia (FCT), scholarship SFRH/BD/8290/2004.
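
    As a rough illustration of the routine-learning idea described in the abstract, the sketch below learns per-hour usage statistics for a single lamp from switch events. It is not the thesis's algorithm: the thesis combines such statistics with an artificial neural network, and all names and thresholds here are assumptions.

        # Minimal sketch (not the thesis's actual algorithm): learn per-hour usage
        # statistics for one lamp from broadcast switch events, then suggest an
        # "expected" state for any hour. Plain frequencies stand in for the thesis's
        # combination of simple statistics and a small neural network.

        from collections import defaultdict

        counts = defaultdict(lambda: [0, 0])       # hour -> [observations, times on]

        def observe(hour, lamp_on):
            counts[hour][0] += 1
            counts[hour][1] += int(lamp_on)

        def expected_on(hour, threshold=0.5):
            seen, on = counts[hour]
            return seen > 0 and on / seen >= threshold

        # Feed a few days of (hour, state) events captured from the device network.
        for hour, state in [(21, True), (21, True), (21, False), (3, False), (3, False)]:
            observe(hour, state)

        print(expected_on(21))   # True  -> light is usually on at 21:00
        print(expected_on(3))    # False -> leave it off at 03:00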

    Bringing the Physical to the Digital

    This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis, the term digital tabletop refers to an emerging class of devices that afford many novel ways of interacting with the digital, allowing users to directly touch information presented on large, horizontal displays. As a relatively young field, many of its developments are in flux: hardware and software change at a fast pace and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems that are capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When embarking on the research for this thesis, the question of which interaction styles are appropriate for this new class of devices was open, with many equally promising answers. Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to open up different possibilities to exploit our manual dexterity and to provide users with richer interaction possibilities. This could be achieved through the use of physical objects as input mediators or through virtual interfaces that behave in a more realistic fashion. In order to gain a better understanding of the underlying design space, we chose an approach organized into two phases. First, two prototypes, each representing a specific interaction style (gesture-based interaction and tangible interaction), were implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as evaluation criteria. Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed based on these criteria. In a second stage, the lessons from these initial explorations are applied to inform the design of a novel model for digital tabletop interaction. This model is based on the combination of rich multi-touch sensing and a three-dimensional environment enriched by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands and physical objects. Our model makes digital tabletop interaction even more "natural". However, because the interaction (the sensed input and the displayed output) is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to conceptually pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for as-direct-as-possible interactions. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface. Finally, we present visual feedback mechanisms to give users the sense that they are actually lifting objects off the surface. This thesis contributes on various levels. We present several novel prototypes that we built and evaluated.
    We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the physicality of the user interface elements. Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed. We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model to enable as-direct-as-possible interactions with 3D data from above the table's surface.
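
    The model described above mediates interaction through a physics simulation, so fingers push virtual objects via collision and friction rather than setting their positions directly. The one-dimensional sketch below only illustrates that principle; it uses no real physics engine, and all names and constants are invented for the example.

        # Minimal sketch (assumed names, no real physics engine): each sensed touch
        # contact is mirrored by a kinematic proxy body; the virtual disc is moved only
        # through collision with proxies plus friction, never by direct assignment.

        class Disc:
            def __init__(self, x, radius=30.0):
                self.x, self.vx, self.radius = x, 0.0, radius

        def step(disc, contacts, dt=1 / 60, friction=0.92):
            """Advance the simulation: contacts push the disc, friction slows it."""
            for cx in contacts:                      # proxy bodies at finger positions
                overlap = disc.radius - abs(cx - disc.x)
                if overlap > 0:                      # collision: push the disc away
                    direction = 1.0 if disc.x >= cx else -1.0
                    disc.vx += direction * overlap / dt * 0.1
            disc.vx *= friction                      # friction-like damping
            disc.x += disc.vx * dt

        disc = Disc(x=100.0)
        for frame in range(30):                      # a finger approaching from the left
            step(disc, contacts=[80.0 + frame])
        print(round(disc.x, 1))                      # the disc has been pushed to the right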

    Supporting the Development Process of Multimodal and Natural Automotive User Interfaces

    Nowadays, driving a car places multi-faceted demands on the driver that go beyond maneuvering a vehicle through road traffic. The number of additional functions for entertainment, infotainment and comfort has increased rapidly in recent years. Each new function in the car is designed to make driving as pleasant as possible, but it also increases the risk that the driver will be distracted from the primary driving task. One of the most important goals for designers of new and innovative automotive user interfaces is therefore to keep driver distraction to a minimum while providing appropriate support to the driver. This goal can be achieved by providing tools and methods that support a human-centred development process. In this dissertation, a design space is presented that helps to analyze the use of context, to generate new ideas for automotive user interfaces and to document them. Furthermore, new opportunities for rapid prototyping are introduced. To be able to evaluate new automotive user interfaces and interaction concepts regarding their effect on driving performance, driving simulation software was developed within the scope of this dissertation. In addition, research results in the field of multimodal, implicit and eye-based interaction in the car are presented. The case studies illustrate systematic and comprehensive research on the opportunities of these kinds of interaction, as well as their effects on driving performance. We developed a prototype of a vibrating steering wheel that communicates navigation instructions, and another steering wheel prototype with a display integrated in the middle that enables handwriting input. A further case study explores a visual placeholder concept to assist drivers when using in-car displays while driving: when a driver looks at a display and then back at the street, the last gaze position on the display is highlighted to assist the driver when switching attention back to the display, which speeds up resuming an interrupted task. In another case study, we compared gaze-based interaction with touch and speech input. In the last case study, a driver-passenger video link system is introduced that enables the driver to have eye contact with the passenger without turning his head. On the whole, this dissertation shows that by using a new human-centred development process, modern interaction concepts can be developed in a meaningful way.
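
    The visual placeholder concept described above can be summarised in a few lines: remember the last on-display gaze position and highlight it when the driver's gaze returns. The sketch below is a hypothetical illustration, not code from the dissertation.

        # Minimal sketch (assumed names): the visual placeholder concept. When the
        # driver's gaze leaves the in-car display, the last on-display gaze position
        # is remembered; when gaze returns, that spot is highlighted to speed up
        # task resumption.

        class PlaceholderDisplay:
            def __init__(self):
                self.last_gaze = None        # (x, y) of the last fixation on the display
                self.on_display = False

            def gaze_sample(self, on_display, pos=None):
                if on_display and not self.on_display and self.last_gaze:
                    self.highlight(self.last_gaze)       # gaze just returned
                if on_display:
                    self.last_gaze = pos                 # keep tracking the fixation
                self.on_display = on_display

            def highlight(self, pos):
                print(f"highlight placeholder at {pos}")

        display = PlaceholderDisplay()
        display.gaze_sample(True, (420, 130))    # driver scrolls a list on the display
        display.gaze_sample(False)               # glance back to the road
        display.gaze_sample(True, (415, 128))    # returning: the placeholder is shown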

    Integrating Usability Models into Pervasive Application Development

    This thesis describes novel processes in two important areas of human-computer interaction (HCI) and demonstrates ways to combine them appropriately. First, prototyping plays an essential role in the development of complex applications, especially if a user-centred design process is followed. We describe and compare a set of existing toolkits and frameworks that support the development of prototypes in the area of pervasive computing. Based on these observations, we introduce the EIToolkit, which allows the quick generation of mobile and pervasive applications and addresses many issues found in previous work. Its application and use are demonstrated in several projects that build on the toolkit's architecture and implementation. Second, we present novel results and extensions in user modelling, specifically for predicting task completion time. We extended established concepts such as the Keystroke-Level Model to novel types of interaction with mobile devices, e.g. using optical markers and gestures. The design and creation of this model, as well as its validation, are presented in some detail in order to show its use and usefulness for making usability predictions. The third part is concerned with the combination of both concepts, i.e. how to integrate user models into the design process of pervasive applications. We first examine current development practice and show generic approaches to this problem. This leads to a concrete implementation of such a solution: an innovative integrated development environment that allows mobile applications to be developed quickly, supports the automatic generation of user models, and helps apply these models early in the design process. This can considerably ease model creation and can replace some types of costly user studies.
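
    The Keystroke-Level Model mentioned above predicts task time as the sum of standard operator durations. The sketch below uses the classic desktop operator values; the mobile operators introduced in the thesis (e.g. for gestures or optical markers) are not reproduced here, since their measured durations are not given in the abstract.

        # Minimal sketch: a Keystroke-Level Model prediction using the classic operator
        # times from Card, Moran and Newell (in seconds). The thesis extends the model
        # with mobile operators; their durations would be added to this table once
        # measured, so the values below are illustrative only.

        OPERATORS = {
            "K": 0.28,   # keystroke or button press
            "P": 1.10,   # point at a target
            "H": 0.40,   # home hands between devices
            "M": 1.35,   # mental preparation
        }

        def klm_time(sequence):
            """Predict task completion time for a string of operators, e.g. 'MPKK'."""
            return sum(OPERATORS[op] for op in sequence)

        # Select a menu entry: think, point at it, click.
        print(f"{klm_time('MPK'):.2f} s")   # 2.73 s

    Extending the operator table with measured mobile operators would let the same summation predict, for example, gesture- or marker-based selection tasks.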