
    Autonomous behaviour in tangible user interfaces as a design factor

    PhD Thesis. This thesis critically explores the design space of autonomous and actuated artefacts, considering how autonomous behaviours in interactive technologies might shape and influence users’ interactions and behaviours. Since the invention of gearing and clockwork, mechanical devices have been built that fascinate and intrigue people through their mechanical actuation. There seems to be something magical about moving devices, which draws our attention and piques our interest. Progress in the development of computational hardware is allowing increasingly complex commercial products to reach broad consumer markets. New technologies emerge rapidly, ranging from personal devices with substantial computational power to diverse user interfaces such as multi-touch surfaces and gestural input devices. Electronic systems are becoming smaller and smarter, combining sensing, control and actuation. From this, new opportunities arise for integrating more sensors and technology into physical objects. These trends raise specific questions about the impact smarter systems might have on people and interaction: how do people perceive smart systems that are tangible, and what implications does this perception have for user interface design? Which design opportunities are opened up by smart systems? Humans tend to attribute life-like qualities to inanimate objects, which evokes social behaviour towards technology. It might be possible to build user interfaces that utilise such behaviours to motivate people towards frequent use, or even to build relationships in which users care for their devices. The aim of such interfaces is not to increase efficiency, but to make interaction more engaging and to encourage people to bond with these tangible objects. This thesis sets out to explore autonomous behaviours in physical interfaces. More specifically, I am interested in the factors that make a user interpret an interface as autonomous. Through a review of literature concerned with animated objects, autonomous technology and robots, I have mapped out a design space exploring the factors that are important in developing autonomous interfaces. Building on this, and drawing on workshops conducted with other researchers, I have developed a framework that identifies key elements for the design of Tangible Autonomous Interfaces (TAIs). To validate the dimensions of this framework and to further unpack the impact that interacting with autonomous interfaces has on users, I have adopted a ‘research through design’ approach. I have iteratively designed and realised a series of autonomous, interactive prototypes that demonstrate the potential of such interfaces to establish themselves as social entities. Through two deeper case studies, consisting of an actuated helium balloon and a desktop lamp, I provide insights into how autonomy could be implemented in Tangible User Interfaces. My studies revealed that, through their autonomous behaviour (guided by the framework), these devices established themselves in interaction as social entities. They also proved acceptable, especially when people were able to find a purpose for them in their lives. The thesis closes with a discussion of findings and provides specific implications for the design of autonomous behaviour in interfaces.

    Hand Gesture Interaction with Human-Computer

    Hand gestures are an important modality for human-computer interaction. Compared to many existing interfaces, hand gestures have the advantages of being easy to use, natural, and intuitive. Successful applications of hand gesture recognition include control of computer games, human-robot interaction, and sign language recognition, to name a few. Vision-based recognition systems can give computers the capability of understanding and responding to hand gestures. The paper gives an overview of the field of hand gesture interaction with computers and describes the early stages of a project about gestural command sets, an issue that has often been neglected. Currently, we have built a first prototype for exploring the use of pie and marking menus in gesture-based interaction. The purpose is to study whether such menus, with practice, could support the development of autonomous gestural command sets. The scenario is remote control of home appliances, such as TV sets and DVD players, which in the future could be extended to the more general scenario of ubiquitous computing in everyday situations. Some early observations are reported, mainly concerning problems with user fatigue and precision of gestures. Future work is discussed, such as introducing flow menus to reduce fatigue and control menus for continuous control functions. The computer vision algorithms will also have to be developed further.
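
    As an illustration of the menu idea mentioned above (and not the paper's prototype), here is a minimal Python sketch of how a marking-menu stroke direction might be mapped to a remote-control command; the command set and the distance threshold are invented for the example.

```python
import math

# Hypothetical eight-way marking menu: the overall direction of a stroke selects a command.
# The command names (for TV/DVD remote control) are illustrative, not the paper's set.
COMMANDS = ["volume_up", "channel_up", "play", "pause",
            "volume_down", "channel_down", "stop", "menu"]

def select_command(start, end):
    """Map a stroke from start=(x, y) to end=(x, y) onto one of eight 45-degree sectors."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < 20:                        # ignore strokes too short to be intentional
        return None
    angle = math.degrees(math.atan2(-dy, dx)) % 360    # screen y-axis points down
    sector = int(((angle + 22.5) % 360) // 45)         # sectors centered on the main directions
    return COMMANDS[sector]

print(select_command((100, 100), (180, 100)))  # rightward stroke -> "volume_up"
```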

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, reduce the number of gestures in taxonomies and improve usability. To validate this framework, a proof-of-concept prototype has been developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests yielded high scores. Further investigation of the context information has addressed the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
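
    To illustrate the notion of functional gestures described in this abstract, the following Python sketch shows how a single gesture could resolve to different actions depending on context; the gestures, contexts and actions are hypothetical and are not taken from the thesis.

```python
from typing import Optional

# Hypothetical "functional gesture" lookup: the same gesture resolves to different actions
# depending on the context (here, the device the user is addressing).
FUNCTIONAL_GESTURES = {
    ("swipe_up", "lamp"):       "increase_brightness",
    ("swipe_up", "thermostat"): "increase_temperature",
    ("swipe_up", "speaker"):    "increase_volume",
    ("circle",   "lamp"):       "toggle_power",
    ("circle",   "speaker"):    "toggle_mute",
}

def resolve(gesture: str, context: str) -> Optional[str]:
    """Return the action for a gesture in the current context, if one is defined."""
    return FUNCTIONAL_GESTURES.get((gesture, context))

print(resolve("swipe_up", "lamp"))     # -> increase_brightness
print(resolve("swipe_up", "speaker"))  # -> increase_volume
```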

    DOGeye: Controlling your Home with Eye Interaction

    Home automation, with its increasing availability and reliability and its ever decreasing costs, is gaining momentum and starting to become a viable solution for enabling people with disabilities to interact autonomously with their homes and to communicate better with other people. However, especially for people with severe mobility impairments, there is still a lack of tools and interfaces for effective control of and interaction with home automation systems, and general-purpose solutions are seldom applicable due to the complexity, asynchronicity, time-dependent behavior, and safety concerns typical of the home environment. This paper focuses on user-environment interfaces based on eye tracking technology, which is often the only viable interaction modality for such users. We propose an eye-based interface tackling the specific requirements of smart environments, already outlined in a public Recommendation issued by the COGAIN European Network of Excellence. The proposed interface has been implemented as a software prototype based on the ETU universal driver, making it potentially able to run on a variety of eye trackers, and it is compatible with a wide set of smart home technologies handled by the Domotic OSGi Gateway. A first interface evaluation, with user testing sessions, has been carried out, and the results show that the interface is quite effective and usable without discomfort by people with nearly regular eye movement control.
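
    Eye-based interfaces of this kind typically need a selection mechanism that avoids the "Midas touch" problem, and dwell-time selection is a common choice. The Python sketch below illustrates that general idea only; it is not the DOGeye implementation or the ETU driver API.

```python
import time

class DwellSelector:
    """Trigger a target once the gaze has rested on it for dwell_time seconds.

    Illustrative only: real eye trackers deliver gaze samples through their own
    APIs (e.g. via the ETU universal driver), which are not modeled here.
    """
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self.current_target = None
        self.entered_at = None

    def update(self, target, now=None):
        """Feed the UI target under the gaze point (or None); return a selection or None."""
        now = time.monotonic() if now is None else now
        if target != self.current_target:
            self.current_target, self.entered_at = target, now   # gaze moved to a new target
            return None
        if target is not None and now - self.entered_at >= self.dwell_time:
            self.entered_at = float("inf")                       # fire once until the gaze moves away
            return target
        return None

selector = DwellSelector(dwell_time=1.0)
selector.update("lights_on", now=0.0)
print(selector.update("lights_on", now=1.2))   # -> "lights_on" after one second of dwell
```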

    Ubiquitous interactive displays: magical experiences beyond the screen

    Ubiquitous Interactive Displays are interfaces that extend interaction beyond traditional flat screens. This thesis presents a series of proof-of-concept systems exploring three interactive displays. The first part of the thesis explores interactive projective displays, where projected light transforms and enhances physical objects in our environment. The second part explores gestural displays, where traditional mobile devices such as our smartphones are equipped with depth sensors to enable input and output around the device. Finally, I introduce a new tactile display that imbues our physical spaces with a sense of touch in mid-air without requiring the user to wear a physical device. These systems explore a future where interfaces are inherently everywhere, connecting our physical objects and spaces through visual, gestural and tactile displays. I aim to demonstrate new technical innovations as well as compelling interactions with one or more users and their physical environment. These new interactive displays enable novel experiences beyond flat screens that blur the line between the physical and the virtual world.

    MIFTel: a multimodal interactive framework based on temporal logic rules

    Human-computer and multimodal interaction are increasingly used in everyday life. Machines are able to get more from the surrounding world, assisting humans in different application areas. In this context, the correct processing and management of the signals provided by the environment is crucial for structuring the data. Different sources and acquisition times can be exploited to improve recognition results. On the basis of these assumptions, we propose a multimodal system that exploits Allen’s temporal logic combined with a prediction method. The main objective is to correlate user events with system reactions. After post-processing the incoming data from different signal sources (RGB images, depth maps, sounds, proximity sensors, etc.), the system manages the correlations between recognition/detection results and events in real time to create an interactive environment for the user. To increase recognition reliability, a predictive model is also associated with the proposed method. The modularity of the system allows fully dynamic development and upgrading with custom modules. Finally, a comparison with other similar systems is shown, underlining the high flexibility and robustness of the proposed event management method.
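
    As background for the event-correlation idea, the following Python sketch classifies a few of Allen's thirteen interval relations between two timestamped events; the event names and times are made up, and the code is not part of MIFTel.

```python
from collections import namedtuple

# A timestamped event, e.g. a detected gesture or a burst of sound (times in seconds).
Interval = namedtuple("Interval", "label start end")

def allen_relation(a: Interval, b: Interval) -> str:
    """Classify a subset of Allen's thirteen interval relations between a and b."""
    if a.end < b.start:
        return "before"
    if a.end == b.start:
        return "meets"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if b.start < a.start and a.end < b.end:
        return "during"
    if a.start < b.start < a.end < b.end:
        return "overlaps"
    return "other"   # remaining relations (starts, finishes, and the inverses) are omitted here

gesture = Interval("wave_detected", 2.0, 3.5)
speech  = Interval("speech_detected", 3.0, 5.0)
print(allen_relation(gesture, speech))   # -> overlaps
```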

    Development and implementation of a mobile AR-Based assistance system on the Android-platform for the SmartFactory kl

    Campos García, R. (2011). Development and implementation of a mobile AR-Based assistance system on the Android-platform for the SmartFactory kl. http://hdl.handle.net/10251/11632

    Compact and kinetic projected augmented reality interface

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. By Natan Linder. Cataloged from PDF version of thesis. Includes bibliographical references (p. 143-150). For quite some time, researchers and designers in the field of human-computer interaction have strived to better integrate information interfaces into our physical environment. They envisioned a future where computing and interface components would be integrated into the physical environment, creating a seamless experience that uses all our senses. One possible approach to this problem employs projected augmented reality. Such systems project digital information and interfaces onto the physical world and are typically implemented using interactive projector-camera systems. This thesis is centered on the design and implementation of a new form factor for computing, a system we call LuminAR. LuminAR is a compact and kinetic projected augmented reality interface embodied in familiar everyday objects, namely a light bulb and a task light. It allows users to dynamically augment physical surfaces and objects with superimposed digital information using gestural and multi-touch interfaces. This thesis documents LuminAR's design process, hardware and software implementation, and interaction techniques. The work is motivated through a set of applications that explore scenarios for interactive and kinetic projected augmented reality interfaces. It also opens the door for further exploration of kinetic interaction and promotes the adoption of projected augmented reality as a commonplace user interface modality. This thesis work was partially supported by a research grant from Intel Corporation.
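
    Projector-camera systems such as the one described here must register projected content with the physical surface seen by the camera. The sketch below shows only the standard homography step with OpenCV, using made-up calibration points; it is not LuminAR's software.

```python
import numpy as np
import cv2

# Illustrative projector-camera registration: given four corners of a surface as seen by
# the camera and the projector pixels that should land on them, estimate a homography.
# The coordinates are invented; a real system would obtain them through calibration.
surface_in_camera = np.float32([[120, 80], [520, 95], [510, 400], [130, 410]])
projector_targets = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

H, _ = cv2.findHomography(surface_in_camera, projector_targets)

def camera_to_projector(point, H=H):
    """Map a camera-space point (e.g. a detected fingertip) into projector pixels."""
    src = np.float32([[point]])                     # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(src, H)[0, 0]

print(camera_to_projector((320, 240)))   # where to draw feedback under the user's finger
```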

    Interaction Design: Foundations, Experiments

    Interaction Design: Foundations, Experiments is the result of a series of projects, experiments and curricula aimed at investigating the foundations of interaction design in particular and design research in general. The first part of the book, Foundations, deals with foundational theoretical issues in interaction design. An analysis of two categorical mistakes, the empirical and interactive fallacies, forms a background to a discussion of interaction design as act design and of computational technology as material in design. The second part of the book, Experiments, describes a range of design methods, programs and examples that have been used to probe foundational issues through systematic questioning of what is given. Based on experimental design work such as Slow Technology, Abstract Information Displays, Design for Sound Hiders, Zero Expression Fashion, and IT+Textiles, this section also explores how design experiments can play a central role when developing new design theory.