266 research outputs found

    Human factors of ubiquitous computing: ambient cueing in the digital kitchen?

    This thesis is concerned with the uses of Ubiquitous Computing (UbiComp) in everyday domestic environments. The concept of UbiComp promises to shift computing away from the desktop into everyday objects and settings. It has the twin goals of providing ‘transparent’ technologies, in which information is thoroughly embedded into everyday activities and objects (thus making the computer invisible to the user), and, more importantly, of seamlessly integrating these technologies into the activities of their users. However, this raises the challenge of how best to support interaction with a ‘transparent’ or ‘invisible’ technology: if the technology is made visible, it will attract the user's attention away from the task at hand, but if it is hidden, how can the user cope with malfunctions or other problems in the technology? We approach the design of Human-Computer Interaction in the ubiquitous environment through the use of ambient displays, i.e. subtle cueing embedded in the environment that is intended to guide human activity. This thesis draws on the concept of stimulus-response compatibility and applies it to the design of ambient displays. The thesis emphasizes the need to understand users’ perspectives and responses to any proposed approach. Therefore, its main contributions focus on approaches to improving human performance in the ubiquitous environment through ambient displays.

    Sensitive and Makeable Computational Materials for the Creation of Smart Everyday Objects

    The vision of computational materials is to create smart everyday objects using materials that have sensing and computational capabilities embedded into them. However, today’s development of computational materials is limited because their interfaces (i.e. sensors) are unable to support a wide range of human interactions or to withstand the fabrication methods used for everyday objects (e.g. cutting and assembling). These barriers hinder citizens from creating smart everyday objects with computational materials on a large scale. To overcome the barriers, this dissertation presents approaches to developing computational materials that are 1) sensitive to a wide variety of user interactions, including explicit interactions (e.g. user inputs) and implicit interactions (e.g. user contexts), and 2) makeable, i.e. able to withstand a wide range of fabrication operations such as cutting and assembling. I exemplify the approaches through five research projects on two common materials, textile and wood. For each project, I explore how a material interface can be made to sense user inputs or activities, and how it can be optimized to balance sensitivity and fabrication complexity. I discuss the sensing algorithms and machine learning models that interpret the sensor data as high-level abstractions and interactions. I show practical applications of the developed computational materials and present evaluation studies that validate their performance and robustness. At the end of this dissertation, I summarize the contributions of my thesis and discuss future directions for the vision of computational materials.
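
    As a rough illustration of interpreting material-sensor data as high-level interactions, the sketch below trains a generic classifier on invented window features from a hypothetical sensing textile; the feature layout, labels, and classifier choice are assumptions, not the dissertation's actual pipeline.

    # A minimal, hypothetical pipeline: classify short windows of readings from a
    # sensing textile into interaction labels. Features and labels are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Each sample: [mean, variance, peak] of a short window of capacitance readings.
    X_train = [[0.02, 0.001, 0.05], [0.40, 0.020, 0.70], [0.15, 0.050, 0.45]]
    y_train = ["no_touch", "press", "swipe"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # Interpret a new sensing window as a high-level interaction label.
    window_features = [0.38, 0.018, 0.66]
    print(model.predict([window_features])[0])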

    A survey on wireless indoor localization from the device perspective

    With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals. This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.
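
    To make the device-based mode concrete, the following is a minimal, hypothetical sketch of Wi-Fi RSSI fingerprinting with k-nearest-neighbour matching, one common localization principle covered by such surveys; the fingerprint database, access-point names, and signal values are invented for illustration.

    # Hypothetical sketch of device-based localization via Wi-Fi RSSI fingerprinting
    # with k-nearest-neighbour matching; database, AP names and values are invented.
    import math

    fingerprints = [
        # (x, y) position in metres -> RSSI per access point (dBm)
        ((0.0, 0.0), {"ap1": -40, "ap2": -70, "ap3": -65}),
        ((5.0, 0.0), {"ap1": -65, "ap2": -45, "ap3": -72}),
        ((0.0, 5.0), {"ap1": -68, "ap2": -71, "ap3": -42}),
    ]

    def rssi_distance(a, b):
        """Euclidean distance between two RSSI vectors over their shared APs."""
        shared = set(a) & set(b)
        return math.sqrt(sum((a[ap] - b[ap]) ** 2 for ap in shared))

    def locate(observed, k=2):
        """Estimate position as the centroid of the k closest fingerprints."""
        nearest = sorted(fingerprints, key=lambda fp: rssi_distance(observed, fp[1]))[:k]
        return (sum(p[0][0] for p in nearest) / k, sum(p[0][1] for p in nearest) / k)

    print(locate({"ap1": -50, "ap2": -60, "ap3": -70}))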

    Human activity recognition for pervasive interaction

    This thesis addresses the challenge of computing food-preparation context in the kitchen. The automatic recognition of fine-grained human activities and food ingredients is realized through pervasive sensing, which we achieve by instrumenting kitchen objects such as knives, spoons, and chopping boards with sensors. Context recognition in the kitchen lies at the heart of a broad range of real-world applications. In particular, activity and food ingredient recognition in the kitchen is an essential component of situated services such as automatic prompting for cognitively impaired kitchen users and digital situated support for healthier-eating interventions. Previous work, however, has addressed the activity recognition problem by exploring high-level human activities using wearable sensing (i.e. sensors worn on the human body) or using technologies that raise privacy concerns (i.e. computer vision). Although such approaches have yielded significant results for a number of activity recognition problems, they are not applicable to our domain of investigation, for which we argue that the technology itself must be genuinely “invisible”, thereby allowing users to perform their activities in a completely natural manner. In this thesis we describe the development of pervasive sensing technologies and algorithms for fine-grained human activity and food ingredient recognition in the kitchen. After reviewing previous work on food and activity recognition, we present three systems that constitute increasingly sophisticated approaches to the challenge of kitchen context recognition. Two of these systems, Slice&Dice and Class-based Threshold Dynamic Time Warping (CBT-DTW), recognize fine-grained food preparation activities. Slice&Dice is a proof-of-concept application, whereas CBT-DTW is a real-time application that also addresses the problem of recognising unknown activities. The final system, KitchenSense, is a real-time context recognition framework that deals with the recognition of a more complex set of activities, and includes the recognition of food ingredients and events in the kitchen. For each system, we describe the prototyping of pervasive sensing technologies and algorithms, as well as real-world experiments and empirical evaluations that validate the proposed solutions. This research was supported by the Vietnamese government’s 322 project, executed by the Vietnamese Ministry of Education and Training.
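
    The class-based threshold idea behind CBT-DTW can be pictured with a small sketch: each activity class keeps a template sequence and an acceptance threshold, and sequences too far from every template are rejected as unknown. This is an illustrative reading rather than the thesis's actual implementation; the templates and thresholds below are invented.

    # Illustrative class-based threshold DTW: each activity class keeps a template
    # sequence and an acceptance threshold; sequences far from every template are
    # labelled "unknown". Templates and thresholds are invented.
    def dtw(a, b):
        """Dynamic time warping distance between two 1-D sequences."""
        inf = float("inf")
        cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
        cost[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[len(a)][len(b)]

    templates = {  # per-class template sequence and acceptance threshold
        "slicing":  ([0.1, 0.9, 0.1, 0.9, 0.1], 2.0),
        "stirring": ([0.5, 0.6, 0.5, 0.6, 0.5], 1.0),
    }

    def classify(sequence):
        """Return the closest class whose threshold is met, else "unknown"."""
        best_label, best_dist = "unknown", float("inf")
        for label, (template, threshold) in templates.items():
            d = dtw(sequence, template)
            if d <= threshold and d < best_dist:
                best_label, best_dist = label, d
        return best_label

    print(classify([0.2, 0.8, 0.2, 0.8, 0.2]))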

    Tangible interaction with anthropomorphic smart objects in instrumented environments

    A major technological trend is to augment everyday objects with sensing, computing, and actuation power in order to provide new services beyond the objects' traditional purpose, indicating that such smart objects might become an integral part of our daily lives. To be able to interact with smart object systems, users will need appropriate interfaces that take their distinctive characteristics into account. Concepts of tangible and anthropomorphic user interfaces are combined in this dissertation to create a novel paradigm for smart object interaction. This work provides an exploration of the design space, introduces design guidelines, and provides a prototyping framework to support the realisation of the proposed interface paradigm. Furthermore, novel methods for expressing personality and emotion by auditory means are introduced and elaborated, constituting essential building blocks for anthropomorphised smart objects. Two experimental user studies are presented, confirming the endeavours to reflect personality attributes through prosody-modelled synthetic speech and to express emotional states through synthesised affect bursts. The dissertation concludes with three example applications, demonstrating the potential of the concepts and methodologies elaborated in this thesis.

    The integration of information technology into everyday objects is a current technological trend that allows such objects to offer new services beyond their original purpose through the use of sensing, actuation, and wireless communication. Using these so-called smart objects requires novel user interfaces that take the specific characteristics and application areas of such systems into account. This dissertation combines concepts from tangible interaction and anthropomorphic user interfaces to develop a new interaction paradigm for smart objects. The work explores the design space and highlights relevant aspects from related disciplines. Building on this, guidelines are introduced to accompany and support the design of user interfaces following the proposed approach. For a prototypical implementation of such user interfaces, an architecture is presented that addresses the requirements of smart object systems in instrumented environments. An important component is sensor processing, which among other things enables interaction detection on the object and thus also physical input. Furthermore, novel methods for the auditory expression of emotion and personality are developed, which constitute essential building blocks for anthropomorphised smart objects and were examined in user studies. The dissertation concludes with the description of three applications that were developed in the course of this work and reflect the potential of the concepts and methods elaborated here.

    Designing Familiar Open Surfaces

    While participatory design makes end-users part of the design process, we might also want the resulting system to be open for interpretation, appropriation, and change over time to reflect its usage. But how can we design for appropriation? We need to strike a balance between making the user an active co-constructor of system functionality and imposing an overly strong, interpretative design that does everything for the user, thereby inhibiting their own creative use of the system. By revisiting five systems in which appropriation has happened both within and outside the intended use, we show how it is possible to design with open surfaces. These open surfaces must be such that users can fill them with their own interpretations and content; they should be familiar to users, resonating with their real-world practice and understanding, and thereby shaping use of the system.

    Integrated Control of Microfluidics – Application in Fluid Routing, Sensor Synchronization, and Real-Time Feedback Control

    Microfluidic applications range from combinatorial chemical synthesis to high-throughput screening, with platforms integrating analog perfusion components, digitally controlled microvalves, and a range of sensors that demand a variety of communication protocols. A comprehensive solution for microfluidic control has to support an arbitrary combination of microfluidic components and meet the demand for easy-to-operate systems arising from the growing community of unspecialized microfluidics users. It should also be an easily modifiable and extendable platform that offers adequate computational resources, preferably without the need for a local computer terminal, for increased mobility. Here we describe several implementations of microfluidic control technologies and propose a microprocessor-based unit that unifies them. Integrated control can streamline the generation of the complex perfusion sequences required for sensor-integrated microfluidic platforms that demand iterative operating procedures such as calibration, sensing, data acquisition, and decision making. It also enables the implementation of intricate optimization protocols, which often require significant computational resources. System integration is an imperative developmental milestone for the field of microfluidics, both in terms of the scalability of increasingly complex platforms that still lack standardization, and in terms of the incorporation and adoption of emerging technologies in biomedical research. Here we describe a modular integration and synchronization of a complex multicomponent microfluidic platform.
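
    The iterative calibration, sensing, and decision-making procedures mentioned above can be pictured as a simple sense-decide-actuate loop. The sketch below is a hedged illustration with an invented sensor reading, setpoint, and proportional correction; it is not the proposed microprocessor unit's firmware.

    # Hedged sketch of an iterative sense/decide/actuate loop that an integrated
    # microfluidic controller might run; device drivers, setpoint and the simple
    # proportional correction are illustrative assumptions.
    import time

    SETPOINT = 7.4   # desired sensor reading (e.g. pH in a reaction chamber)
    GAIN = 0.05      # proportional gain for the flow-rate correction

    def read_sensor():
        """Placeholder for an actual sensor driver (e.g. over I2C or serial)."""
        return 7.1

    def set_pump_rate(rate_ul_per_min):
        """Placeholder for the pump/valve actuation layer."""
        print(f"pump rate -> {rate_ul_per_min:.2f} uL/min")

    def control_loop(cycles=5, base_rate=10.0):
        rate = base_rate
        for _ in range(cycles):
            error = SETPOINT - read_sensor()   # sensing
            rate += GAIN * error               # decision: proportional correction
            set_pump_rate(rate)                # actuation
            time.sleep(0.1)                    # pacing of the iterative procedure

    control_loop()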

    Augmenting objects at home through programmable sensor tokens: A design journey

    End-user development for the home has been gaining momentum in research. Previous works demonstrate feasibility and potential, but there is a lack of analysis of the extent of technology needed and its impact on the diversity of activities that can be supported. We present a design exploration with a tangible end-user toolkit for programming smart tokens embedding different sensing technologies. Our system allows users to augment physical objects with smart tags and to use trigger-action programming with multiple triggers to define smart behaviors. We contribute through a field-oriented study that provided insights on (i) households' activities as emerging from people's lived experience, in terms of high-level goals, their ephemerality or recurrence, and the types of triggers, actions, and interactions with augmented objects, and (ii) the programmability needed to support the desired behaviors. We conclude that, while trigger-action programming covers most scenarios, more advanced programming and direct interaction with physical objects spur novel uses. This work was supported by the 2015 UC3M Mobility Grant, the Spanish Ministry of Economy and Competitivity (TIN2014-56534-R, CREAx) and by the Academy of Finland (286440, Evidence).
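
    The multi-trigger rules described above can be sketched as follows; the object state, trigger conditions, and reminder action are invented examples, not the toolkit's actual rule format.

    # Minimal sketch of trigger-action rules with multiple triggers; the object
    # state, trigger conditions and reminder action are invented examples.
    rules = [
        {
            "triggers": [
                lambda state: state.get("pillbox_opened") is False,
                lambda state: state.get("hour", 0) >= 21,
            ],
            "action": lambda: print("Reminder: evening medication not taken yet"),
        },
    ]

    def evaluate(state):
        """Fire every rule whose triggers all hold for the current object state."""
        for rule in rules:
            if all(trigger(state) for trigger in rule["triggers"]):
                rule["action"]()

    evaluate({"pillbox_opened": False, "hour": 22})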

    Exploration of programming by demonstration approaches for smart environments

    The number of smart electronic devices like smartphones, tablet computers, and embedded sensors/actuators in our domestic and work environments is constantly growing. Some of them work as stand-alone devices while others already collaborate with each other. It is apparent that once a common layer for device intercommunication between major consumer device manufacturers has been agreed upon, a new class of networked smart applications will arise. These applications will dynamically utilise the required sensors and actuators of a smart environment to optimally achieve tasks for us human users. Inhabitants of such environments already interact with dozens of computers per day. Much research has addressed hardware and software issues for future smart environments, but few works have focused on the users. An important research topic lies in finding simple, intuitive, yet powerful enough approaches that allow end-users to create and modify the behaviour of the smart environments in which they live and work according to their needs. I believe that for ubiquitous computing environments to reach their full potential, enabling end-user programming is one of the key requirements. This thesis describes the exploration of various approaches to a "Do It Yourself" philosophy in smart environment applications, providing inhabitants with appropriate tools that empower them to build their environments in accordance with their needs and with enough room for personal creativity. To this end, I choose speech as the main input from end users, along with demonstration of certain parts of the overall approach, for building applications for smart environments. The resulting application is built on top of the meSchup platform, developed during the meSchup FP7 EU project at the VIS institute in Stuttgart, which provides middleware for seamlessly interconnecting heterogeneous devices. The resulting web application, called "Speechweaver", combines speech, programming by demonstration, and automatic code generation into a usable and intuitive approach for creating and modifying the rule-based behaviour of smart environments in place.
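
    A loose sketch of the speech-plus-demonstration idea follows: a demonstrated sensor event is paired with a spoken command and turned into a generated if-then rule. The event format, phrase parsing, and generated snippet are assumptions for illustration and do not reflect the actual Speechweaver or meSchup APIs.

    # Loose sketch of programming by demonstration plus speech: a demonstrated
    # sensor event and a spoken command are turned into a generated if-then rule.
    # Event format, phrase parsing and output are assumptions, not meSchup APIs.
    demonstrated_event = {"device": "kitchen_button", "property": "pressed", "value": True}
    spoken_command = "turn on the coffee machine"

    def generate_rule(event, command):
        """Produce a rule snippet binding the demonstrated trigger to the spoken action."""
        target = command.replace("turn on the ", "").replace(" ", "_")
        return (
            f"if get('{event['device']}', '{event['property']}') == {event['value']}:\n"
            f"    set('{target}', 'power', True)"
        )

    print(generate_rule(demonstrated_event, spoken_command))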