9 research outputs found

    Earables: Wearable Computing on the Ears

    Headphones have become widely adopted by consumers because they offer private audio channels, for example for listening to music, watching the latest movies while commuting, or making hands-free phone calls. Thanks to this clear primary purpose, headphones have already achieved wider adoption than other wearables such as smart glasses. In recent years, a new class of wearables has emerged, referred to as "Earables". These devices are designed to be worn in or around the ears and contain various sensors that extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body provides an excellent platform for sensing a wide variety of properties, processes, and activities. Although earables research has already made some progress, their potential is currently not fully exploited. The goal of this dissertation is therefore to provide new insights into the capabilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately to establish them as a versatile sensing platform for augmenting human capabilities. To establish a solid foundation for the dissertation, this work synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications. By linking low-level sensing principles to higher-level phenomena, the dissertation then summarizes work from several areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. Building on existing research on physiological monitoring and health with earables, this dissertation presents advanced algorithms, statistical analyses, and empirical studies to demonstrate the feasibility of measuring respiratory rate and detecting episodes of increased coughing frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote a healthier lifestyle and enable proactive healthcare. Furthermore, this dissertation introduces an innovative eye-tracking approach called "earEOG" that is intended to facilitate activity recognition. By systematically analyzing electrode potentials measured around the ears with a modified headphone, the dissertation opens up a new way of measuring gaze direction that is less obtrusive and more comfortable than previous approaches. In addition, a regression model is introduced to predict absolute changes in gaze angle based on earEOG. This development opens up new opportunities for research that integrates seamlessly into everyday life and provides deeper insights into human behavior.
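The regression model mentioned above maps earEOG potentials to absolute gaze-angle changes. Its exact form is not given in the abstract, so the following is only a minimal illustrative sketch that assumes a simple linear relationship between a differential around-the-ear electrode amplitude and the gaze-angle change; the variable names and calibration procedure are hypothetical.

    import numpy as np

    def fit_eareog_gaze_model(eog_uV, gaze_deg):
        """Fit gaze_change ~ a * eog + b from per-user calibration trials.
        eog_uV: differential earEOG amplitude per trial (assumed unit: microvolts).
        gaze_deg: known gaze-angle change per trial, in degrees."""
        a, b = np.polyfit(eog_uV, gaze_deg, deg=1)
        return a, b

    def predict_gaze_change(eog_uV, a, b):
        # Apply the fitted linear model to a new earEOG amplitude.
        return a * eog_uV + b

In practice, a short per-user calibration with known gaze targets would be needed to obtain the training pairs.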
This work further shows how the unique form factor of earables can be combined with sensing to detect novel phenomena. To improve the interaction capabilities of earables, the dissertation introduces a discreet input technique called "EarRumble", which is based on the voluntary control of the tensor tympani muscle in the middle ear. The dissertation offers insights into the prevalence, usability, and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ to one that can not only receive signals but also produce output signals. In essence, the ear is used as an additional interactive medium that enables hands-free and eyes-free communication between human and machine. EarRumble introduces an interaction technique that users describe as "magical and almost telepathic" and reveals considerable untapped potential in the earables domain. Building on the preceding results from the various application areas and research findings, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable comprises a range of advanced sensing capabilities suitable for various ear-based research applications while being easy to manufacture. This lowers the barrier to entry for ear-based sensing research, and OpenEarable thereby contributes to unlocking the full potential of earables. In addition, the dissertation contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation bridges the gap between fundamental research on ear-based sensing and its practical use in real-world scenarios. In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical analyses, empirical studies, and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by turning these audio-focused devices into a platform that offers a wide range of advanced sensing capabilities for capturing properties, processes, and activities. This reorientation enables earables to establish themselves as a significant wearable category, and the vision of earables as a versatile sensing platform for augmenting human capabilities thus becomes increasingly real.
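Among the physiological-monitoring results described earlier is the feasibility of measuring respiratory rate with in-ear accelerometers and gyroscopes. The dissertation's actual algorithm is not given in the abstract; the sketch below only illustrates a common baseline approach (band-pass filtering one accelerometer axis and picking the dominant spectral peak), with all cut-off values chosen purely for illustration.

    import numpy as np
    from scipy.signal import butter, filtfilt, periodogram

    def respiratory_rate_bpm(accel_axis, fs):
        """Estimate breaths per minute from one in-ear accelerometer axis.
        accel_axis: 1-D signal; fs: sampling rate in Hz."""
        low, high = 0.1, 0.7                      # ~6-42 breaths/min (assumed band)
        b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, accel_axis - np.mean(accel_axis))
        freqs, power = periodogram(filtered, fs=fs)
        return 60.0 * freqs[np.argmax(power)]     # dominant frequency in Hz -> bpm

Gyroscope fusion and more robust peak tracking would be natural extensions, but they are omitted in this sketch.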

    Enhancing Energy Efficiency and Privacy Protection of Smart Devices

    Smart devices are experiencing rapid development and great popularity. Various smart products available nowadays have largely enriched people’s lives. While users are enjoying their smart devices, there are two major user concerns: energy efficiency and privacy protection. In this dissertation, we propose solutions to enhance energy efficiency and privacy protection on smart devices. First, we study different ways to handle WiFi broadcast frames during smartphone suspend mode. We reveal the dilemma of existing methods: either receive all of them and suffer high power consumption, or receive none of them and sacrifice functionality. To address the dilemma, we propose the Software Broadcast Filter (SBF). SBF is smarter than the “receive-none” method because it blocks only useless broadcast frames and does not impair application functionality. SBF is also more energy efficient than the “receive-all” method. Our trace-driven evaluation shows that SBF saves up to 49.9% of energy consumption compared to the “receive-all” method. Second, we design a system, namely HIDE, to further reduce smartphone energy wasted on useless WiFi broadcast frames. With the HIDE system, smartphones in suspend mode do not receive useless broadcast frames or wake up to process useless broadcast frames. Our trace-driven simulation shows that the HIDE system saves 34%-75% of energy for the Nexus One phone when 10% of the broadcast frames are useful to the smartphone. Our overhead analysis demonstrates that the HIDE system has negligible impact on network capacity and packet round-trip time. Third, to better protect user privacy, we propose a continuous and non-invasive authentication system for wearable glasses, namely GlassGuard. GlassGuard discriminates between the owner and an imposter using biometric features from touch gestures and voice commands, which are all available during normal user interactions. With data collected from 32 users on Google Glass, we show that GlassGuard achieves a 99% detection rate and a 0.5% false alarm rate after 3.5 user events on average when all types of user events are available with equal probability. Under five typical usage scenarios, the system has a detection rate above 93% and a false alarm rate below 3% after fewer than 5 user events.
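The core idea of SBF is to inspect each broadcast frame while the phone is suspended and deliver only the few that matter to the host. The abstract does not specify the filter rules, so the following is a minimal sketch of the idea, assuming frames are parsed with scapy and treating an ARP request for the phone's own address as the only "useful" case kept here; real rules would be richer.

    from scapy.all import ARP, Ether  # parsing library chosen only for illustration

    def should_wake_host(frame_bytes, own_ip):
        """Return True only for broadcast frames the suspended host must handle."""
        pkt = Ether(frame_bytes)
        if pkt.haslayer(ARP):
            arp = pkt[ARP]
            # An ARP who-has (op == 1) asking for our own IP must be answered.
            return arp.op == 1 and arp.pdst == own_ip
        # Everything else (e.g., chatty discovery traffic) is dropped while suspended.
        return False

Because the decision is made below the application layer, applications keep working while useless frames no longer wake the device.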

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets has risen steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities on mobile devices as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three particular areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction and motion gestures. For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override capability for the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown through a decrease in task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks. We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated into future mobile devices to expand their input capabilities. In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates and require a small amount of training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
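Semi-Automatic Zooming, as described above, couples the zoom level to the tilt-controlled scroll speed while letting the user override it. The exact mapping used in the study is not given in the abstract; the sketch below shows one plausible form, where the viewing "altitude" grows with scroll speed (zooming out) and a manual offset biases the result. All constants are illustrative.

    def semi_automatic_zoom(scroll_speed, manual_offset,
                            base_altitude=1.0, gain=0.5,
                            min_altitude=1.0, max_altitude=8.0):
        """Viewing 'altitude' for a tilt-scrolled map: higher means more zoomed out."""
        automatic = base_altitude + gain * abs(scroll_speed)  # zoom out when moving fast
        altitude = automatic + manual_offset                  # user override, may be negative
        return max(min_altitude, min(max_altitude, altitude))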
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness and improved usability.

    Secure Authentication for Mobile Users

    Biometric authentication such as fingerprint and face biometrics has changed the main authentication method on mobile devices. People easily enroll their fingerprint or face templates in different authentication systems to take advantage of easy access to the smartphone without needing to remember and enter conventional PINs and passwords. However, they are not aware that they are storing their long-lasting physiological or behavioral characteristics on insecure platforms (i.e., on mobile phones or in cloud storage), threatening the privacy of their biometric templates and their identities. Therefore, an authentication scheme is required that preserves the privacy of users’ biometric templates and securely authenticates them without relying on insecure and untrustworthy platforms. Most studies have considered software-based approaches to designing a privacy-preserving authentication system. However, these approaches have shown limitations in secure authentication systems. Mainly, they suffer from low verification accuracy due to template transformations (cancelable biometrics), from information leakage (fuzzy commitment schemes), or from non-real-time verification responses due to expensive computations (homomorphic encryption).

    Understanding IoT Security Through the Data Crystal Ball: Where We Are Now and Where We Are Going To Be

    Inspired by the boom of the consumer IoT market, many device manufacturers, new start-up companies and technology behemoths have jumped into the space. Indeed, in a span of less than 5 years, we have experienced the manifestation of an array of solutions for the smart home, smart cities and even smart cars. Unfortunately, the exciting utility and rapid marketization of IoT come at the expense of privacy and security. Online and industry reports, as well as academic work, have revealed a number of attacks on IoT systems, resulting in privacy leakage, property loss and even large-scale availability problems for some of the most influential Internet services (e.g., Netflix, Twitter). To mitigate such threats, a few new solutions have been proposed. However, it is still unclear what impact they can have on the IoT ecosystem. In this work, we aim to perform a comprehensive study of reported attacks and defenses in the realm of IoT, aiming to find out what we know, where the current studies fall short and how to move forward. To this end, we first build a toolkit that searches through a massive amount of online data using semantic analysis to identify over 3000 IoT-related articles (papers, reports and news). Further, by clustering the collected data using machine learning technologies, we are able to compare academic views with findings from industry and other sources, in an attempt to understand the gaps between them, the trend of IoT security risks and new problems that need further attention. We systematize this process by proposing a taxonomy for the IoT ecosystem and organizing IoT security into five problem areas. We use this taxonomy as a beacon to assess each IoT work across a number of properties we define. Our assessment reveals that despite the acknowledged and growing concerns about IoT from both industry and academia, relevant security and privacy problems are far from solved. We discuss how each proposed solution can be applied to a problem area and highlight their strengths, assumptions and constraints. We stress the need for a security framework for IoT vendors and discuss the trend of shifting security liability to external or centralized entities. We also identify open research problems and provide suggestions towards a secure IoT ecosystem.
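The toolkit described above clusters the collected IoT-related articles in order to compare academic views with industry findings. Its concrete pipeline is not given in the abstract, so the snippet below is only a generic sketch of such topic clustering, using TF-IDF features and k-means from scikit-learn; the number of clusters is chosen arbitrarily.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def cluster_articles(texts, n_clusters=5):
        """Group article texts into topical clusters and return their labels."""
        vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
        features = vectorizer.fit_transform(texts)
        kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        labels = kmeans.fit_predict(features)
        return labels, kmeans, vectorizer

Inspecting the top-weighted terms per cluster then gives a rough view of which security topics dominate each source.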

    Learning-Based Ubiquitous Sensing For Solving Real-World Problems

    Recently, as Internet of Things (IoT) technology has become smaller and cheaper, the ubiquitous sensing ability within these devices has become increasingly accessible. Learning methods in computer science have also become more complex accordingly. However, there remains a gap between these learning approaches and many problems in other disciplinary fields. In this dissertation, I investigate four different learning-based studies via ubiquitous sensing for solving real-world problems in areas such as IoT security, athletics, and healthcare. First, I designed an online intrusion detection system for IoT devices via power auditing. To realize the real-time system, I created a lightweight power auditing device. With this device, I developed a distributed Convolutional Neural Network (CNN) for online inference. I demonstrated that the distributed system design is secure, lightweight, accurate, real-time, and scalable. Furthermore, I characterized potential information-stealer attacks via power auditing. To defend against this potential exfiltration attack, a prototype system was built on top of the botnet detection system. In a testbed environment, I defined and deployed an IoT information-stealer attack. Then, I designed a detection classifier. Altogether, the proposed system is able to identify malicious behavior on endpoint IoT devices via power auditing. Next, I enhanced athletic performance via ubiquitous sensing and machine learning techniques. I first designed a metric called LAX-Score to quantify a collegiate lacrosse team’s athletic performance. To derive this metric, I utilized feature selection and weighted regression. The proposed metric was then statistically validated on over 700 games from the last three seasons of NCAA Division I women’s lacrosse. I also examined the biometric sensing dataset obtained from a collegiate team’s athletes over the course of a season and identified the practice features that are most correlated with high-performance games. Experimental results indicate that LAX-Score provides insight into athletic performance quality beyond wins and losses. Finally, I studied data from patients with Parkinson’s Disease. I collected the Inertial Measurement Unit (IMU) sensing data of 30 patients while they conducted pre-defined activities. Using this dataset, I measured tremor events during drawing activities for more convenient tremor screening. Our preliminary analysis demonstrates that IMU sensing data can identify potential tremor events in daily drawing or writing activities. For future work, deep learning-based techniques will be used to extract features of the tremor in real time. Overall, I designed and applied learning-based methods across different fields to solve real-world problems. The results show that combining learning methods with domain knowledge enables the formation of solutions.
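The intrusion-detection work above trains a distributed CNN on power-auditing data. The actual architecture is not described in the abstract; the following is only a minimal, self-contained sketch of a 1D CNN that classifies fixed-length power traces as benign or malicious, with layer sizes chosen for illustration.

    import torch
    import torch.nn as nn

    class PowerTraceCNN(nn.Module):
        """Tiny 1D CNN that labels fixed-length power traces (illustrative only)."""
        def __init__(self, trace_len=512, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            )
            # Two pooling stages shrink the trace by a factor of 16.
            self.classifier = nn.Linear(32 * (trace_len // 16), n_classes)

        def forward(self, x):              # x: (batch, 1, trace_len)
            z = self.features(x)
            return self.classifier(z.flatten(1))

    # Example: one batch of 8 synthetic traces.
    model = PowerTraceCNN()
    logits = model(torch.randn(8, 1, 512))

The dissertation additionally distributes inference between the auditing device and a back end, which this sketch does not model.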

    Smart Sensor Technologies for IoT

    The recent development of wireless networks and devices has led to novel services that will utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions for position estimation should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of Smart Sensor Technologies for IoT.

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches placed on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because the specialized form factors that fit body parts leave inevitably limited space for input and output. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices for interaction can constrain the viability of wearable devices in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the issue of the constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, due to the unique input context in which the touching hand approaches a small and fixed touchscreen within a limited range of angles. Then, finger region-aware systems that recognize the flat and the side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed the distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input on virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with its contributions, design considerations, and the scope of future research work, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
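The touch-based finger identification above builds on the distinctive touch profiles that different fingers leave on a smartwatch touchscreen. The thesis' exact features and classifier are not listed in the abstract, so the following is a hedged sketch: a standard classifier over per-touch contact features such as contact area and ellipse axes, all of which are assumed here for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # One row per touch: [contact_area, major_axis, minor_axis, orientation, x, y]
    # Labels: 0 = index finger, 1 = middle finger, 2 = thumb (illustrative classes).
    def train_finger_identifier(touch_features, finger_labels):
        """Fit a classifier that guesses which finger produced a touch event."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(touch_features, finger_labels)
        return clf

    def identify_finger(clf, touch_feature_row):
        # Classify a single new touch sample.
        return int(clf.predict(np.asarray(touch_feature_row).reshape(1, -1))[0])

Once the active finger is known, each physical key can be overloaded with several characters, which is how the 12- and 16-key keyboards gain expressiveness.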

    An Activity-Centric Approach to Configuration Work in Distributed Interaction

    The widespread introduction of new types of computing devices, such as smartphones, tablet computers, large interactive displays or even wearable devices, has led to setups in which users are interacting with a rich ecology of devices. These new device ecologies have the potential to introduce a whole new set of cross-device and cross-user interactions as well as to support seamless distributed workspaces that facilitate coordination and communication with other users. Because of the distributed nature of this paradigm, there is an intrinsic difficulty and overhead in managing and using these kinds of complex device ecologies, which I refer to as configuration work. It is the effort required to set up, manage, communicate, understand and use information, applications and services that are distributed over all devices in use and people involved. Because current devices and their software are still document- and application-centric, they fail to capture and support the rich activities and contexts in which they are being used. This leaves users without a stable concept for cross-device information management, forcing them to perform a large amount of manual configuration work. In this dissertation, I explore an activity-centric approach to configuration work in distributed interaction. The central goal of this dissertation is to develop and apply concepts and ideas from Activity-Centric Computing to distributed interaction. Using the triangulation approach, I explore these concepts on a conceptual, empirical and technological level and present a framework and use cases for designing activity-centric configurations in multi-device information systems. The dissertation makes two major contributions. First, I introduce the term configuration work as an abstract analytical unit that describes and captures the problems and challenges of distributed interaction. Using both empirical data and related work, I argue that configuration work is composed of curation work, task resumption lag, mobility work, physical handling and articulation work. Using configuration work as a problem description, I operationalize Activity Theory and Activity-Centric Computing to mitigate and reduce configuration work in distributed interaction. By allowing users to interact with computational representations of their real-world activities, creating complex multi-user device ecologies and switching between cross-device information configurations becomes more efficient and more effective, and users’ mental models of a multi-user and multi-device environment are better supported. Using activity configuration as a central concept, I introduce a framework that describes how digital representations of human activity can be distributed, fragmented and used across multiple devices and users. Second, I present a technical infrastructure and four applications that apply the concepts of activity configuration. The infrastructure is a general-purpose platform for the design, development and deployment of distributed activity-centric systems. It simplifies the development of activity-centric systems by wrapping complex distributed computing processes and services in high-level activity system abstractions. Using this infrastructure and conceptual framework, I describe four fully working applications that explore multi-device interactions in two specific domains: office work and hospital work. The systems are evaluated and tested with end users in a number of lab and field studies.
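The framework above describes how digital representations of human activity can be distributed and fragmented across multiple devices and users. The dissertation's actual data model is not reproduced in the abstract, so the sketch below only illustrates the general shape such an activity-configuration abstraction could take; all type and field names are invented for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Resource:
        uri: str    # a document, application state, or service endpoint
        kind: str   # e.g. "document", "app-state", "service"

    @dataclass
    class ActivityConfiguration:
        """A computational representation of a real-world activity whose parts
        can be fragmented across several devices and users (illustrative only)."""
        name: str
        participants: List[str] = field(default_factory=list)
        # Which slice of the activity is currently materialized on which device.
        fragments: Dict[str, List[Resource]] = field(default_factory=dict)

        def assign(self, device_id, resource):
            # Place a resource on a device, creating the fragment if needed.
            self.fragments.setdefault(device_id, []).append(resource)

        def resume_on(self, device_id):
            """Everything a device needs in order to restore this activity."""
            return self.fragments.get(device_id, [])

Switching devices then amounts to asking the configuration what to resume, rather than manually reassembling documents, applications and services, which is the manual configuration work the dissertation seeks to reduce.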