44 research outputs found

    Investigating the Perceptibility of Smartphone Notifications and Methods for Context-Aware Data Assessment in Experience Sampling Studies

    Get PDF
    A central task in human-computer interaction is conducting user studies. These provide deeper insight into user behaviour, but also serve to collect labels for annotating data. The traditional method for gathering subjective feedback is the Experience Sampling Method (ESM). By answering questionnaires, participants provide information not only about themselves but also about their environment. Their answers can additionally serve as labels for data collected at the same time. Smartphones have since become the main platform for conducting ESM studies. They are used to issue ESM inquiries in the form of notifications, to store the collected labels, and to assign them to the sensor data gathered in the background. ESM studies aim to collect as much high-quality data as possible. Achieving this goal requires a large number of carefully answered ESM inquiries. Participants, in turn, generally want to receive as few inquiries as possible, so a compromise between inquiry frequency and participant satisfaction must be found. Designing ESM studies poses several challenges. Some relate to the ESM app and its functionality; others concern the delivery of ESM inquiries and their perception by the user. ESM inquiries must be issued in situations that are of interest to the study designer, which requires an accurate detection system integrated into the ESM app. Both the number and frequency of inquiries and the length of the feedback questionnaire should be kept to a minimum. Both are challenges that the ESM app used to run the study must address. 
To ease the creation of ESM applications, it is advisable to rely on a primary development tool. Ideally, such a tool is easy to use and offers access to a wide range of sensors from which contextual information can be derived, for example to issue event-based inquiries. In this dissertation we present ESMAC, the ESM App Configurator. ESMAC provides different inquiry types as well as settings to limit the number of inquiries per day (inquiry limit) or to define an inquiry-free time window between two consecutive inquiries (inter-notification time). It also offers access to a variety of sensor readings and derived information. These values are captured automatically and require no input from the user, which can reduce questionnaire length. To collect information in situations of interest to the study designer, ESMAC offers a selection of event-based inquiries. Event-based inquiries have already been used in various ESM studies, but their usefulness has not yet been examined explicitly. Two factors relevant to several research areas are changes in the user's location and activity. These can be used, for example, to detect a user's interruptibility or to monitor state changes in patients suffering from affective disorders. Using a study designed to capture these two factors as an example, we show that event-based inquiries are useful, especially when the selected event-based inquiries (here: location changes) are related to the data to be collected (here: feedback on the user's mobility and activity). 
Collecting data labels requires not only event-based inquiries but also prompt responses from participants, so that labels can be assigned to the collected data as accurately as possible. This requires participants to notice incoming inquiries in time. Inquiries may go unnoticed because an overly subtle notification modality was chosen or because the ESM inquiries get lost in a crowded notification drawer on the smartphone. The perceptibility of notifications is influenced by various contextual factors, e.g., the position of the smartphone, the current location, or the user's (social) activity. Content properties, such as the perceived importance of a notification, can also have an influence. As a foundation for future research, we examine methods for capturing these influencing factors. First, we present a position-transition correction method that improves detection of the current smartphone position. The method is based on the assumption that every change from one position to the next occurs via holding the device in the hand. Next, we examine several privacy-respecting methods for sensing location. We show how Wi-Fi information and location types can be used to describe a user's whereabouts and to detect location changes without storing the exact location. Based on the location type, we present a method for estimating whether a smartphone user is in company. Finally, we examine smartphone features that may be related to the perceived importance of a notification. Having examined methods for capturing influencing factors, we then consider relationships between the perception of incoming notifications and different notification modalities. 
This analysis takes into account (a) the current position of the smartphone and (b) the current location of the smartphone user and possible location-based activities. We present a study that sheds light on how pleasant and perceptible different notification modalities are, depending on where the user keeps the smartphone. For the current location and location-related activities, we present suitable notification modalities, on which we received feedback through an online survey and a lab study. Finally, we create and evaluate several designs for highlighting important notifications, which include ESM inquiries, by increasing their visibility in the notification drawer. These designs are based on feedback from interview participants as well as on findings from the literature. We present properties of notification designs that participants of an online survey perceived as pleasant and useful, and we also recommend combinations of different design properties. 
The contributions of this dissertation can be summarised as follows:
- Presentation of a tool for creating context-aware ESM apps
- Confirmation of the relevance of event-based inquiries using the example of an ESM study focused on location and activity changes
- Presentation of a position-transition correction mechanism for improving smartphone position detection
- Presentation of two methods for sensing location without disclosing and storing the exact whereabouts
- Presentation of a location-based method for estimating whether a smartphone user is in company or not
- Presentation of four types of importance and of smartphone features related to the perceived importance of notifications
- Recommendations for selecting notification modalities depending on (a) the smartphone position and (b) the current location and possible location-based activities
- Recommendations for design adaptations of smartphone notifications to highlight those of higher importance
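The position-transition correction mentioned above rests on the assumption that every change between storage positions passes through holding the device in the hand. A minimal sketch of such a correction over a sequence of per-window classifier outputs, with hypothetical position labels (the thesis' actual classifier and label set are not given here):

```python
def correct_transitions(predictions, via="hand"):
    """Smooth a sequence of smartphone-position predictions.

    Assumption (from the dissertation): any change from one storage
    position to another passes through holding the device in the hand.
    A predicted direct jump (e.g. 'pocket' -> 'table') without an
    intermediate 'hand' window is treated as a misclassification and
    reverted to the previous position.
    """
    if not predictions:
        return []
    corrected = [predictions[0]]
    for current in predictions[1:]:
        previous = corrected[-1]
        if current != previous and via not in (previous, current):
            # Illegal direct transition: keep the previous position.
            corrected.append(previous)
        else:
            corrected.append(current)
    return corrected

# The spurious 'table' window is reverted because no 'hand' window precedes it.
print(correct_transitions(["pocket", "table", "pocket", "hand", "table"]))
```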

    SmartLED: Smartphone-based covert channels leveraging the notification LED

    Get PDF
    The widespread adoption of smartphones makes them essential in daily routines. Thus, they can be used to create a covert channel without raising suspicion. To avoid detection, networkless communications are preferred. In this paper, we propose SmartLED, a mechanism to build covert channels leveraging a widely available smartphone feature: its notification LED. The secret is encoded through LED blinks using Manchester encoding. SmartLED is assessed in real-world indoor and outdoor scenarios, considering different distances of up to 5 meters. Our results show that the best performance is achieved in dark settings: 34.8 s are needed to exfiltrate a 7-byte password over a distance of 1 m. Remarkably, distance does not have a great impact on effective transmission time, and shorter blinks do not lead to substantially greater transmission errors. This work was supported by MINECO grants TIN2016-79095-C2-2-R (SMOG-DEV) and PID2019-111429RB-C21 (ODIO), P2018/TCS4566 (CYNAMON-CM) funded with European FEDER funds, and CAVTIONS-CM-UC3M funded by UC3M and the Government of Madrid (CAM)
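The encoding step can be illustrated with a short sketch. In Manchester-style coding each bit becomes two LED half-bit slots; the exact slot timing and bit convention used by SmartLED are not specified here, so the mapping below (bit 1 as on/off, bit 0 as off/on) is an assumption:

```python
def manchester_encode(data: bytes):
    """Map bytes to LED on/off half-bit slots (1 = LED lit, 0 = LED dark).

    Assumed convention: bit 1 -> (on, off), bit 0 -> (off, on); each slot
    would last one half-bit period of the blink clock.
    """
    slots = []
    for byte in data:
        for i in range(7, -1, -1):  # most significant bit first
            bit = (byte >> i) & 1
            slots += [1, 0] if bit else [0, 1]
    return slots

# A 7-byte password is 7 * 8 = 56 bits, i.e. 112 half-bit LED slots.
# At the reported 34.8 s per 7-byte password, the effective rate is
# roughly 56 / 34.8, about 1.6 bit/s.
slots = manchester_encode(b"s3cret!")
print(len(slots))  # 112
```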

    Why are smartphones disruptive? An empirical study of smartphone use in real-life contexts

    Get PDF
    Notifications are one of the core functionalities of smartphones. Previous research suggests they can be a major disruption to the professional and private lives of users. This paper presents evidence from a mixed-methods study using first-person wearable video cameras, comprising 200 h of audio-visual first-person and self-confrontation interview footage covering 1130 unique smartphone interactions (N = 37 users), to situate and analyse the disruptiveness of notifications in real-world contexts. We show how smartphone interactions are driven by a complex set of routines and habits that users develop over time. We furthermore observe that while the duration of interactions varies, the intervals between interactions remain largely invariant across different activity and location contexts, and whether users are alone or in the company of others. Importantly, we find that 89% of smartphone interactions are initiated by users, not by notifications. Overall, this suggests that the disruptiveness of smartphones is rooted in learned user behaviours, not in the devices themselves

    Improving User Involvement Through Live Collaborative Creation

    Full text link
    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. 
This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers to entry for live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Smartphones as steady companions: device use in everyday life and the economics of attention

    Get PDF
    This thesis investigates smartphone use in naturally occurring contexts with a dataset comprising 200 hours of audio-visual first-person recordings from wearable cameras and self-confrontation interview video footage (N = 41 users). The situated context in which smartphone use takes place has often been overlooked because of the technical difficulty of simultaneously capturing the context of use, the actual actions of users, and their subjective experience. This research project contributes to filling this gap with a detailed, mixed-methods analysis of over a thousand individual phone engagement behaviours (EB). We observe that (a) the smartphone is a key structuring element in the flow of daily activities; participants report complex strategies for managing engaging with or avoiding their devices. (b) Unexpectedly, we find that the majority of EB (89%) are initiated by users, not devices; users engage with the phone roughly every five minutes regardless of the context they are in. (c) A large portion of EB seems to stem from contextual cues and an unconscious urge to pick up the device, even when there is no clear reason to do so. (d) Participants are surprised about, and often unhappy with, how frequently they mindlessly reach for the phone. Our in-depth analysis unveils several overlapping layers of motivations and triggers driving EB. Monitoring incoming notifications, managing time use, responding to social pressures, actually completing a task with the phone, design factors, unconscious urges, as well as the accessibility of the device and, most importantly, its affordance for distraction all contribute to picking up the phone. This user drive for EB is exploited by providers to feed the attention economy. So far, keeping the smartphone outside the visual field and immediate reach has appeared to be the only efficient strategy to prevent overuse

    Extending head-up displays

    Get PDF
    Drivers consume an increasing amount of information while driving. The information is accessed on the in-car displays but also on personal devices such as the smartphone. Head-up displays are designed for safe uptake of additional visual information while driving, but their benefits are limited by the small display space. This motivates academia and industry to advance the head-up display to the so-called windshield display. A windshield display will provide an extended display space, which largely or entirely covers the driver's visual field through the windshield, as well as 3D and depth perception. Technologically, windshield displays are not yet feasible but, thanks to steady advancements, they will become available in the future. Extending a small 2D space to a large 3D space requires a rethinking of the entire user interface. The windshield display opens up new opportunities for the type and amount of information, as well as for the way it is presented, ranging up to full augmented reality, but it also raises concerns about driver distraction. The core question of this thesis is whether such an extension is reasonable and desirable, meaning whether there are convincing arguments and use cases which justify the potential risk of distraction. This thesis presents our research on the risks and benefits of the transition from a head-up display to a windshield display. We explore the potentials and examine the safety risks and benefits as well as drivers' satisfaction with various display aspects. We developed a design space that shows how the new size and depth possibilities create new design factors, or interrelate with existing ones. New design opportunities arise and suggest a redesign of existing functionality but also the integration of new content. We researched the information content that could be displayed on a windshield display and asked drivers what content they need and personally desire. We thereby obtained an extensive list of use cases and applications. 
We approached the question of where such content should be displayed, given the large 3D space. To enable the design of safe interfaces, we first examined the driver’s visual perception across the windshield and identified locations that promote information recognition, particularly in the new peripheral area. Simultaneously, we examined the different ways of placing and stabilizing the content. We compared the traditional screen-fixed with world-fixed (augmented reality) and head-stabilized placement methods in terms of user satisfaction, understandability and safety. The gained knowledge about the locations that support information uptake and about the best ways of placing content was merged into a layout concept that subdivides the driver’s view into several information areas. We also incorporated the drivers’ preferences into this design process and compared their personalized layouts with our vision-based layout concept. We assessed the safety of both layout versions and present a revised concept. We close this thesis by reflecting on other trends that may interrelate with the windshield display, namely autonomous driving and augmented reality consumer devices. We look at recent advancements in realizing windshield displays and endeavor a prediction of future developments in this area

    Acoustic-channel attack and defence methods for personal voice assistants

    Get PDF
    Personal Voice Assistants (PVAs) are increasingly used as an interface to digital environments. Voice commands are used to interact with phones, smart homes or cars. In the US alone, the number of smart speakers such as Amazon's Echo and Google Home has grown by 78% to 118.5 million, and 21% of the US population own at least one device. Given the increasing dependency of society on PVAs, their security and privacy have become a major concern of users, manufacturers and policy makers. Consequently, a steep increase in research efforts addressing the security and privacy of PVAs can be observed in recent years. While some security and privacy research applicable to the PVA domain predates their recent increase in popularity, and many new research strands have emerged, research dedicated to PVA security and privacy is still lacking. The most important interaction interface between users and a PVA is the acoustic channel, so security and privacy studies related to the acoustic channel are both desirable and required. The aim of the work presented in this thesis is to improve the understanding of security and privacy issues of PVA usage related to the acoustic channel, to propose principles and solutions for key usage scenarios to mitigate potential security threats, and to present a novel type of dangerous attack which can be launched using only a PVA. The five core contributions of this thesis are: (i) a taxonomy is built for the research domain of PVA security and privacy issues related to the acoustic channel. An extensive overview of the state of the art is provided, describing a comprehensive research map for PVA security and privacy. This taxonomy also shows where the contributions of this thesis lie; (ii) work has emerged aiming to generate adversarial audio inputs which sound harmless to humans but can trick a PVA into recognising harmful commands. 
The majority of work has focused on the attack side, but there is little work on how to defend against this type of attack. A defence method against white-box adversarial commands is proposed and implemented as a prototype. It is shown that a defence Automatic Speech Recognition (ASR) system can work in parallel with the PVA's main one, and adversarial audio input is detected if the difference in the speech decoding results between both ASRs surpasses a threshold. It is demonstrated that an ASR that differs in architecture and/or training data from the PVA's main ASR is usable as a protection ASR; (iii) PVAs continuously monitor conversations, which may be transported to a cloud back end where they are stored, processed and maybe even passed on to other service providers. A user has limited control over this process when a PVA is triggered without the user's intent or when a PVA belongs to someone else. A user is unable to control the recording behaviour of surrounding PVAs, to signal privacy requirements or to track conversation recordings. An acoustic tagging solution is proposed that aims to embed additional information into acoustic signals processed by PVAs. A user employs a tagging device which emits an acoustic signal when PVA activity is assumed. Any active PVA will embed this tag into its recorded audio stream. The tag may signal to a cooperating PVA or back-end system that a user has not given recording consent. The tag may also be used to trace when and where a recording was taken, if necessary. A prototype tagging device based on PocketSphinx is implemented. Using a Google Home Mini as the PVA, it is demonstrated that the device can tag conversations and that the tagging signal can be retrieved from conversations stored in the Google back-end system; (iv) acoustic tagging gives users the capability to signal their permission to the back-end PVA service, and another solution, inspired by Denial of Service (DoS), is also proposed for protecting user privacy. 
Although PVAs are very helpful, they also continuously monitor conversations. When a PVA detects a wake word, the immediately following conversation is recorded and transported to a cloud system for further analysis. An active protection mechanism is proposed: reactive jamming. A Protection Jamming Device (PJD) is employed to observe conversations. Upon detection of a PVA wake word, the PJD emits an acoustic jamming signal. The PJD must detect the wake word faster than the PVA, such that the jamming signal still prevents wake word detection by the PVA. An evaluation of the effectiveness of different jamming signals and of the overlap between wake words and jamming signals is carried out. A jamming success rate of 100% can be achieved with an overlap of at least 60%, at a negligible false-positive rate; (v) the acoustic components (speakers and microphones) of a PVA can potentially be re-purposed to achieve acoustic sensing. This has great security and privacy implications due to the key role of PVAs in digital environments. The first active acoustic side-channel attack is proposed. Speakers are used to emit human-inaudible acoustic signals and the echo is recorded via microphones, turning the acoustic system of a smartphone into a sonar system. The echo signal can be used to profile user interaction with the device. For example, a victim's finger movements can be monitored to steal Android unlock patterns. The number of candidate unlock patterns that an attacker must try to authenticate herself to a Samsung S4 phone can be reduced by up to 70% using this novel, unnoticeable acoustic side-channel
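The defence in contribution (ii) flags input as adversarial when the two ASRs disagree too much. A minimal sketch of that comparison, using a word-level normalised Levenshtein distance as an illustrative stand-in for the thesis' actual difference measure (the threshold value and the example transcripts are assumptions):

```python
def edit_distance(a, b):
    # Classic Levenshtein distance between two token sequences.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def is_adversarial(main_transcript, guard_transcript, threshold=0.5):
    """Detect adversarial audio via disagreement between two ASRs.

    Following the thesis' idea: the PVA's main ASR and a parallel
    protection ASR decode the same audio, and the input is flagged if
    their decoding difference surpasses a threshold.
    """
    a, b = main_transcript.split(), guard_transcript.split()
    dist = edit_distance(a, b) / max(len(a), len(b), 1)
    return dist > threshold

print(is_adversarial("turn off the lights", "turn off the lights"))   # False
print(is_adversarial("unlock the front door", "and lot the fun to"))  # True
```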

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Get PDF
    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. The traditional approaches that have addressed this issue have been plagued with various shortcomings (such as unintuitive sensory translation rules, prohibitive costs and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses aspects of these shortcomings by designing and experimentally evaluating an intuitive approach —called a vibro-audio interface— for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets), where hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Exp 1, 2, and 3) assessed the usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found with traditional hardcopy tactile graphics, the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touch-screen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming are traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) raises several computational challenges relating both to cognitive constraints of the user and technological constraints of the interface. 
To address these issues, two human behavioral experiments were conducted that assessed the influence of panning (Exp 4) and zooming (Exp 5) operations on non-visual learning of graphical material and the related human factors. Results from experiments 4 and 5 indicated that incorporating panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed approach —using a vibro-audio interface— is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials
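The core interaction loop of the vibro-audio interface, position-triggered vibration and speech whenever the finger enters an on-screen element, can be sketched as follows. The element names, bounding boxes, and the vibrate/speak callbacks are hypothetical placeholders, not the interface's real API:

```python
# Hypothetical on-screen elements as (x1, y1, x2, y2) bounding boxes.
ELEMENTS = {
    "bar_2010": (40, 300, 80, 420),
    "x_axis":   (0, 420, 320, 440),
}

def element_at(x, y):
    """Return the name of the on-screen element under the finger, if any."""
    for name, (x1, y1, x2, y2) in ELEMENTS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None

def on_touch_move(x, y, vibrate, speak):
    """Trigger synchronous vibro-audio feedback for the touched element."""
    hit = element_at(x, y)
    if hit is not None:
        vibrate(50)   # short vibration pulse (duration in ms)
        speak(hit)    # speech/auditory cue naming the element
    return hit

print(on_touch_move(60, 350, lambda ms: None, lambda s: None))  # bar_2010
```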

    Introductory Computer Forensics

    Get PDF
    INTERPOL (International Police) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Get PDF
    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people due to low vision, cognitive impairments or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is first necessary to understand the limitations of existing data representations. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate and more. With innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data