    Collecting Shared Experiences through Lifelogging: Lessons Learned

    The emergence of widespread pervasive sensing, personal recording technologies, and systems for the quantified self is creating an environment in which one can capture fine-grained activity traces. Such traces have wide applicability in domains such as human memory augmentation, behavior change, and healthcare. However, obtaining these traces for research is nontrivial, especially for traces containing photographs of everyday activities. To source data for their own work, the authors created an experimental setup in which they collected detailed traces of a group of researchers over 2.75 days. They share their experiences of this process and present a series of lessons learned for other members of the research community conducting similar studies.

    Household occupancy monitoring using electricity meters

    Occupancy monitoring (i.e., sensing whether a building or room is currently occupied) is required by many building automation systems. An automatic heating system may, for example, use occupancy data to regulate the indoor temperature. Occupancy data is often obtained through dedicated hardware such as passive infrared sensors and magnetic reed switches. In this paper, we derive occupancy information from electric load curves measured by off-the-shelf smart electricity meters. Using the publicly available ECO dataset, we show that supervised machine learning algorithms can extract occupancy information with an accuracy between 83% and 94%. To this end, we use a comprehensive feature set containing 35 features. We found that including features that capture changes in the activation state of appliances provides the best occupancy detection accuracy.
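    The pipeline can be condensed into a short sketch. Below is a minimal illustration in Python: the windowing, feature names, 30 W step threshold, and random-forest classifier are assumptions made here for illustration, not the paper's exact 35-feature setup, but the last feature mirrors the idea of counting appliance activation changes.

    # Minimal sketch of occupancy detection from a smart-meter load curve.
    # Windowing, thresholds, and classifier choice are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(window):
        """Summarize one window of power readings (watts)."""
        diffs = np.diff(window)
        return [
            window.mean(),                 # average consumption
            window.std(),                  # variability
            window.max() - window.min(),   # range
            np.abs(diffs).sum(),           # total power change
            (np.abs(diffs) > 30).sum(),    # large steps: proxy for appliance on/off events
        ]

    # windows: list of np.ndarray power readings; labels: 1 = occupied, 0 = vacant
    def train_occupancy_model(windows, labels):
        X = np.array([extract_features(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, labels)
        return clf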

    A survey on Near Field Communication

    Near Field Communication (NFC) is an emerging short-range wireless communication technology that offers great and varied promise in services such as payment, ticketing, gaming, crowdsourcing, voting, navigation, and many others. NFC technology enables the integration of services from a wide range of applications into one single smartphone. NFC technology has emerged recently, and consequently not much academic data is available yet, although the number of academic research studies carried out in the past two years has already surpassed the total number of the prior works combined. This paper presents the concept of NFC technology in a holistic approach from different perspectives, including hardware improvement and optimization, communication essentials and standards, applications, secure elements, privacy and security, usability analysis, and ecosystem and business issues. Further research opportunities from academic and business points of view are also explored and discussed at the end of each section. This comprehensive survey will be a valuable guide for researchers and academicians, as well as for businesses in the NFC technology and ecosystem.

    Modeling Gaze-Guided Narratives for Outdoor Tourism

    Many outdoor spaces have hidden stories connected with them that can be used to enrich a tourist's experience. These stories are often related to environmental features that are far from the user and far apart from each other. Therefore, they are difficult to explore by locomotion but can be explored visually from a vantage point. Telling a story from a vantage point is challenging, since the system must ensure that the user can identify the relevant features in the environment. Gaze-guided narratives are an interaction concept that helps in such situations by telling a story dynamically, depending on the user's current and previous gaze on a panorama. This chapter suggests a formal modeling approach for gaze-guided narratives, based on narrative mediation trees. The approach is illustrated with an example from the Swiss saga of Wilhelm Tell.
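    To make the modeling idea concrete, a narrative mediation tree can be pictured as story nodes whose branches are chosen by the environmental feature the user currently fixates. The Python sketch below is a minimal illustration under assumed names (StoryNode, tell); the chapter's formal model is richer than this.

    # Illustrative sketch of a gaze-guided narrative as a tree of story nodes.
    from dataclasses import dataclass, field

    @dataclass
    class StoryNode:
        text: str                                     # narration for this step
        children: dict = field(default_factory=dict)  # gazed feature -> next node

    def tell(node, gazed_features):
        """Walk the tree, branching on the feature the user fixates next."""
        while node is not None:
            print(node.text)
            if not node.children:
                break
            feature = next(gazed_features, None)  # e.g. fed by an eye tracker
            node = node.children.get(feature)

    # Example fragment of the Wilhelm Tell saga, told from a vantage point.
    apple = StoryNode("On that meadow, Tell was forced to shoot an apple from his son's head.")
    chapel = StoryNode("The chapel on the shore marks where Tell leapt from Gessler's boat.")
    root = StoryNode("Look across the lake: two places carry the saga.",
                     {"meadow": apple, "chapel": chapel})
    tell(root, iter(["chapel"]))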

    HCI at the boundary of work and life

    The idea behind this Special Issue originated in a workshop on HCI and CSCW research related to work and non-work-life balance, organized by the issue co-editors in conjunction with the ECSCW 2013 conference. Fifteen papers were originally submitted for possible inclusion in this Special Issue, and four were finally accepted for publication after two rounds of rigorous peer review. The four accepted papers explore, in different ways, HCI at the boundary of work and life. In this editorial, we describe the overall theme and rationale for the Special Issue, including an introduction to the topic's relevance and background, and reflect on how the four accepted papers further current research and debate on the topic.

    Computational Analysis of Urban Places Using Mobile Crowdsensing

    In cities, urban places provide a socio-cultural habitat for people to counterbalance the daily grind of urban life, an environment away from home and work. Places provide an environment for people to communicate, share perspectives, and, in the process, form new social connections. Due to the active role of places in the social fabric of city life, it is important to understand how people perceive and experience places. One fundamental construct that relates place and experience is ambiance, i.e., the impressions we ubiquitously form when we go out. Young people are key actors of urban life, especially at night, and as such play an equal role in co-creating and appropriating the urban space. Understanding how places and their youth inhabitants interact at night is a relevant urban issue. Until recently, our ability to assess the visual and perceptual qualities of urban spaces and to study the dynamics surrounding youth experiences in those spaces has been limited, partly due to the lack of quantitative data. However, the growth of computational methods and tools, including sensor-rich mobile devices, social multimedia platforms, and crowdsourcing tools, has opened ways to measure urban perception at scale and to deepen our understanding of nightlife as experienced by young people. In this thesis, as a first contribution, we present the design, implementation, and computational analysis of four mobile crowdsensing studies involving youth populations from various countries to understand and infer phenomena related to urban places and people. We gathered a variety of explicit and implicit crowdsourced data, including mobile sensor data and logs, survey responses, and multimedia content (images and videos), from hundreds of crowdworkers and thousands of users of mobile social networks. Second, we showed how crowdsensed images can be used for the computational characterization and analysis of urban perception in indoor and outdoor places. For both place types, urban perception impressions were elicited for several physical and psychological constructs using online crowdsourcing. Using low-level and deep learning features extracted from images, we automatically inferred crowdsourced judgments of indoor ambiance with a maximum R2 of 0.53 and outdoor perception with a maximum R2 of 0.49. Third, we demonstrated the feasibility of collecting rich contextual data to study the physical mobility, activities, ambiance context, and social patterns of youth nightlife behavior. Fourth, using supervised machine learning techniques, we automatically classified the drinking behavior of young people in a real urban nightlife setting. Using features extracted from mobile sensor data and application logs, we obtained an overall accuracy of 76.7%. While this thesis contributes towards understanding urban perception and youth nightlife patterns in specific contexts, our research also contributes towards the computational understanding of urban places at scale, with high spatial and temporal resolution, using a combination of mobile crowdsensing, social media, machine learning, multimedia analysis, and online crowdsourcing.
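    The perception-inference step can be illustrated compactly. The Python sketch below is an assumed setup (ridge regression over precomputed image features, evaluated by cross-validated R2, the metric the thesis reports); it is not the thesis's exact model or feature extractor.

    # Sketch: infer crowdsourced ambiance ratings from image features.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # X: one row of (low-level or deep) features per place image
    # y: mean crowdsourced rating of one ambiance dimension per image
    def evaluate_ambiance_model(X, y):
        model = Ridge(alpha=1.0)
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        return scores.mean()  # cross-validated R2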

    Designing gaze-based interaction for pervasive public displays

    The last decade witnessed an increasing adoption of public interactive displays. Displays can now be seen in many public areas, such as shopping malls and train stations. There is also a growing trend towards using large public displays, especially in airports, urban areas, universities, and libraries. Meanwhile, advances in eye tracking and visual computing promise straightforward integration of eye tracking on these displays for two purposes: 1) monitoring the user's visual behavior to evaluate different aspects of the display, such as measuring the visual attention of passersby, and 2) interaction, such as allowing users to provide input, retrieve content, or transfer data using their eye movements. Gaze is particularly useful for pervasive public displays. In addition to being natural and intuitive, eye gaze can be detected from a distance, bringing interactivity to displays that are physically unreachable. Gaze reflects the user's intention and visual interests, and its subtle nature makes it well suited for public interactions where social embarrassment and privacy concerns might hinder the experience. On the downside, eye tracking technologies have traditionally been developed for desktop settings, where a user interacts from a stationary position and for a relatively long period of time. Interaction with public displays is fundamentally different and hence poses unique challenges for eye tracking. First, users of public displays are dynamic; they may approach the display from different directions and interact from different positions, or even while moving. This means that gaze-enabled displays should not expect users to be stationary at a specific position, but should instead adapt to the user's ever-changing position in front of the display. Second, users of public displays typically interact for short durations, often only a few seconds. This means that, contrary to desktop settings, public displays cannot afford to require users to perform a time-consuming calibration prior to interaction. In this publications-based dissertation, we first report on a review of the challenges of interactive public displays and discuss the potential of gaze in addressing these challenges. We then showcase the implementation and in-depth evaluation of two applications in which gaze is leveraged to address core problems in today's public displays. The first is an eye-based solution, EyePACT, that tackles the parallax effect often experienced on today's touch-based public displays. We found that EyePACT significantly improves accuracy even with varying degrees of parallax. The second is a novel multimodal system, GTmoPass, that combines gaze and touch input for secure user authentication on public displays. GTmoPass was found to be highly resilient to shoulder surfing, thermal attacks, and smudge attacks, thereby offering a secure solution to an important problem on public displays. The second part of the dissertation explores specific challenges of gaze-based interaction with public displays. First, we address the user positioning problem by means of active eye tracking. More specifically, we built a novel prototype, EyeScout, that dynamically moves the eye tracker based on the user's position without augmenting the user. This, in turn, allowed us to study and understand gaze-based interaction with public displays while walking and when approaching the display from different positions.
An evaluation revealed that EyeScout is well perceived by users and improves the time needed to initiate gaze interaction by 62% compared to the state of the art. Second, we propose a system, Read2Calibrate, for calibrating eye trackers implicitly while users read text on displays. We found that although text-based calibration is less accurate than traditional methods, it integrates smoothly into reading and is thereby better suited to public displays. Finally, through our prototype system, EyeVote, we show how to allow users to select textual options on public displays via gaze without calibration. In a field deployment of EyeVote, we studied the trade-off between accuracy and selection speed when using calibration-free selection techniques. We found that users of public displays value faster interactions over accurate ones and are willing to correct system errors in case of inaccuracies. We conclude by discussing the implications of our findings for the design of gaze-based interaction for public displays, and how our work can be adapted to domains beyond public displays, such as handheld mobile devices.
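    One established route to calibration-free gaze selection on public displays is pursuit-based matching: each on-screen option moves along its own trajectory, and raw, uncalibrated gaze samples are correlated against every trajectory. The Python sketch below illustrates that general technique under assumed names and thresholds; it is not claimed to be EyeVote's exact implementation.

    # Sketch: pursuit-style calibration-free selection.
    import numpy as np

    def pearson(a, b):
        return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

    def select_option(gaze_xy, trajectories, threshold=0.8):
        """gaze_xy: (N, 2) raw gaze samples; trajectories: {option: (N, 2) path}.
        Returns the option whose motion best matches the gaze, or None."""
        best, best_r = None, threshold
        for option, path in trajectories.items():
            # correlate horizontal and vertical components, then average
            r = (pearson(gaze_xy[:, 0], path[:, 0]) +
                 pearson(gaze_xy[:, 1], path[:, 1])) / 2
            if r > best_r:
                best, best_r = option, r
        return best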

    Device-Free Localization for Human Activity Monitoring

    Over the past few decades, human activity monitoring has attracted considerable research attention due to the growing demand for human-centric applications in healthcare and assisted living. For instance, human activity monitoring can be adopted in smart building systems to improve building management as well as quality of life, especially for older people facing health deterioration due to aging, without neglecting important aspects such as safety and energy consumption. Existing human monitoring technologies require additional sensors, such as GPS receivers, PIR sensors, and video cameras, which incur cost and have several drawbacks. Various solutions, either device-assisted or device-free, use other technologies for human activity monitoring in a smartly controlled environment. A radio frequency (RF)-based approach to device-free indoor localization, known as device-free localization (DFL), has attracted considerable research effort in recent years due to its simplicity, low cost, and compatibility with existing hardware equipped with an RF interface. This chapter introduces the potential of RF signals, commonly adopted for wireless communications, as sensing tools for DFL systems in human activity monitoring. DFL is based on the concept of radio irregularity, whereby a human presence in a wireless communication field may interfere with and change the wireless channel characteristics.
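    The core DFL intuition fits in a few lines: a person moving through a radio link perturbs its received signal strength (RSSI), so a rise in short-window RSSI variance relative to an empty-room baseline suggests presence. In the Python sketch below, the window size and threshold factor are illustrative assumptions.

    # Sketch: device-free presence detection from RSSI variance on one link.
    import numpy as np

    def detect_presence(rssi, baseline_std, window=50, k=3.0):
        """rssi: 1-D array of RSSI samples (dBm) from one wireless link.
        Returns a boolean array: True where the link looks disturbed."""
        flags = np.zeros(len(rssi), dtype=bool)
        for i in range(window, len(rssi)):
            if np.std(rssi[i - window:i]) > k * baseline_std:
                flags[i] = True
        return flags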