29 research outputs found

    Placemaking in the Digital Media Era

    In recent years, placemaking has emerged as a cooperative process for improving urban environments. The design of physical space is an essential aspect of enabling changes in these environments. However, as people and communities can now interact with places locally and globally in real time, placemaking processes can be influenced and enabled by digital media. This paper argues that placemaking initiatives need to consider the new embedded reality of physical and digital…

    Application diversity in open display networks

    We envision that future public display networks will be more interactive and open to applications from third parties, similar to what we already have with smartphones. This paper investigates the application landscape for interactive public displays, aiming to understand the design and usage space for this type of application. In particular, we explore people's perceptions and expectations regarding the diversity of applications that may emerge in future application ecosystems for public displays. We devised a research methodology anchored in the rich and diverse range of applications currently available in the mobile application market: we took a set of 75 mobile applications from the Google Play store and asked 72 participants about their relevance for public displays. The results showed that people had a clear preference for applications that disseminate content, and that these preferences are affected by the type of location where the displays are deployed. These insights improve our understanding of the variables that may affect diversity in future display application ecosystems and inform the development of potential app stores in this context. Fundação para a Ciência e a Tecnologia (FCT)

    Smart citizen sentiment dashboard: A case study into media architectural interfaces

    In this paper we introduce the notion of media architectural interfaces (MAIs), which describe the relation between users engaging with dynamic content on media façades through tangible artifacts at street level. First, we outline existing research on public displays, urban screens and media façades; second, we summarize related work exploring mediated urban interactions in connection with MAIs. We report on the technical setup of a field study in which we deployed a novel tangible user interface (TUI), called the Smart Citizen Sentiment Dashboard (SCSD). This device gives citizens the opportunity to express their mood about local urban challenges. The input from this TUI is then instantly displayed on a very large (3,700 m²) media façade. The installation ran for three weeks during a media arts festival in São Paulo, Brazil. During this deployment period, we were able to gather data to help us understand the relationship between passers-by, participants, the TUI and the media façade. As a result we identified emergent behavior in the immediate space around the TUI and in the wider urban space. The contribution of this paper is in highlighting challenges in the design and deployment of large-scale media architectural interfaces.

    GazeCast: Using Mobile Devices to Allow Gaze-based Interaction on Public Displays

    Gaze is promising for natural and spontaneous interaction with public displays, but current gaze-enabled displays require movement-hindering stationary eye trackers or cumbersome head-mounted eye trackers. We propose and evaluate GazeCast, a novel system that leverages users' handheld mobile devices to allow gaze-based interaction with surrounding displays. In a user study (N = 20), we compared GazeCast to a standard webcam for gaze-based interaction using Pursuits. We found that while selection using GazeCast requires more time and physical demand, participants value GazeCast's high accuracy and flexible positioning. We conclude by discussing how mobile computing can facilitate the adoption of gaze interaction with pervasive displays.
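The study above compares GazeCast against a webcam using Pursuits, a technique that selects a target by correlating the user's gaze trajectory with the trajectories of moving on-screen targets. A minimal sketch of that correlation step, assuming a fixed sampling window; the threshold value and function names are illustrative, not the authors' implementation:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D sample arrays."""
    a = np.array(a, dtype=float)  # copy so callers' arrays are untouched
    b = np.array(b, dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def pursuits_select(gaze_xy, targets_xy, threshold=0.8):
    """Return the index of the moving target whose trajectory best
    correlates with the gaze trajectory over the same time window,
    or None if no correlation exceeds the threshold.

    gaze_xy: (n, 2) array of gaze samples.
    targets_xy: list of (n, 2) arrays of target positions.
    """
    best_idx, best_corr = None, threshold
    for i, target in enumerate(targets_xy):
        # Correlate x and y components separately, then average.
        c = (pearson(gaze_xy[:, 0], target[:, 0]) +
             pearson(gaze_xy[:, 1], target[:, 1])) / 2.0
        if c > best_corr:
            best_idx, best_corr = i, c
    return best_idx
```

Because selection depends only on relative motion correlation, not on absolute gaze position, this style of matching needs no per-user calibration.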

    Evaluating a public display installation with game and video to raise awareness of Attention Deficit Hyperactivity Disorder

    Networked Urban Screens offer new possibilities for public health education and awareness. An information video about Attention Deficit Hyperactivity Disorder (ADHD) was combined with a custom browser-based video game and successfully deployed on an existing research platform, Screens in the Wild (SitW). The SitW platform consists of 46-inch touchscreen or interactive displays, a camera, a microphone and a speaker, deployed at four urban locations in England. Details of the platform and the software implementation of the multimedia content are presented. The game was based on a psychometric continuous performance test. In the gamified version of the test, players receive a score for correctly selected target stimuli, with points awarded in proportion to reaction time and penalties for missed or incorrect selections. High scores are shared between locations. Embedded questions probed self-awareness of 'attention span' in relation to playing the game, awareness of ADHD and Adult ADHD, and the increase in knowledge from the video. Results are presented on the level of public engagement with the game and video, deduced from play statistics, answers to the questions and scores obtained across the screen locations. Awareness of Adult ADHD specifically was similar to that of ADHD in general, and knowledge increased overall for 93% of video viewers. Furthermore, ratings of knowledge of Adult ADHD correlated positively with those of ADHD in general and with knowledge gain. Average scores varied amongst the sites, but there was no significant correlation of question ratings with score. The challenge of interpreting user results from unsupervised platforms is discussed.
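The scoring rule described above (points in proportion to reaction time for correct selections, penalties for misses and incorrect selections) can be sketched as follows; all constants, names and the response window are hypothetical, not the deployed game's values:

```python
def score_event(event, reaction_ms=None, max_points=100,
                window_ms=1000, penalty=50):
    """Illustrative scoring rule for a gamified continuous
    performance test: correct hits earn points in inverse
    proportion to reaction time; misses and false alarms
    are penalised.  All constants are assumptions."""
    if event == "hit":
        # Faster reactions keep a larger fraction of the points.
        fraction = max(0.0, 1.0 - reaction_ms / window_ms)
        return round(max_points * fraction)
    if event in ("miss", "false_alarm"):
        return -penalty
    return 0
```

A running total of these per-event scores would yield the high scores shared between screen locations.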

    Designing gaze-based interaction for pervasive public displays

    The last decade witnessed an increasing adoption of interactive public displays. Displays can now be seen in many public areas, such as shopping malls and train stations. There is also a growing trend towards using large public displays, especially in airports, urban areas, universities and libraries. Meanwhile, advances in eye tracking and visual computing promise straightforward integration of eye tracking on these displays for two purposes: 1) monitoring the user's visual behavior to evaluate different aspects of the display, such as measuring the visual attention of passersby, and 2) interaction, such as allowing users to provide input, retrieve content, or transfer data using their eye movements. Gaze is particularly useful for pervasive public displays. In addition to being natural and intuitive, eye gaze can be detected from a distance, bringing interactivity to displays that are physically unreachable. Gaze reflects the user's intention and visual interests, and its subtle nature makes it well suited for public interactions where social embarrassment and privacy concerns might hinder the experience. On the downside, eye tracking technologies have traditionally been developed for desktop settings, where a user interacts from a stationary position and for a relatively long period of time. Interaction with public displays is fundamentally different and hence poses unique challenges when employing eye tracking. First, users of public displays are dynamic; they could approach the display from different directions and interact from different positions, or even while moving. This means that gaze-enabled displays should not expect users to be stationary at a specific position, but should instead adapt to users' ever-changing position in front of the display. Second, users of public displays typically interact for short durations, often only a few seconds.
This means that, contrary to desktop settings, public displays cannot afford to require users to perform time-consuming calibration prior to interaction. In this publications-based dissertation, we first report on a review of the challenges of interactive public displays and discuss the potential of gaze in addressing these challenges. We then showcase the implementation and in-depth evaluation of two applications where gaze is leveraged to address core problems in today's public displays. The first is an eye-based solution, EyePACT, that tackles the parallax effect often experienced on today's touch-based public displays. We found that EyePACT significantly improves accuracy even with varying degrees of parallax. The second is a novel multimodal system, GTmoPass, that combines gaze and touch input for secure user authentication on public displays. GTmoPass was found to be highly resilient to shoulder surfing, thermal attacks and smudge attacks, thereby offering a secure solution to an important problem on public displays. The second part of the dissertation explores specific challenges of gaze-based interaction with public displays. First, we address the user-positioning problem by means of active eye tracking. More specifically, we built a novel prototype, EyeScout, that dynamically moves the eye tracker based on the user's position without augmenting the user. This, in turn, allowed us to study and understand gaze-based interaction with public displays while walking, and when approaching the display from different positions. An evaluation revealed that EyeScout is well perceived by users and improves the time needed to initiate gaze interaction by 62% compared to the state of the art. Second, we propose a system, Read2Calibrate, for calibrating eye trackers implicitly while users read text on displays.
We found that although text-based calibration is less accurate than traditional methods, it integrates smoothly with reading and is thereby more suitable for public displays. Finally, through our prototype system, EyeVote, we show how to allow users to select textual options on public displays via gaze without calibration. In a field deployment of EyeVote, we studied the trade-off between accuracy and selection speed when using calibration-free selection techniques. We found that users of public displays value faster interactions over accurate ones, and are willing to correct system errors in case of inaccuracies. We conclude by discussing the implications of our findings for the design of gaze-based interaction for public displays, and how our work can be adapted for domains other than public displays, such as handheld mobile devices.

    Next generation analytics for open pervasive display networks

    Public displays and digital signs are becoming increasingly widely deployed as many spaces move towards becoming highly interactive and augmented environments. Market trends suggest further significant increases in the number of digital signs, and both researchers and commercial entities are working on designing and developing novel uses for this technology. Given the level of investment, it is increasingly important to be able to understand the effectiveness of public displays. Current state-of-the-art analytics technology is limited in the extent to which it addresses the challenges that arise from display deployments becoming open (increasing numbers of stakeholders), networked (viewer engagement across devices and locations) and pervasive (high density of displays and sensing technology leading to potential privacy threats for viewers). In this thesis, we provide the first exploration into achieving next generation display analytics in the context of open pervasive display networks. In particular, we investigated three areas of challenge: analytics data capture, reporting and automated use of analytics data. Drawing on the increasing number of stakeholders, we conducted an extensive review of related work to identify data that can be captured by individual stakeholders of a display network, and highlighted the opportunities for gaining insights by combining datasets owned by different stakeholders. Additionally, we identified the importance of viewer-centric analytics that use traditional display-oriented analytics data combined with viewer mobility patterns to produce entirely new sets of analytics reports. We explored a range of approaches to generating viewer-centric analytics, including the use of mobility models as a way to create 'synthetic analytics', an approach that provides highly detailed analytics whilst preserving viewer privacy.
We created a collection of novel viewer-centric analytics reports providing insights into how viewers experience a large network of pervasive displays, including reports regarding the effectiveness of displays, the visibility of content across the display network, and the visibility of content to viewers. We further identified additional reports specific to those display networks that support the delivery of personalised content to viewers. Additionally, we highlighted the similarities between digital signage and Web analytics, and introduced novel forms of digital signage analytics reports created by leveraging existing Web analytics engines. Whilst the majority of analytics systems focus solely on the capture and reporting of analytics insights, we additionally explored the automated use of analytics data. One of the challenges in open pervasive display networks is accommodating potentially competing content scheduling constraints and requirements that originate from the large number of stakeholders, in addition to contextual changes that may arise from analytics insights. To address these challenges, we designed and developed the first lottery scheduling approach for digital signage, providing a means to accommodate potentially conflicting scheduling constraints and supporting context- and event-based scheduling based on analytics data fed back into the digital sign. In order to evaluate the set of systems and approaches presented in this thesis, we conducted large-scale, long-term trials allowing us to show both the technical feasibility of the systems developed and provide insights into the accuracy and performance of different analytics capture technologies.
Our work provides a set of tools and techniques for next generation digital signage analytics and lays the foundation for more general people-centric analytics that go beyond the domain of digital signs and enable unique analytical insights into, and understanding of, how users interact across the physical and digital world.
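Lottery scheduling, as named above, draws the next content item at random with probability proportional to the number of "tickets" each item holds, so competing stakeholder constraints and analytics feedback can be encoded as ticket counts. A minimal sketch under those assumptions; the ticket-map API is illustrative, not the thesis's actual scheduler:

```python
import random

def lottery_schedule(items, rng=random):
    """Pick the next content item by lottery scheduling.

    items: mapping of item name -> ticket count (positive numbers).
    Each item wins with probability proportional to its tickets;
    analytics feedback could adjust ticket counts between draws.
    """
    total = sum(items.values())
    draw = rng.uniform(0, total)
    running = 0.0
    for name, tickets in items.items():
        running += tickets
        if draw <= running:
            return name
    return name  # guard against floating-point edge cases
```

Over many draws, an item holding three quarters of the tickets is shown roughly three quarters of the time, while every item with at least one ticket is guaranteed a non-zero chance of appearing, which avoids starving low-priority stakeholders.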

    Envisioning social drones in education

    Education is one of the major application fields in social Human-Robot Interaction. Several forms of social robots have been explored to engage and assist students in the classroom environment, from full-bodied humanoid robots to tabletop robot companions, but flying robots have been left unexplored in this context. In this paper, we present seven online remote workshops conducted with 20 participants to investigate Education as an application area in the Human-Drone Interaction domain, focusing particularly on what roles a social drone could fulfill in a classroom, how it would interact with students, teachers and its environment, what it could look like, and how it would specifically differ from other types of social robots used in education. In the workshops we used online collaboration tools, supported by a sketch artist, to help envision a social drone in a classroom. The results revealed several design implications for the roles and capabilities of a social drone, as well as promising research directions for development and design in the novel area of drones in education.

    A Design Exploration of Health-Related Community Displays

    The global population is ageing, leading to shifts in healthcare needs. It is well established that increased physical activity can improve the health and wellbeing of many older adults; however, motivation remains a prime concern. We report findings from a series of focus groups in which we explored the concept of using community displays to promote physical activity in a local neighborhood. In doing so, we contribute both an understanding of the design space for community displays and a discussion of the implications of our work for the broader CSCW community. We conclude that our work demonstrates the potential of community displays for increasing physical activity amongst older adults.