48 research outputs found

    Digital lace: a collision of responsive technologies

    Designing with properties such as colour change and light, using electronics and digital control, brings new challenges to art and design, along with a range of new possibilities for aesthetics, tactility and functionality. Heimtextil 2014 (accessed April 2014) promotes emerging materials and technologies as one of four trends, highlighting the increasing demand for unique products that make use of novel material properties and digital making. However, there is still limited insight into the creative potential of these materials, insight that is fundamental to exploiting 'smart' material properties, developing new 'smart' surfaces, and building digital tools that facilitate designing with the colour-change and light-emitting properties specific to textiles. This submission to the Fiber arts category presents new material concepts as Digital Lace: a novel, multifaceted textile presented as an interactive table runner for a digitally manufactured console table. Digital Lace explicitly pools the digital-craft skills base and disparate expertise of printed textile practitioner and thermochromic specialist Sara Robertson (SR) and constructed textile practitioner and light-emitting optical fibre specialist Sarah Taylor (ST). Within the context of 'smart' material development and experimentation, Digital Lace exploits and amalgamates the responsive technologies of dye and fibre with digital control.

    Arfid: A Reconfigurable Fabric of Input Devices for the Internet of Things

    Low-cost, easily deployable, reconfigurable, movable input devices can enable adaptive workflows in commercial, industrial, and home environments. A key limitation of previous reconfigurable control systems is their high cost or maintenance burden (e.g., battery changes or wiring setup). Our poster presents Arfid, a "fabric" for reconfigurable input devices that connects low-cost, battery-free inputs to arbitrarily specified functions in their surroundings via a building-wide network of RFID readers. Users can reassign controllers' functions using a simple web interface.
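    The abstract gives no implementation details, so the following is only a minimal sketch of how a reader-event-to-function dispatcher of this kind might look, assuming a table-based binding that a web interface could rewrite; the tag IDs, function names and the handle_read entry point are invented for illustration and are not taken from the poster.

```python
# Hypothetical sketch of an RFID-read -> function dispatcher, loosely inspired
# by the Arfid concept; none of these identifiers come from the poster itself.

from typing import Callable, Dict

def toggle_lights(room: str) -> None:
    print(f"Toggling lights in {room}")

def call_elevator(floor: int) -> None:
    print(f"Calling elevator to floor {floor}")

# Mapping from battery-free tag IDs to bound actions; in a real deployment a
# web interface would rewrite this table to "reassign" a controller's function.
bindings: Dict[str, Callable[[], None]] = {
    "tag:00A1": lambda: toggle_lights("lab"),
    "tag:00B7": lambda: call_elevator(3),
}

def handle_read(tag_id: str) -> None:
    """Invoked whenever any reader in the building-wide network sees a tag."""
    action = bindings.get(tag_id)
    if action is None:
        print(f"Unbound tag {tag_id}: ignoring")
    else:
        action()

if __name__ == "__main__":
    handle_read("tag:00A1")   # -> Toggling lights in lab
    handle_read("tag:FFFF")   # -> Unbound tag tag:FFFF: ignoring
```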

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants


    Finding Common Ground: A Survey of Capacitive Sensing in Human-Computer Interaction

    For more than two decades, capacitive sensing has played a prominent role in human-computer interaction research. Capacitive sensing has become ubiquitous on mobile, wearable, and stationary devices, enabling fundamentally new interaction techniques on, above, and around them. The research community has also enabled human position estimation and whole-body gestural interaction in instrumented environments. However, the broad field of capacitive sensing research has become fragmented by the different approaches and terminology used across its various domains. This paper strives to unify the field by advocating consistent terminology and proposing a new taxonomy to classify capacitive sensing approaches. Our extensive survey provides an analysis and review of past research and identifies challenges for future work. We aim to create a common understanding within the field of human-computer interaction, for researchers and practitioners alike, and to stimulate and facilitate future research in capacitive sensing.
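    As a concrete illustration of one sensing mode such a taxonomy covers (simple loading-mode touch detection), the sketch below tracks a slowly adapting baseline over a stream of raw capacitance readings and reports a touch when a reading rises well above it; the readings, threshold and function name are invented for illustration and do not come from the survey.

```python
# Minimal sketch of loading-mode capacitive touch detection: track a slowly
# adapting baseline and report a touch when the reading rises well above it.
# All numbers are illustrative; real sensors need per-electrode tuning.

def detect_touches(readings, alpha=0.01, threshold=15.0):
    """Yield (sample_index, is_touched) for a stream of raw capacitance counts."""
    baseline = None
    for i, value in enumerate(readings):
        if baseline is None:
            baseline = value                      # initialise from the first sample
        touched = (value - baseline) > threshold  # a nearby body raises the count
        if not touched:
            # Only adapt the baseline while idle, so slow drift is tracked
            # without "learning away" an actual touch.
            baseline += alpha * (value - baseline)
        yield i, touched

if __name__ == "__main__":
    stream = [100.0] * 20 + [130.0] * 10 + [101.0] * 20   # idle, touch, idle
    for i, touched in detect_touches(stream):
        if touched:
            print(f"touch detected at sample {i}")
```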

    Move, hold and touch: A framework for Tangible gesture interactive systems

    Technology is spreading through our everyday world, and digital interaction beyond the screen, with real objects, lets us take advantage of our natural manipulative and communicative skills. Tangible gesture interaction builds on these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to derive guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines, and we discuss the descriptive, evaluative and generative power of TGIF.
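    The move/hold/touch classification lends itself to a simple data model; the sketch below encodes the three relationships as an enum plus a naive rule that labels an observed gesture from object state. The rule and field names are an illustrative guess, not the framework's formal definition.

```python
# Illustrative encoding of the move / hold / touch relationships between a
# gesture and an object; the decision rule is a simplification, not TGIF itself.

from dataclasses import dataclass
from enum import Enum, auto

class Relationship(Enum):
    MOVE = auto()    # the object itself is displaced or reoriented
    HOLD = auto()    # the object is grasped while the hand gestures
    TOUCH = auto()   # the hand gestures on the object's surface

@dataclass
class GestureObservation:
    object_displaced: bool   # did the object change position or orientation?
    object_grasped: bool     # is the object held in the hand?
    surface_contact: bool    # is the hand touching the object's surface?

def classify(obs: GestureObservation) -> Relationship:
    if obs.object_displaced:
        return Relationship.MOVE
    if obs.object_grasped:
        return Relationship.HOLD
    if obs.surface_contact:
        return Relationship.TOUCH
    raise ValueError("no tangible relationship with the object")

if __name__ == "__main__":
    shake = GestureObservation(object_displaced=True, object_grasped=True,
                               surface_contact=True)
    print(classify(shake))   # Relationship.MOVE
```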

    Seven Years after the Manifesto: Literature Review and Research Directions for Technologies in Animal Computer Interaction

    As technologies diversify and become embedded in everyday life, both the technologies we expose animals to and the new technologies being developed for animals within the field of Animal Computer Interaction (ACI) are increasing. As we approach seven years since the ACI manifesto, which grounded the field within Human-Computer Interaction and Computer Science, this thematic literature review looks at the technologies developed for (non-human) animals. The technologies analysed include tangible and physical, haptic and wearable, olfactory, screen and tracking technologies. The discussion explores what exactly ACI is, while questioning what it means to be animal by considering the impact of, and the loop between, machine and animal interactivity. The findings of this review are expected to form a first grounding foundation of ACI technologies, informing future research in animal computing and suggesting future areas for exploration.

    Designing gaze-based interaction for pervasive public displays

    The last decade has witnessed an increasing adoption of public interactive displays. Displays can now be seen in many public areas, such as shopping malls and train stations, and there is a growing trend towards using large public displays especially in airports, urban areas, universities and libraries. Meanwhile, advances in eye tracking and visual computing promise straightforward integration of eye tracking on these displays for two purposes: 1) monitoring the user's visual behavior to evaluate different aspects of the display, such as measuring the visual attention of passersby, and 2) interaction, such as allowing users to provide input, retrieve content, or transfer data using their eye movements. Gaze is particularly useful for pervasive public displays. In addition to being natural and intuitive, eye gaze can be detected from a distance, bringing interactivity to displays that are physically unreachable. Gaze reflects the user's intention and visual interests, and its subtle nature makes it well suited for public interactions where social embarrassment and privacy concerns might hinder the experience.

    On the downside, eye tracking technologies have traditionally been developed for desktop settings, where a user interacts from a stationary position and for a relatively long period of time. Interaction with public displays is fundamentally different and hence poses unique challenges when employing eye tracking. First, users of public displays are dynamic; they may approach the display from different directions and interact from different positions or even while moving. Gaze-enabled displays should therefore not expect users to be stationary at a specific position, but instead adapt to the users' ever-changing position in front of the display. Second, users of public displays typically interact for short durations, often only a few seconds, so contrary to desktop settings, public displays cannot afford to require users to perform time-consuming calibration prior to interaction.

    In this publications-based dissertation, we first report on a review of the challenges of interactive public displays and discuss the potential of gaze in addressing these challenges. We then showcase the implementation and in-depth evaluation of two applications in which gaze is leveraged to address core problems of today's public displays. The first is an eye-based solution, EyePACT, that tackles the parallax effect often experienced on today's touch-based public displays. We found that EyePACT significantly improves accuracy even with varying degrees of parallax. The second is a novel multimodal system, GTmoPass, that combines gaze and touch input for secure user authentication on public displays. GTmoPass was found to be highly resilient to shoulder surfing, thermal attacks and smudge attacks, thereby offering a secure solution to an important problem on public displays.

    The second part of the dissertation explores specific challenges of gaze-based interaction with public displays. First, we address the user positioning problem by means of active eye tracking. More specifically, we built a novel prototype, EyeScout, that dynamically moves the eye tracker based on the user's position without augmenting the user. This, in turn, allowed us to study and understand gaze-based interaction with public displays while walking and when approaching the display from different positions. An evaluation revealed that EyeScout is well perceived by users and reduces the time needed to initiate gaze interaction by 62% compared to the state of the art. Second, we propose a system, Read2Calibrate, for calibrating eye trackers implicitly while users read text on displays. We found that although text-based calibration is less accurate than traditional methods, it integrates smoothly with reading and is thereby more suitable for public displays. Finally, through our prototype system, EyeVote, we show how to let users select textual options on public displays via gaze without calibration. In a field deployment of EyeVote, we studied the trade-off between accuracy and selection speed when using calibration-free selection techniques. We found that users of public displays value faster interactions over accurate ones and are willing to correct system errors in case of inaccuracies. We conclude by discussing the implications of our findings for the design of gaze-based interaction on public displays, and how our work can be adapted for domains beyond public displays, such as handheld mobile devices.
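    Calibration-free gaze selection on public displays is commonly implemented with smooth-pursuit correlation: each on-screen option moves along its own trajectory, and the system selects the option whose motion correlates best with the raw, uncalibrated gaze signal. The sketch below illustrates that general idea only and is not necessarily EyeVote's actual implementation; the option names, window length and threshold are invented.

```python
# Sketch of pursuit-based, calibration-free gaze selection: correlate the raw
# gaze trace with each moving option's trajectory and pick the best match.
# Illustrative only; trajectories and the threshold are made up.

import numpy as np

def pursuit_correlation(gaze_xy: np.ndarray, target_xy: np.ndarray) -> float:
    """Mean of the x and y Pearson correlations over a time window."""
    cx = np.corrcoef(gaze_xy[:, 0], target_xy[:, 0])[0, 1]
    cy = np.corrcoef(gaze_xy[:, 1], target_xy[:, 1])[0, 1]
    return (cx + cy) / 2.0

def select_option(gaze_xy: np.ndarray, options: dict, threshold: float = 0.8):
    """Return the option whose trajectory best matches the gaze, or None."""
    scores = {name: pursuit_correlation(gaze_xy, traj) for name, traj in options.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 60)                       # one-second window at 60 Hz
    options = {
        "yes": np.stack([np.cos(t), np.sin(t)], axis=1),    # circular motion
        "no":  np.stack([np.cos(-t), np.sin(-t)], axis=1),  # same circle, opposite direction
    }
    # Uncalibrated gaze: offset and scaled, but following the "yes" target.
    rng = np.random.default_rng(0)
    gaze = 0.5 * options["yes"] + np.array([3.0, 1.5]) + rng.normal(0, 0.02, (60, 2))
    print(select_option(gaze, options))                     # expected: "yes"
```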

    Pedestrian Counting Based on Piezoelectric Vibration Sensor

    Pedestrian counting has attracted much interest from the academic and industrial communities because of its widespread application in many real-world scenarios. While many recent studies have focused on computer-vision-based solutions to the problem, the deployment of cameras raises concerns about privacy invasion. This paper proposes a novel indoor pedestrian counting approach based on footstep-induced structural vibration signals captured with piezoelectric sensors. The approach is privacy-preserving because no audio or video data is acquired. Our approach analyzes the space-differential features of the vibration signals caused by pedestrian footsteps and outputs the number of pedestrians. The proposed approach supports multiple pedestrians walking together, whose signals mix, and it imposes no requirement on the number of groups of people walking in the detection area. The experimental results show that the average F1-score of our approach is over 0.98, which is better than state-of-the-art vibration-based methods.
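    The paper's space-differential feature pipeline is not described in the abstract, so the sketch below only illustrates the simpler building block of detecting individual footsteps in a single vibration trace: band-pass filter, take the envelope, count well-separated peaks. The frequency band, thresholds and the synthetic signal are invented and are not the authors' method.

```python
# Rough sketch of footstep detection from a structural vibration trace:
# band-pass filter, rectify into an envelope, then count well-separated peaks.
# Parameters and the synthetic signal are illustrative only.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_footsteps(signal: np.ndarray, fs: float) -> int:
    # Keep the band where footstep-induced floor vibration is assumed to live
    # (10-100 Hz here; the right band depends on the structure and sensor).
    b, a = butter(4, [10 / (fs / 2), 100 / (fs / 2)], btype="band")
    envelope = np.abs(filtfilt(b, a, signal))
    # Footsteps are short bursts; require peaks to stand out from the noise
    # floor and to be at least 0.3 s apart so one impact is not counted twice.
    peaks, _ = find_peaks(envelope, height=3 * np.median(envelope),
                          distance=int(0.3 * fs))
    return len(peaks)

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)
    signal = np.random.default_rng(0).normal(0, 0.05, t.size)
    for step_time in (0.8, 1.5, 2.2, 2.9, 3.6):            # five simulated impacts
        signal += np.exp(-40 * np.abs(t - step_time)) * np.sin(2 * np.pi * 40 * t)
    print(count_footsteps(signal, fs))                     # expected: 5
```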

    Digitizing the chemical senses: possibilities & pitfalls

    Many people are understandably excited by the suggestion that the chemical senses can be digitized, be it to deliver ambient fragrances (e.g., in virtual reality or health-related applications) or to transmit flavour experiences via the internet. However, to date, progress in this area has been surprisingly slow, and the majority of attempts at commercialization have failed, often in the face of consumer ambivalence over the perceived benefits and utility. In this review, with the focus squarely on the domain of Human-Computer Interaction (HCI), we summarize the state of the art in the area. We highlight the key possibilities and pitfalls as far as stimulating the so-called ‘lower’ senses of taste, smell, and the trigeminal system are concerned. Ultimately, we suggest that mixed reality solutions are currently the most plausible as far as delivering (or rather modulating) flavour experiences digitally is concerned. The key problems with digital fragrance delivery relate to attention and attribution: people often fail to detect fragrances when they are concentrating on something else, and even when they detect that their chemical senses have been stimulated, there is always a danger that they attribute their experience (e.g., pleasure) to one of the other senses, which is what we call ‘the fundamental attribution error’. We conclude with an outlook on digitizing the chemical senses and summarize a set of open questions that the HCI community has to address in future explorations of smell and taste as interaction modalities.