
    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interaction via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations: users can bring multiple devices together in mid-air, with 2-5 armatures poseable in 7 DoF within the same workspace, to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.

    The "Seen but Unnoticed" Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another (UIST 2021 Vision)

    This UIST Vision argues that "touch" input and interaction remains in its infancy when viewed in the context of the seen-but-unnoticed vocabulary of natural human behaviors, activities, and environments that surround direct interaction with displays. Unlike status-quo touch interaction -- a shadowplay of fingers on a single screen -- I argue that our perspective of direct interaction should encompass the full rich context of individual use (whether via touch, sensors, or in combination with other modalities), as well as collaborative activity where people are engaged in local (co-located), remote (tele-present), and hybrid work. We can further view touch through the lens of the "Society of Devices," where each person's activities span many complementary, oft-distinct devices that offer the right task affordance (input modality, screen size, aspect ratio, or simply a distinct surface with dedicated purpose) at the right place and time. While many hints of this vision already exist (see references), I speculate that a comprehensive program of research to systematically inventory, sense, and design interactions around such human behaviors and activities -- one that fully embraces touch as a multi-modal, multi-sensor, multi-user, and multi-device construct -- could revolutionize both individual and collaborative interaction with technology.
    Comment: 5 pages. Non-archival UIST Vision paper accepted and presented at the 34th Annual ACM Symposium on User Interface Software and Technology (UIST 2021) by Ken Hinckley. This is the definitive "published" version, as the Association for Computing Machinery (ACM) does not archive UIST Vision papers.

    How to capitalise on mobility, proximity and motion analytics to support formal and informal education?

    © 2017, CEUR-WS. All rights reserved. Learning Analytics and similar data-intensive approaches aimed at understanding and/or supporting learning have mostly focused on the analysis of students' data automatically captured by personal computers or, more recently, mobile devices. Thus, most student behavioural data are limited to the interactions between students and particular learning applications. However, learning can also occur beyond these interface interactions, for instance while students interact face-to-face with other students or their teachers. Alternatively, some learning tasks may require students to interact with non-digital physical tools, to use the physical space, or to learn in different ways that cannot be mediated by traditional user interfaces (e.g. motor and/or audio learning). The key questions here are: why are we neglecting these kinds of learning activities? How can we provide automated support or feedback to students during these activities? Can we find useful patterns of activity in these physical settings, as we have been doing with computer-mediated settings? This position paper aims to motivate discussion through a series of questions that justify the importance of designing technological innovations for physical learning settings where mobility, proximity, and motion are tracked, just as digital interactions have been so far.

    Desktop-Gluey: Augmenting Desktop Environments with Wearable Devices

    Upcoming consumer-ready head-worn displays (HWDs) can play a central role in unifying the interaction experience in distributed display environments (DDEs). We recently implemented Gluey, a HWD system that 'glues' together the input mechanisms across a display ecosystem to facilitate content migration and seamless interaction across multiple, co-located devices. Gluey can minimize device-switching costs, opening new possibilities and scenarios for multi-device interaction. In this paper, we propose Desktop-Gluey, a system to augment situated desktop environments, allowing users to extend the physical displays in their environment, organize information in spatial layouts, and 'carry' desktop content with them. We extend this metaphor beyond the desktop to provide 'anywhere and anytime' support for mobile and collaborative interactions.

    EagleSense: tracking people and devices in interactive spaces using real-time top-view depth-sensing

    Real-time tracking of people's location, orientation and activities is increasingly important for designing novel ubiquitous computing applications. Top-view camera-based tracking avoids occlusion when tracking people while collaborating, but often requires complex tracking systems and advanced computer vision algorithms. To facilitate the prototyping of ubiquitous computing applications for interactive spaces, we developed EagleSense, a real-time human posture and activity recognition system with a single top-view depth-sensing camera. We contribute our novel algorithm and processing pipeline, including details for calculating silhouette extremity features and applying gradient tree boosting classifiers for activity recognition optimised for top-view depth sensing. EagleSense provides easy access to the real-time tracking data and includes tools for facilitating the integration into custom applications. We report the results of a technical evaluation with 12 participants and demonstrate the capabilities of EagleSense with application case studies.
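The classification step described in this abstract can be sketched roughly as follows. This is a hedged illustration only: the feature vectors, activity labels, and parameters below are invented stand-ins, not EagleSense's actual feature set or training data.

```python
# Illustrative sketch of gradient-tree-boosting activity classification on
# "extremity"-style features, in the spirit of the EagleSense pipeline.
# All features, labels, and distributions here are made up for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_sample(activity):
    # Toy per-person features from a top-view depth map: topmost body point
    # height, plus vertical distances of two hand extremities below it.
    head_z = rng.normal(1.7, 0.05)
    if activity == "phone":              # one hand raised near the head
        hands = [rng.normal(0.15, 0.05), rng.normal(0.6, 0.1)]
    else:                                # "tablet": both hands held low
        hands = [rng.normal(0.55, 0.1), rng.normal(0.6, 0.1)]
    return [head_z] + hands

X = [make_sample(a) for a in ["phone", "tablet"] * 100]
y = ["phone", "tablet"] * 100

# Gradient tree boosting, as named in the abstract (sklearn's implementation).
clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
print(clf.predict([make_sample("phone")])[0])
```

On such cleanly separable toy features the classifier fits easily; the real system's value lies in the depth-sensing pipeline that produces robust features in the first place.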

    Bonjour! Greeting Gestures for Collocated Interaction with Wearables

    Wearable devices such as smartwatches (SW) and head-worn displays (HWD) are gaining popularity. To improve the collocated capabilities of wearables, we need to facilitate collocated interaction in a socially acceptable manner. In this paper we propose to explore widely used greeting gestures, such as handshakes or head gestures, to perform collocated interactions with wearables. These include pairing devices or exchanging information. We analyze the properties of greetings and how they can map to different levels of wearable pairing (family, friend, work, stranger). This paper also suggests how these gestures could be detected with SWs and HWDs.
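The gesture-to-pairing-level mapping the abstract describes could be sketched as a simple lookup. The specific gesture names and the permissions attached to each level are hypothetical illustrations; only the four levels (family, friend, work, stranger) come from the abstract.

```python
# Hypothetical mapping from detected greeting gestures to pairing levels.
# Gesture names and level semantics are illustrative, not from the paper.
PAIRING_LEVELS = {
    "hug": "family",        # intimate greeting: broadest sharing
    "fist_bump": "friend",  # casual greeting: social sharing
    "handshake": "work",    # formal greeting: professional exchange
    "head_nod": "stranger", # minimal greeting: public info only
}

def pairing_level(gesture: str) -> str:
    # Unrecognized gestures fall back to the most restrictive level.
    return PAIRING_LEVELS.get(gesture, "stranger")

print(pairing_level("handshake"))  # → work
```

A deployed system would of course derive the gesture label from wearable sensor data (e.g. smartwatch IMU traces) rather than take it as a string.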

    Interaction techniques for mobile collocation

    Research on mobile collocated interactions has been exploring situations where collocated users engage in collaborative activities using their personal mobile devices (e.g., smartphones and tablets), thus going from personal/individual toward shared/multiuser experiences and interactions. The proliferation of ever-smaller computers that can be worn on our wrists (e.g., Apple Watch) and other parts of the body (e.g., Google Glass) has expanded the possibilities and increased the complexity of interaction in what we term "mobile collocated" situations. The focus of this workshop is to bring together a community of researchers, designers and practitioners to explore novel interaction techniques for mobile collocated interactions.

    Memristor models for machine learning

    In the quest for alternatives to traditional CMOS, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work, we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov's model and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that, although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
    Comment: 4 figures, no tables. Submitted to Neural Computation.
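The kind of volatile dynamics the abstract describes can be sketched with Strukov's linear drift model plus an added first-order decay term standing in for volatility. This is a minimal illustration under assumed parameters: the decay form and all numerical values below are invented for demonstration and are not the paper's calibrated models.

```python
# Euler simulation of a memristor: Strukov's linear drift model
#   dx/dt = (mu_v * R_on / D^2) * i(t),  R(x) = R_on*x + R_off*(1 - x)
# extended with an assumed first-order decay term -x/tau to model volatility.
import math

R_on, R_off = 100.0, 16e3   # on/off resistance (ohm), typical textbook values
D = 10e-9                   # device thickness (m)
mu_v = 1e-14                # dopant mobility (m^2 s^-1 V^-1)
tau = 0.5                   # volatility time constant (s), assumed
dt = 1e-4                   # Euler step (s)

x = 0.1                     # normalized doped-region width, clipped to [0, 1]
for step in range(20000):   # simulate 2 s of a 1 V, 1 Hz sinusoidal drive
    t = step * dt
    v = math.sin(2 * math.pi * t)
    R = R_on * x + R_off * (1 - x)            # state-dependent resistance
    i = v / R
    dxdt = mu_v * R_on / D**2 * i - x / tau   # ionic drift + volatile decay
    x = min(max(x + dxdt * dt, 0.0), 1.0)

print(round(x, 3))
```

With the decay term the state relaxes toward zero when the drive is removed, which is exactly the fading-memory property that reservoir computing exploits.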

    Emergent behaviors in the Internet of things: The ultimate ultra-large-scale system

    To reach its potential, the Internet of Things (IoT) must break down the silos that limit applications' interoperability and hinder their manageability. Doing so leads to the building of ultra-large-scale systems (ULSS) in several areas, including autonomous vehicles, smart cities, and smart grids. The scope of ULSS is both large and complex. Thus, the authors propose Hierarchical Emergent Behaviors (HEB), a paradigm that builds on the concepts of emergent behavior and hierarchical organization. Rather than explicitly programming all possible decisions in the vast space of ULSS scenarios, HEB relies on the emergent behaviors induced by local rules at each level of the hierarchy. The authors discuss the modifications to classical IoT architectures required by HEB, as well as the new challenges. They also illustrate the HEB concepts in reference to autonomous vehicles. This use case paves the way to the discussion of new lines of research.
    Damian Roca's work was supported by a Doctoral Scholarship provided by Fundación La Caixa. This work has been supported by the Spanish Government (Severo Ochoa grant SEV2015-0493) and by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P).
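The core HEB idea, global behavior emerging from purely local rules, can be sketched with a toy vehicle-platoon simulation: each follower reacts only to the car directly ahead, yet a stable platoon emerges. The rule and all parameters are invented for illustration and are not from the article.

```python
# Toy emergent-behavior demo in the autonomous-vehicle use case: each vehicle
# applies a local spring-damper rule against the car ahead; no vehicle knows
# the global goal, yet the platoon settles to the leader's speed and spacing.
speeds = [20.0, 25.0, 15.0, 30.0, 10.0]          # m/s, leader first
positions = [100.0 - 20.0 * i for i in range(5)]  # initial 20 m gaps
target_gap, k, c, dt = 20.0, 0.5, 1.0, 0.1        # assumed gains and step

for _ in range(1000):                 # 100 s of simulated time
    for i in range(1, 5):             # local rule: look only at vehicle i-1
        gap = positions[i - 1] - positions[i]
        accel = k * (gap - target_gap) + c * (speeds[i - 1] - speeds[i])
        speeds[i] += accel * dt
    for i in range(5):
        positions[i] += speeds[i] * dt

print([round(s, 1) for s in speeds])  # all followers converge to the leader
```

Scaling this idea up, HEB layers such local rules hierarchically so that platoons themselves become the "vehicles" of a higher level, without any globally programmed decision logic.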