301 research outputs found

    Human activity recognition for pervasive interaction

    Get PDF
    PhD Thesis. This thesis addresses the challenge of computing food preparation context in the kitchen. The automatic recognition of fine-grained human activities and food ingredients is realized through pervasive sensing, which we achieve by instrumenting kitchen objects such as knives, spoons, and chopping boards with sensors. Context recognition in the kitchen lies at the heart of a broad range of real-world applications. In particular, activity and food ingredient recognition in the kitchen is an essential component of situated services such as automatic prompting for cognitively impaired kitchen users and digital situated support for healthier eating interventions. Previous work, however, has addressed the activity recognition problem by exploring high-level human activities using wearable sensing (i.e. sensors worn on the body) or technologies that raise privacy concerns (i.e. computer vision). Although such approaches have yielded significant results for a number of activity recognition problems, they are not applicable to our domain of investigation, for which we argue that the technology itself must be genuinely “invisible”, allowing users to perform their activities in a completely natural manner. In this thesis we describe the development of pervasive sensing technologies and algorithms for fine-grained human activity and food ingredient recognition in the kitchen. After reviewing previous work on food and activity recognition, we present three systems that constitute increasingly sophisticated approaches to the challenge of kitchen context recognition. Two of these systems, Slice&Dice and Class-based Threshold Dynamic Time Warping (CBT-DTW), recognize fine-grained food preparation activities. Slice&Dice is a proof-of-concept application, whereas CBT-DTW is a real-time application that also addresses the problem of recognizing unknown activities. The final system, KitchenSense, is a real-time context recognition framework that deals with a more complex set of activities and includes the recognition of food ingredients and events in the kitchen. For each system, we describe the prototyping of pervasive sensing technologies and algorithms, as well as the real-world experiments and empirical evaluations that validate the proposed solutions.
    Funded by the Vietnamese government’s 322 project, executed by the Vietnamese Ministry of Education and Training.
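    The thesis itself is only summarized above, but the core idea behind CBT-DTW - nearest-template matching under dynamic time warping with a per-class rejection threshold for unknown activities - can be sketched as below. This is a minimal illustration rather than the thesis's implementation; the template signals, threshold values, and function names are invented for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(sample, templates, thresholds):
    """Label a sequence by its nearest class template, or as 'unknown'
    when the best distance exceeds that class's threshold."""
    best_label, best_dist = None, np.inf
    for label, template in templates.items():
        d = dtw_distance(sample, template)
        if d < best_dist:
            best_label, best_dist = label, d
    if best_dist > thresholds[best_label]:
        return "unknown", best_dist
    return best_label, best_dist

# Hypothetical acceleration traces from an instrumented knife
templates = {"chopping": np.sin(np.linspace(0, 12, 60)),
             "peeling":  np.linspace(0.0, 1.0, 60)}
thresholds = {"chopping": 5.0, "peeling": 5.0}   # per-class, tuned on held-out data
print(classify(np.sin(np.linspace(0, 12, 60)) + 0.05, templates, thresholds))
```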

    Pervasive interaction across displays

    Get PDF
    Digital screens are becoming more and more ubiquitous. Resolution and size are increasing, and, at the same time, prices for displays are falling. Large display installations are increasingly appearing in public spaces as well as in home and office environments. We expect this trend to continue, making wall-size displays commonplace in the next decade. With this development, all three classes of devices described by Mark Weiser - pads, tabs, and boards - will be mainstream. Pads (tablets), tabs (smartphones), and boards (displays) let us show and interact with data in different situations, because each device class is optimized for a certain use case. Consequently, using multiple devices is becoming common - for example, second screens while watching TV are now the norm. However, the use of multiple devices requires seamless transitions between devices, mechanisms for exchanging data, and the ability to move content from one device to another and to remotely access or control the data. Back in 1998, Michael Beigl and his colleagues proposed dynamically and automatically distributing Web-based content to different output devices in a smart environment. A few years later, Roy Want and his colleagues suggested using interfaces in our environment to interact with our personal data. Because mobile devices or notebooks often provide only a small screen for output and limited input techniques, they proposed using office screens or public displays to create a more enjoyable user experience. They also argued for having physical access to private data. These examples highlight that research in ubiquitous computing explored interaction across pervasive devices, displays, and content early on. Current products support both visions. On one hand, there are devices that can present remote data on a screen in the environment while control resides on the mobile device. On the other hand, there are means to easily present content from mobile devices on remote displays. There are now also many cloud-based products for interacting with data on multiple devices. For example, Dropbox provides access to text documents and images, and Spotify lets you enjoy your favorite music on smartphones, tablets, notebooks, and music systems. Furthermore, people are starting to use mobile devices as remote controls for large screens, smart TVs, or music systems. All these examples show that streaming and connecting different devices ubiquitously are key technologies for smart environments. Here, we present a few commercially available technologies supporting this and provide an outlook on how displays might become a service themselves.

    Latter-Day Constitutionalism: Sexuality, Gender, and Mormons

    Get PDF
    The extensive involvement of the Church of Jesus Christ of Latter-day Saints in the campaign that in 2008 overrode gay marriage in California brought sharp scrutiny to the interaction of Mormon theology and public constitutionalism. This Article explores Latter-day constitutionalism as an important normative phenomenon that illustrates the deep and pervasive interaction among social norms, constitutional rights, and faith-based discourse.

    Low-fi skin vision: A case study in rapid prototyping a sensory substitution system

    Get PDF
    We describe the design process we have used to develop a minimal, twenty-vibration-motor Tactile Vision Sensory Substitution (TVSS) system which enables blindfolded subjects to successfully track and bat a rolling ball and thereby experience 'skin vision'. We have employed a low-fi rapid prototyping approach to build this system and argue that this methodology is particularly effective for building embedded interactive systems. We support this argument in two ways: first, by drawing on theoretical insights from robotics, a discipline that also has to deal with the challenge of building complex embedded systems that interact with their environments; second, by using the development of our TVSS as a case study, describing the series of prototypes that led to our successful design and highlighting what we learnt at each stage.
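    The abstract gives no implementation details, but the central mapping in such a system - downsampling a camera frame to a coarse intensity grid and driving one vibration motor per cell - can be sketched as follows. The 4x5 grid, the intensity-to-vibration scaling, and the set_motor_pwm stub are assumptions made for illustration, not the authors' design.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 4, 5          # 20 cells, one per vibration motor

def frame_to_motor_levels(frame, rows=GRID_ROWS, cols=GRID_COLS):
    """Downsample a grayscale frame (2-D array, 0..255) to one mean
    intensity per motor cell, scaled to a 0..1 vibration level."""
    h, w = frame.shape
    levels = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = frame[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            levels[r, c] = cell.mean() / 255.0
    return levels

def set_motor_pwm(index, level):
    """Stub for the motor driver; on hardware this would set a PWM duty cycle."""
    print(f"motor {index:2d} -> {level:.2f}")

# Hypothetical frame with a bright "ball" in the lower-left region
frame = np.zeros((120, 160))
frame[80:110, 20:50] = 255
for i, level in enumerate(frame_to_motor_levels(frame).ravel()):
    set_motor_pwm(i, level)
```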

    An Introduction to Pervasive Interface Automata

    Get PDF
    Pervasive systems are often context-dependent, component-based systems in which components expose interfaces and offer one or more services. These systems may evolve in unpredictable ways, often through component replacement. We present pervasive interface automata as a formalism for modelling components and their composition. Pervasive interface automata are based on the interface automata of Henzinger et al., with several significant differences. We expand their notion of input and output actions to combinations of input and output actions, callable methods, and method calls. Whereas interface automata have a refinement relation, we argue that the crucial relation in pervasive systems is component replacement, which must include consideration of the services offered by a component and its assumptions about the environment. We illustrate pervasive interface automata and component replacement with a small case study of a pervasive application for sports predictions.
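    The formalism is only named in the abstract; as a rough, non-authoritative sketch of the kind of replacement check it motivates (the new component must offer at least the old one's services and must not assume more of the environment), one might write the following. The class and its fields are invented for illustration and omit the automaton's transition structure entirely.

```python
from dataclasses import dataclass, field

@dataclass
class PervasiveInterface:
    """Toy model of a component interface: the services it offers and the
    environment services it assumes. Transition structure is omitted."""
    name: str
    offers: set = field(default_factory=set)     # services / callable methods exposed
    assumes: set = field(default_factory=set)    # method calls expected of the environment

def can_replace(old: PervasiveInterface, new: PervasiveInterface) -> bool:
    """A candidate may replace a deployed component if it offers at least the
    same services and does not assume more of the environment."""
    return old.offers <= new.offers and new.assumes <= old.assumes

# Hypothetical components from a sports-prediction application
score_feed_v1 = PervasiveInterface("ScoreFeed v1",
                                   offers={"latestScore"},
                                   assumes={"networkAvailable"})
score_feed_v2 = PervasiveInterface("ScoreFeed v2",
                                   offers={"latestScore", "predictOutcome"},
                                   assumes={"networkAvailable"})
print(can_replace(score_feed_v1, score_feed_v2))   # True: replacement is safe
```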

    "All Different" and "All the Same": Some Shared Aspects of American Culture

    Full text link
    This is an unpublished lecture given at The International Days for Peace (Le Mémorial de Caen) in France, 2002. The version made available in Digital Commons was supplied by the author.

    A stroking device for spatially separated couples

    Get PDF
    In this paper we present a device to support the communication of couples in long-distance relationships. While a synchronous exchange of factual information over distance is supported by telephone, e-mail, and chat systems, the transmission of nonverbal aspects of communication is still unsatisfactory. Video calls let us see the partner’s facial expressions in real time; however, to experience a more intimate conversation, physical closeness is needed. Stroking while holding hands is a special and emotional gesture for couples. Hence, we developed a device that enables couples to exchange the physical gesture of stroking regardless of distance and location. The device allows both sending and receiving. A user test supported our concept and provided new insights for future development.
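    The paper does not describe its transport or message format; purely as an illustrative sketch, assuming each device sends timestamped, normalized stroke positions to its partner over UDP and replays incoming positions on a local actuator, the exchange could look like this (the port, message fields, and actuate stub are all hypothetical):

```python
import json
import socket
import time

PORT = 9999  # hypothetical port; the device's real transport is not described

def send_stroke(sock, peer, position):
    """Transmit one stroke sample: a normalized position along the touch strip."""
    msg = json.dumps({"t": time.time(), "pos": position}).encode()
    sock.sendto(msg, (peer, PORT))

def actuate(position):
    """Stub: on hardware this would drive the partner-side stroking actuator."""
    print(f"stroke at {position:.2f}")

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    # For illustration, send a few samples to ourselves and replay them locally.
    for p in (0.1, 0.4, 0.8):
        send_stroke(sock, "127.0.0.1", p)
    for _ in range(3):
        data, _ = sock.recvfrom(1024)
        actuate(json.loads(data)["pos"])
```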

    A FRAMEWORK FOR INTELLIGENT VOICE-ENABLED E-EDUCATION SYSTEMS

    Get PDF
    Although the Internet has received significant attention in recent years, voice is still the most convenient and natural way for humans to communicate with each other or with computers. In voice applications, users may have different needs, which require the system to reason, make decisions, be flexible, and adapt to requests during interaction. These needs have placed new requirements on voice application development, such as the use of advanced models, techniques, and methodologies which take into account the needs of different users and environments. The ability of a system to behave close to human reasoning is often mentioned as one of the major requirements for the development of voice applications. In this paper, we present a framework for an intelligent voice-enabled e-Education application and an adaptation of the framework for the development of a prototype Course Registration and Examination (CourseRegExamOnline) module. This study is a preliminary report of an ongoing e-Education project containing the following modules: enrollment, course registration and examination, enquiries/information, messaging/collaboration, e-Learning, and library. The CourseRegExamOnline module was developed using VoiceXML for the voice user interface (VUI), PHP for the web user interface (WUI), Apache as the middleware, and a MySQL database as the back-end. The system would offer dual access modes using the VUI and WUI. The framework would serve as a reference model for developing voice-based e-Education applications. When fully developed, the e-Education system would meet the needs of students who are normal users as well as those with disabilities, such as visual impairment and repetitive strain injury (RSI), that make reading and writing difficult.
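    The abstract names the stack (VoiceXML, PHP, Apache, MySQL) but no interfaces; the dual-access idea - one registration back end reached through both the voice (VUI) and web (WUI) front ends - is sketched below in Python purely for illustration. The function names, course codes, and in-memory store are invented; the actual module is implemented in PHP and VoiceXML against MySQL.

```python
# A hedged sketch of the dual-access pattern: one back-end operation reused by
# both front ends. In the real system the back end is PHP over MySQL and the
# voice dialog is served as VoiceXML; everything below is illustrative only.
REGISTRATIONS = {}   # in-memory stand-in for the MySQL back end

def register_course(student_id: str, course_code: str) -> str:
    """Shared business logic: record a course registration."""
    REGISTRATIONS.setdefault(student_id, set()).add(course_code)
    return f"{student_id} is now registered for {course_code}"

def vui_handler(student_id: str, spoken_digits: str) -> str:
    """Voice path: a VoiceXML dialog would collect the course number as speech
    or DTMF digits; the confirmation string would be read back via TTS."""
    return register_course(student_id, "CSC" + spoken_digits)

def wui_handler(form_data: dict) -> str:
    """Web path: the same operation reached from an HTML form."""
    return register_course(form_data["student_id"], form_data["course_code"])

print(vui_handler("S1001", "101"))
print(wui_handler({"student_id": "S1001", "course_code": "CSC205"}))
```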