
    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans and based on human models. These interfaces should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Design and application of a multi-modal process tomography system

    This paper presents a design and application study of an integrated multi-modal system designed to support a range of common modalities: electrical resistance, electrical capacitance and ultrasonic tomography. Such a system is intended for use with complex processes that exhibit behaviour changes over time and space, and thus demand equally diverse sensing modalities. A multi-modal process tomography system able to exploit multiple sensor modes must permit the integration of their data, probably centred upon a composite process model. The paper presents an overview of this approach, followed by an overview of the systems engineering and integrated design constraints. These include a range of hardware-oriented challenges: the complexity and specificity of the front-end electronics for each modality; the need for front-end data pre-processing and packing; the need to integrate the data to facilitate data fusion; and finally the features needed to enable successful fusion and interpretation. A range of software aspects is also reviewed: the need to support differing front-end sensors for each modality in a generic fashion; the need to communicate with front-end data pre-processing and packing systems; the need to integrate the data to allow data fusion; and finally the need to enable successful interpretation. The review of the system concepts is illustrated with an application to the study of a complex multi-component process.

    User tracking using a wearable camera

    Abstract—This paper addresses automatic indoor user tracking based on the fusion of WLAN and image sensing. Our motivation is the increasing prevalence of wearable cameras, some of which can also capture WLAN data. We propose a novel tracking method that can be employed using an image-based, a WLAN-based, or a fusion-based approach. The effectiveness of combining the strengths of these two complementary modalities is demonstrated on a very challenging dataset.

    A schema for generic process tomography sensors

    A schema is introduced that aims to facilitate the widespread exploitation of the science of process tomography (PT), which promises a unique multidimensional sensing opportunity. Although PT has been developed to an advanced state, applications have been laboratory or pilot-plant based, configured on an end-to-end basis, and limited typically to the formation of images that attempt to represent process contents. The schema facilitates the fusion of multidimensional internal process state data in terms of a model that yields directly usable process information, either for design model confirmation or for effective plant monitoring or control, here termed a reality visualization model (RVM). A generic view leads to a taxonomy of process types and their respective RVMs. An illustrative example is included and a review of typical sensor system components is given.

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper aims to give a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?

    A phenomenological approach to multisource data integration: Analysing infrared and visible data

    A new method is described for combining multisensory data for remote sensing applications. The approach uses phenomenological models which allow the specification of discriminatory features that are based on intrinsic physical properties of imaged surfaces. Thermal and visual images of scenes are analyzed to estimate surface heat fluxes. Such analysis makes available a discriminatory feature that is closely related to the thermal capacitance of the imaged objects. This feature provides a method for labelling image regions based on physical properties of imaged objects. This approach is different from existing approaches, which use the signal intensities in each channel (or an arbitrary linear or nonlinear combination of signal intensities) as features, which are then classified by a statistical or evidential approach.
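    The contrast with intensity-based classification can be sketched in a toy form. To be clear, this is only an illustration of the shape of the idea: the paper's actual method estimates surface heat fluxes from a physical model, whereas `derived_feature` below is a hypothetical stand-in showing how pixels might be labelled by a quantity computed from both channels rather than by raw intensities.

```python
# Toy sketch of feature-based region labelling (hypothetical feature, not
# the paper's heat-flux model). Each pixel gets a derived quantity computed
# from both the thermal and the visible channel, and regions are labelled
# by thresholding that derived feature instead of the raw intensities.

def derived_feature(thermal, visible, eps=1e-6):
    """Per-pixel feature combining the two channels nonlinearly.
    A stand-in for a physically derived quantity such as thermal capacitance."""
    return [[v / (t + eps) for t, v in zip(t_row, v_row)]
            for t_row, v_row in zip(thermal, visible)]

def label_regions(thermal, visible, threshold=1.0):
    """Assign a binary label per pixel from the derived feature."""
    feat = derived_feature(thermal, visible)
    return [[1 if f > threshold else 0 for f in row] for row in feat]

# 2x2 example images (values are arbitrary illustrative intensities).
thermal = [[0.2, 0.8], [0.5, 0.1]]
visible = [[0.6, 0.4], [0.5, 0.3]]
labels = label_regions(thermal, visible)
```

    The point of the sketch is purely structural: classification operates on a derived, physically motivated feature map, not on the per-channel intensities themselves.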

    Dual-sensor fusion for indoor user localisation

    In this paper we address the automatic identification of indoor locations using a combination of WLAN and image sensing. Our motivation is the increasing prevalence of wearable cameras, some of which can also capture WLAN data. We propose to use image-based and WLAN-based localisation individually and then fuse the results to obtain better overall performance. We demonstrate the effectiveness of our fusion algorithm for localisation to within an 8.9 m² room on data that are very challenging for both WLAN and image-based algorithms. We envisage the potential usefulness of our approach in a range of ambient assisted living applications.
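    One simple way to realise this kind of decision-level fusion is to let each modality score every candidate room and combine the scores with a weighted sum. The sketch below is a minimal illustration under that assumption; the weights, room names, and scores are hypothetical, not the authors' algorithm or data.

```python
# Hypothetical score-level fusion of two localisation modalities.
# Each modality returns a confidence score per candidate room; the fused
# estimate is the argmax of a weighted sum of the two score sets.

def fuse_location_scores(wlan_scores, image_scores, w_wlan=0.5, w_image=0.5):
    """Combine per-room confidence scores from WLAN and image sensing."""
    rooms = set(wlan_scores) | set(image_scores)
    fused = {
        room: w_wlan * wlan_scores.get(room, 0.0)
              + w_image * image_scores.get(room, 0.0)
        for room in rooms
    }
    return max(fused, key=fused.get), fused

# Example: WLAN is ambiguous between two rooms; the image cue breaks the tie.
wlan = {"office": 0.45, "kitchen": 0.40, "hall": 0.15}
image = {"office": 0.20, "kitchen": 0.70, "hall": 0.10}
best, scores = fuse_location_scores(wlan, image)
```

    Because the two modalities fail in different conditions (WLAN in RF-ambiguous areas, images in visually repetitive ones), even this simple weighted combination can outperform either modality alone.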

    Perceptual modalities guiding bat flight in a native habitat

    Flying animals accomplish high-speed navigation through fields of obstacles using a suite of sensory modalities that blend spatial memory with input from vision, tactile sensing, and, in the case of most bats and some other animals, echolocation. Although a good deal of previous research has focused on the role of individual modes of sensing in animal locomotion, our understanding of sensory integration and the interplay among modalities is still meager. To understand how bats integrate sensory input from echolocation, vision, and spatial memory, we conducted an experiment in which bats flying in their natural habitat were challenged over the course of several evening emergences with a novel obstacle placed in their flight path. Our analysis of the reconstructed flight data suggests that vision, echolocation, and spatial memory, together with a possible capacity for predictive navigation, are mutually reinforcing aspects of a composite perceptual system that guides flight. Together with recent developments in robotics, our paper points to the interpretation that while each stream of sensory information plays an important role in bat navigation, it is the emergent effect of combining modalities that enables bats to fly through complex spaces.