17 research outputs found

    Fast head tilt detection for human-computer interaction

    Abstract. Accurate head tilt detection has a large potential to aid people with disabilities in the use of human-computer interfaces and provide universal access to communication software. We show how it can be utilized to tab through links on a web page or control a video game with head motions. It may also be useful as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within the context of a real-time communication interface, because the computational expense they incur is too great. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM, and an inexpensive webcam, using only 55% of the processor cycles.
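    The abstract does not specify the metrics used, but the in-plane component of head tilt (roll) is commonly estimated from the angle of the line connecting the two eye centers. A minimal illustrative sketch, assuming eye coordinates have already been located by a face tracker (the coordinates below are hypothetical):

    ```python
    import math

    def head_tilt_degrees(left_eye, right_eye):
        """Estimate in-plane head tilt (roll) from two eye centers.

        left_eye, right_eye: (x, y) pixel coordinates, with y increasing
        downward as in image coordinates. Returns the tilt in degrees:
        0 for an upright pose, positive when the right eye sits lower.
        """
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        return math.degrees(math.atan2(dy, dx))

    # Eyes level: upright pose, tilt of 0 degrees.
    print(head_tilt_degrees((100, 120), (160, 120)))  # 0.0
    # Right eye 20 px lower than the left: a tilted pose.
    print(round(head_tilt_degrees((100, 120), (160, 140)), 1))  # 18.4
    ```

    This single angle is cheap to compute per frame, which is consistent with the real-time, low-CPU constraint the abstract emphasizes; out-of-plane rotations require additional cues.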

    Extracting Motion Features for Visual Human Activity Representation


    Towards a Fluid Cloud: An Extension of the Cloud into the Local Network

    Part 2: Ph.D. Student Workshop — Management of Future Networking
    Cloud computing offers an attractive platform for providing resources on demand, but currently fails to meet the corresponding latency requirements of a wide range of Internet of Things (IoT) applications. In recent years, efforts have been made to distribute the cloud closer to the user environment, but they have typically been limited to the fixed network infrastructure, as current cloud management algorithms cannot cope with the unpredictable nature of wireless networks. This fixed deployment of clouds often does not suffice because, for some applications, the access network itself already introduces an intolerable delay in response time. We therefore propose to extend the cloud formalism into the (wireless) IoT environment itself, incorporating the infrastructure that is already present. Given the mobile nature of local infrastructure, we refer to this as a fluid extension to the cloud, or more simply as a fluid cloud.

    15 seconds of fame


    Evaluation of Expression Recognition Techniques

    The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for the classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video. In particular, we use Naive Bayes classifiers, and to learn the dependencies among different facial motion features we use Tree-Augmented Naive Bayes (TAN) classifiers. We also investigate a neural network approach. Further, we propose an architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expressions from video sequences. We explore both person-dependent and person-independent recognition of expressions and compare the different methods.
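    As a rough illustration of the Naive Bayes baseline the abstract mentions, the sketch below hand-rolls a Gaussian Naive Bayes classifier over toy "facial motion" feature vectors (the two features and class labels are invented for the example, not taken from the paper):

    ```python
    import math
    from collections import defaultdict

    def train_gnb(X, y):
        """Fit per-class log prior and per-feature mean/variance."""
        by_class = defaultdict(list)
        for xi, yi in zip(X, y):
            by_class[yi].append(xi)
        n = len(X)
        stats = {}
        for c, rows in by_class.items():
            means = [sum(col) / len(rows) for col in zip(*rows)]
            varis = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
            stats[c] = (math.log(len(rows) / n), means, varis)
        return stats

    def predict_gnb(stats, x):
        """Pick the class maximizing log prior + sum of log Gaussian likelihoods."""
        def score(c):
            log_prior, means, varis = stats[c]
            return log_prior + sum(
                -0.5 * math.log(2 * math.pi * v) - (xf - m) ** 2 / (2 * v)
                for xf, m, v in zip(x, means, varis))
        return max(stats, key=score)

    # Toy features: (mouth-corner displacement, brow raise), both normalized.
    X = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
    y = ["smile", "smile", "surprise", "surprise"]
    model = train_gnb(X, y)
    print(predict_gnb(model, (0.85, 0.15)))  # smile
    ```

    The "naive" part is the per-feature independence assumption inside each class; the TAN classifiers the abstract describes relax exactly that assumption by learning a tree of dependencies among the motion features.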