
    A framework for cloud-based context-aware information services for citizens in smart cities

    © 2014 Khan et al.; licensee Springer. Background: In the context of smart cities, public participation and citizen science are key ingredients for informed and intelligent planning decisions and policy-making. However, citizens face a practical challenge in formulating coherent information sets from the large volumes of data available to them. These large data volumes materialise because of the increased utilisation of information and communication technologies in urban settings and local authorities' reliance on such technologies to govern urban settlements efficiently. To encourage effective public participation in the urban governance of smart cities, the public needs to be provided with the right contextual information about the characteristics and processes of their urban surroundings, so that they can contribute to the aspects of urban governance that affect them, such as socio-economic activities, quality of life, and citizens' well-being. Cities, on the other hand, face challenges in terms of crowdsourcing with quality data collection and standardisation, service interoperability, and the provisioning of computational and data storage infrastructure. Focus: In this paper, we highlight the issues that give rise to these multi-faceted challenges for citizens and the public administrations of smart cities, identify the artefacts and stakeholders involved at both ends of the spectrum (data/service producers and consumers), and propose a conceptual framework to address these challenges. Based upon this conceptual framework, we present a cloud-based architecture for context-aware citizen services for smart cities and discuss the components of the architecture through a common smart city scenario. A proof-of-concept implementation of the proposed architecture is also presented and evaluated. The results show the effectiveness of the cloud-based infrastructure for the development of a contextual service for citizens.

    Spatio-temporal caricature effects for facial motion

    Caricature effects (i.e., a recognition advantage for slightly caricatured stimuli) have been robustly established for static pictures of faces (e.g., Rhodes et al., 1987; Benson & Perrett, 1994). It has recently been shown that temporal or spatial exaggerations of complex body movements improve recognition of individuals from point-light displays (Hill & Pollick, 2000; Pollick et al., 2001). Here, we investigate whether similar caricature effects can be established for facial movements. We generated spatio-temporal caricatures of facial movements by combining a new algorithm for the linear combination of complex movement sequences (spatio-temporal morphable models; Giese et al., 2002) with a technique for the animation of photo-realistic head models (Blanz & Vetter, 1999). In a first experiment we tested the quality of this linear combination technique. Naturalness ratings were obtained from 7 observers, who rated an average-shaped head model animated with three classes of motion trajectories: 1) original motion capture data, 2) approximations of the trajectories by the linear combination model, and 3) morphs between facial movement sequences of two different individuals. We found that the approximations were perceived as being as natural as the originals. Unexpectedly, the morphs were perceived as even more natural (t(6)=4.6, p<.01) than the original trajectories and their approximations. This might reflect the fact that the morphs tend to average out extreme movements. In a second experiment, 14 observers had to distinguish between characteristic facial movements of two individuals applied to a face with average shape. The movements were presented at three different caricature levels (100, 125, 150). We found a significant caricature effect: the 150 caricatures were recognized better than the non-caricatured patterns (t(13)=2.5, p<.05). This result suggests that spatio-temporal exaggeration improves the recognition of identity from facial movements.

    Facial motion and caricature effect


    Automatic synthesis of sequences of human movements by linear combination of learned example patterns

    We present a method for the synthesis of sequences of realistic-looking human movements from learned example patterns, and apply this technique to the synthesis of dynamic facial expressions. Sequences of facial movements are decomposed into individual movement elements, which are modeled by linear combinations of learned examples. The weights of the linear combinations define an abstract pattern space that permits simple modification and parameterization of the style of the individual movement elements. The elements are defined in a way that is suitable for simple automatic resynthesis of longer sequences from movement elements with different styles. We demonstrate the efficiency of this technique for the animation of a 3D head model and discuss how it can be used to generate spatio-temporally exaggerated sequences of facial expressions for psychophysical experiments on caricature effects.
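    The core idea of this abstract, modeling a movement as a weighted linear combination of learned example trajectories, can be sketched in a few lines of NumPy. The function name and the toy trajectories below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def combine_trajectories(prototypes, weights):
    """Approximate a movement as a weighted linear combination of learned
    prototype trajectories, each an array of shape (n_frames, n_markers).
    For morphs between individuals the weights typically sum to 1."""
    prototypes = np.asarray(prototypes, dtype=float)  # (n_prototypes, n_frames, n_markers)
    weights = np.asarray(weights, dtype=float)        # (n_prototypes,)
    # Contract the prototype axis: sum_i weights[i] * prototypes[i]
    return np.tensordot(weights, prototypes, axes=1)

# Toy usage: morph halfway between two example facial movements.
a = np.zeros((5, 3))  # trajectory of person A (5 frames, 3 marker coordinates)
b = np.ones((5, 3))   # trajectory of person B
morph = combine_trajectories([a, b], [0.5, 0.5])
```

    In this linear pattern space, changing the weights changes the "style" of the movement, which is what makes the later caricature experiments possible.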

    Facial motion can determine facial identity


    Using facial texture manipulation to study facial motion perception

    Manipulated still images of faces have often been used as stimuli in psychophysical research on human perception of faces and facial expressions. In everyday life, however, humans are usually confronted with moving faces. We describe an automated way of performing manipulations on facial video recordings and show how it can be applied to investigate human dynamic face perception.

    Spatio-temporal Caricatures of Facial Motion

    It is well established that there is a recognition advantage for slightly caricatured versions of static pictures of faces (e.g., Rhodes et al., 1987, Cognitive Psychology, 473-497; Benson & Perrett, 1994, Perception, 75-93). Recently, similar caricature effects have been shown using temporal or spatial exaggerations of complex body movements (point-light displays) (Hill & Pollick, 2000, Psychological Science, 223-228; Pollick et al., 2001, Perception, 323-338). Here, we generated spatio-temporal caricatures of facial movements using a motion morphing technique developed by Giese & Poggio (2000, International Journal of Computer Vision, 59-73) to investigate whether identification from facial motion can be improved by caricaturing. The motion caricaturing was accomplished using hierarchical spatio-temporal morphable models (HSTMMs). This technique represents complex motion sequences by linear combinations of learned prototypical movement elements. Facial motion trajectories of 72 reflecting markers were obtained using a commercial 3D motion capture system (VICON). These original trajectories and the morphed or exaggerated versions were applied to photo-realistic head models (Blanz & Vetter, 1999, SIGGRAPH, 187-194) using commercial face animation software (famous3D Pty. Ltd.). In a first experiment, which employed motion data captured from 2D videos, we tested the quality of this linear combination technique. Naturalness ratings were obtained from 7 observers, who rated an average-shaped head model animated with three classes of motion trajectories: 1) original motion capture data, 2) approximations of the trajectories by the linear combination model, and 3) morphs between facial movement sequences of two different individuals. We found that the approximations were perceived as being as natural as the originals. Unexpectedly, the morphs were perceived as even more natural (t(6)=4.6, p<.01) than the original trajectories and their approximations. This might reflect the fact that the morphs tend to average out extreme movements. In a second experiment, 14 observers had to distinguish between characteristic facial movements of two individuals applied to a face with average shape. The movements were presented at three different caricature levels (100, 125, 150). We found a significant caricature effect: the 150 caricatures were recognized better than the non-caricatured patterns (t(13)=2.5, p<.05). This result suggests that spatio-temporal exaggeration improves the recognition of identity from facial movements. We are currently investigating whether this result generalizes to the 3D motion data and to different types of facial motion (e.g., rigid head motion versus non-rigid deformation of the face).
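    The caricaturing operation described in this abstract, exaggerating an individual's deviation from an average motion, can be sketched as a simple extrapolation. The function name and toy arrays below are illustrative assumptions, not the authors' HSTMM implementation:

```python
import numpy as np

def caricature(trajectory, average, level):
    """Spatio-temporal caricature by extrapolation: scale the deviation of an
    individual's motion trajectory from the average motion. A level of 1.0
    reproduces the original; 1.25 and 1.5 correspond to the 125 and 150
    caricature levels in the experiment."""
    trajectory = np.asarray(trajectory, dtype=float)
    average = np.asarray(average, dtype=float)
    return average + level * (trajectory - average)

# Toy usage: a motion whose markers deviate by 2 units from the average
# is exaggerated to deviate by 3 units at caricature level 1.5.
avg = np.zeros((4, 2))           # average motion (4 frames, 2 marker coordinates)
indiv = np.full((4, 2), 2.0)     # an individual's characteristic motion
exaggerated = caricature(indiv, avg, 1.5)
```

    The same formula with a level below 1.0 would produce an anti-caricature, moving the motion toward the average rather than away from it.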

    Spatio-temporal caricature effects for facial motion


    Facial motion and the perception of facial attractiveness
