40 research outputs found

    A framework for cloud-based context-aware information services for citizens in smart cities

    © 2014 Khan et al.; licensee Springer. Background: In the context of smart cities, public participation and citizen science are key ingredients for informed and intelligent planning decisions and policy-making. However, citizens face a practical challenge in formulating coherent information sets from the large volumes of data available to them. These large data volumes materialise because of the increased use of information and communication technologies in urban settings and local authorities’ reliance on such technologies to govern urban settlements efficiently. To encourage effective public participation in the urban governance of smart cities, the public needs to be provided with the right contextual information about the characteristics and processes of their urban surroundings, so that they can contribute to the aspects of urban governance that affect them, such as socio-economic activities, quality of life, and citizens’ well-being. Cities, on the other hand, face challenges in crowdsourcing quality data, standardisation, service interoperability, and the provisioning of computational and data-storage infrastructure. Focus: In this paper, we highlight the issues that give rise to these multi-faceted challenges for citizens and public administrations of smart cities, identify the artefacts and stakeholders involved at both ends of the spectrum (data/service producers and consumers), and propose a conceptual framework to address these challenges. Based upon this conceptual framework, we present a cloud-based architecture for context-aware citizen services for smart cities and discuss its components through a common smart city scenario. A proof-of-concept implementation of the proposed architecture is also presented and evaluated. The results show the effectiveness of the cloud-based infrastructure for developing contextual services for citizens.

    Faces in Motion: Psychophysical Investigation of the Role of Facial Motion and Viewpoint Changes for Human Face Perception and its Implications for the Multi-Dimensional Face Space Framework

    No full text
    Recognizing and identifying faces is an extremely important social skill that humans apply almost effortlessly in everyday life. When looking for a familiar person at a crowded train station, for example, we may first search for people with a particular style of clothing, build, and hairstyle; ultimately, however, it is the face that dispels the last doubts about a person's identity. Automated security systems also exploit this fact by analysing photographs of faces. But can single images of faces really capture all of the identity-specific information that faces provide? Faces are three-dimensional objects that are constantly in motion. When we talk or laugh, our faces move in both rigid and non-rigid ways. Non-rigid facial movements are deformations of the face, e.g. facial expressions; rigid facial movements comprise rotation and translation of the whole head. While it is known that such facial movements support non-verbal communication, it is less clear what role facial motion plays in face recognition. Every person moves his or her face in an entirely individual way. Does the human brain use such individual differences in facial movement to recognize and distinguish faces? This thesis shows, on the one hand, that individual facial motion also plays an important role in human face recognition and, on the other hand, demonstrates how facial motion and viewpoint changes may be represented within a multi-dimensional face space.

    Mirror experiments on possible self-recognition in keas (Nestor notabilis, Gould)

    No full text

    Characteristic motion of faces contributes to the determination of their identity

    No full text
    Faces we encounter in everyday life are constantly in motion. Facial expression is an essential component of non-verbal communication: while we laugh, talk, etc., our face continuously deforms. Can this characteristic motion help us identify faces? To investigate this question, we animated morph sequences of heads from the MPI face database with the facial movements of two different individuals. During the learning phase, participants saw face A animated with motion A and face B animated with motion B. In the test phase, morphs (shape only, without texture) between face A and face B were shown, and participants had to decide whether they saw face A or face B; the morphs were animated either with motion A or with motion B. At all morph levels (10 steps), the characteristic motion shifted perception towards the person whose movements had animated the heads. The effect was especially clear at the 50% morph level, where the shape information is highly ambiguous: there, participants responded "face A" in 80% of trials when the morph was animated with motion A, but in only 40% of trials when the same morph was animated with motion B. The data demonstrate that characteristic facial motion is used in determining identity, and that the importance of facial movement grows when shape information is ambiguous. The possibility of independently manipulating shape, texture, and motion information in faces with modern software technology raises hope of coming somewhat closer to an understanding of face recognition.

    Characteristic motion of human face and human form

    No full text
    Do object representations contain information about characteristic motion as well as characteristic form? To address this question we recorded face and body motion of human actors and applied these patterns to computer models. During an incidental learning phase, observers were asked to make trait judgments about these animated faces (experiment 1) or characters (experiment 2). During training, the faces and characters always moved with the motion of one particular actor: for example, face A was always animated with actor A's motion, and face B with actor B's motion. In tests, stimuli were either consistent (face A/actor A) or inconsistent (face A/actor B) relative to training. In addition, we systematically introduced ambiguity into the form of the stimuli (e.g. morphing between face A and face B). Results indicate that as form becomes less informative, observers' responses become biased by the incidentally learned motion patterns. We conclude that information about characteristic motion seems to be part of the representation of these objects. As shape and motion information can be combined independently with this technique, future studies will allow us to quantify the relative importance of characteristic motion versus characteristic form.
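The morphing described above amounts to a convex combination of two vectorized stimuli (face shapes or motion trajectories). A minimal sketch of that idea, with illustrative data and names that are not from the original studies:

```python
import numpy as np

def morph(a: np.ndarray, b: np.ndarray, alpha: float) -> np.ndarray:
    """Convex combination of two vectorized stimuli, e.g. face shapes or
    motion trajectories: alpha=0.0 returns `a`, alpha=1.0 returns `b`."""
    return (1.0 - alpha) * a + alpha * b

# Hypothetical low-dimensional stand-ins for face A and face B.
face_a = np.array([0.0, 0.0, 0.0])
face_b = np.array([1.0, 2.0, 3.0])

# A ten-step morph continuum from A to B, mirroring the experimental design.
levels = [morph(face_a, face_b, t) for t in np.linspace(0.0, 1.0, 10)]
```

At the ambiguous midpoint (`alpha = 0.5`) the shape contains equal parts of both faces, which is where the incidentally learned motion biases identity judgments the most.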

    Spatio-temporal caricature effects for facial motion

    No full text
    Caricature effects (i.e., a recognition advantage for slightly caricatured stimuli) have been robustly established for static pictures of faces (e.g., Rhodes et al., 1987; Benson & Perrett, 1994). It has been shown recently that temporal or spatial exaggerations of complex body movements improve recognition of individuals from point-light displays (Hill & Pollick, 2000; Pollick et al., 2001). Here, we investigate whether similar caricature effects can be established for facial movements. We generated spatio-temporal caricatures of facial movements by combining a new algorithm for the linear combination of complex movement sequences (spatio-temporal morphable models; Giese et al., 2002) with a technique for the animation of photo-realistic head models (Blanz & Vetter, 1999). In a first experiment we tested the quality of this linear combination technique. Naturalness ratings were obtained from 7 observers, who rated an average-shaped head model animated with three classes of motion trajectories: 1) original motion-capture data, 2) approximations of the trajectories by the linear combination model, and 3) morphs between facial movement sequences of two different individuals. We found that the approximations were perceived as being as natural as the originals. Unexpectedly, the morphs were perceived as even more natural (t(6)=4.6, p<.01) than the original trajectories and their approximations; this might reflect the fact that morphs tend to average out extreme movements. In a second experiment, 14 observers had to distinguish between the characteristic facial movements of two individuals applied to a face with average shape. The movements were presented at three caricature levels (100%, 125%, 150%). We found a significant caricature effect: 150% caricatures were recognized better than the non-caricatured patterns (t(13)=2.5, p<.05). This result suggests that spatio-temporal exaggeration improves the recognition of identity from facial movements.

    The caricature effect across viewpoint changes in face perception

    No full text
    The finding that caricatures are recognized more quickly and accurately than veridical faces has been demonstrated only for frontal views of human faces (e.g., Benson & Perrett, 1994). In the present study, we investigated whether there is also a “caricature effect” for three-quarter and profile views. Furthermore, we examined what happens to the caricature advantage when generalizing across view changes. We applied a 3D caricature algorithm to laser-scanned head models. In a sequential matching task, we systematically varied the view of the target faces (left/right profile, left/right three-quarter, full-face), the view of the test faces (left/right profile, left/right three-quarter, full-face), and the face type (anticaricature, veridical, caricature). The caricature effect was replicated for frontal views. We also found a clear caricature advantage for three-quarter and profile views. When generalizing across views, the caricature advantage was present in the majority of view-change conditions; in a few conditions, there was an anticaricature advantage.

    The relative contribution of facial form and facial motion to the perception of identity

    No full text
    Faces are dynamic objects that continuously move as we talk or laugh. Such facial motion can facilitate communication and can also carry information about gender, age, and emotion. However, relatively little is known about how facial motion and facial form interact during the processing of facial identity (e.g. Hill & Johnston, 2001; Lander & Bruce, 2000). By combining novel computer animation techniques with psychophysical methods, we have recently shown that non-rigid facial motion patterns applied to previously unfamiliar faces can bias the perception of identity (Knappmeyer et al., 2001). Here we further investigate this finding by systematically varying the form cue at training. We enhanced the form cue, e.g. by caricaturing and adding individual skin texture, and reduced the form cue by morphing towards an average face. The results are discussed with respect to current cognitive and neural models of face perception.