12 research outputs found

    Accuracy Assessment of Low Cost UAV Based City Modelling for Urban Planning

    Get PDF
    This paper presents an Unmanned Aerial Vehicle (UAV) based 3D city modelling approach for managing and planning urban areas. While urban growth is accelerating in many parts of the world, conventional surveying techniques cannot keep pace with the changing environment. For effective planning, high-resolution remote sensing is a tool for producing 3D digital city models. This study aims to design a UAV-based remote sensing workflow over urban terrain. Using the information derived from UAV imagery, highly accurate 3D city models are obtained. The analysis of XYZ coordinates derived from the 3D model produced by UAV photogrammetry yielded results comparable to the terrestrial surveys commonly used for development plans and city maps. The experimental results show the effectiveness of UAV-based 3D city modelling. The assessed accuracy of UAV photogrammetry shows that urban planners can use it as the main data collection tool for boundary mapping, change monitoring and topographic surveying instead of GPS/GNSS surveying.
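
    A minimal sketch of the kind of check-point accuracy assessment described above, assuming the UAV-derived and terrestrial reference coordinates of the same points are available as N x 3 arrays; the array names and values are illustrative, not data from the paper.

# Minimal sketch: per-axis RMSE between UAV-derived and reference coordinates.
# Input format and the example coordinates are assumptions for illustration.
import numpy as np

def rmse_per_axis(uav_xyz: np.ndarray, ref_xyz: np.ndarray) -> np.ndarray:
    """Root-mean-square error in X, Y and Z between two (N, 3) coordinate sets."""
    residuals = uav_xyz - ref_xyz                # per-point differences
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# Made-up check-point coordinates in metres:
uav = np.array([[500012.03, 4100021.11, 58.42],
                [500034.87, 4100098.65, 61.07]])
ref = np.array([[500012.00, 4100021.15, 58.35],
                [500034.92, 4100098.60, 61.15]])
print(rmse_per_axis(uav, ref))                   # -> RMSE in X, Y, Z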

    Can gender categorization influence the perception of animated virtual humans?

    Full text link
    Animations have become increasingly realistic with the evolution of Computer Graphics (CG). In particular, human models and behaviors are represented through animated virtual humans, sometimes with a high level of realism. Gender is a characteristic closely tied to human identification, so virtual humans assigned a specific gender are generally given stereotyped representations through movements, clothes, hair and colors, in order to be understood by users as the designers intended. An important area of study is whether participants' perceptions change depending on how a virtual human is visually presented. Findings in this area can help the industry guide the modeling and animation of virtual humans to deliver the expected impact to the audience. In this paper, we reproduce, through CG, a perceptual study that assesses gender bias in relation to a simulated baby. In the original study, two groups of people watched the same video of a baby reacting to the same stimuli, but one group was told the baby was female and the other group was told the same baby was male, producing different perceptions. The results of our study with virtual babies were similar to the findings with real babies. First, they show that people's emotional responses change depending on the character's gender attribute; in this case the only difference was the baby's name. Our research indicates that merely stating the name of a virtual human can be enough to create a gender perception that affects the participants' emotional responses. Comment: 8 pages, 1 figure, 2 tables

    Faces and hands: modeling and animating anatomical and photorealistic models with regard to the communicative competence of virtual humans

    Get PDF
    In order to be believable, virtual human characters must be able to communicate realistically, in a human-like fashion. This dissertation contributes to improving and automating several aspects of virtual conversations. We have proposed techniques to add non-verbal, speech-related facial expressions to audiovisual speech, such as head nods for emphasis. During conversation, humans experience shades of emotions much more frequently than the strong Ekmanian basic emotions. This prompted us to develop a method that interpolates between facial expressions of emotions to create new ones based on an emotion model. In the area of facial modeling, we have presented a system to generate plausible 3D face models from vague mental images. It makes use of a morphable model of faces and exploits correlations among facial features. The hands also play a major role in human communication. Since the basis for every realistic animation of gestures must be a convincing model of the hand, we devised a physics-based anatomical hand model, in which a hybrid muscle model drives the animations. The model was used to visualize complex hand movements captured using multi-exposure photography.
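
    A minimal sketch of interpolating between the facial expressions of two emotions, assuming each expression is stored as a vector of blend-shape weights; the emotion names, weights and the plain linear blend are illustrative stand-ins for the dissertation's emotion-model-driven interpolation.

# Minimal sketch: linear interpolation between two expressions stored as
# blend-shape weight vectors. Names and weights are hypothetical.
import numpy as np

expressions = {
    "joy":      np.array([0.8, 0.1, 0.0, 0.6]),   # hypothetical blend-shape weights
    "surprise": np.array([0.2, 0.9, 0.7, 0.1]),
}

def blend(emotion_a: str, emotion_b: str, t: float) -> np.ndarray:
    """Interpolate blend-shape weights between two expressions (0 <= t <= 1)."""
    return (1.0 - t) * expressions[emotion_a] + t * expressions[emotion_b]

print(blend("joy", "surprise", 0.3))   # an expression 30% of the way toward surprise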

    A flexible and reusable framework for dialogue and action management in multi-party discourse

    Get PDF
    This thesis describes a model for goal-directed dialogue and activity control in real time for multiple conversation participants, which can be human users or virtual characters in multimodal dialogue systems, along with a framework implementing the model. It addresses two genres: task-oriented systems and interactive narratives. The model is based on a representation of participant behavior on three hierarchical levels: dialogue acts, dialogue games, and activities. Dialogue games make it possible to exploit social conventions and obligations to model the basic structure of dialogues. The interactions can be specified and implemented using recurring elementary building blocks. Expectations about the future behavior of other participants are derived from the state of active dialogue games; this can be useful, e.g., for input disambiguation. The knowledge base of the system is defined in an ontological format and allows individual knowledge and personal traits for the characters. The Conversational Behavior Generation Framework implements the model. It coordinates a set of conversational dialogue engines (CDEs), where each participant is represented by one CDE. The virtual characters can act autonomously, or semi-autonomously follow goals assigned by an external story module (Narrative Mode). The framework allows combining alternative specification methods for the virtual characters' activities (implementation in a general-purpose programming language, by plan operators, or in the specification language Lisa that was developed for the model). The practical viability of the framework was tested and demonstrated through the realization of three systems with different purposes and scope.
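
    A minimal sketch of how expectations about the next dialogue act could be derived from the state of an active dialogue game, e.g. for input disambiguation; the game definition and act names are invented for illustration and do not reflect the thesis's actual building blocks or the Lisa specification language.

# Minimal sketch: an active dialogue game's state yields expected next acts,
# which could be used to re-rank ambiguous user input. All names are hypothetical.
QUESTION_ANSWER_GAME = {
    "opened":   {"next_acts": ["answer", "clarification_request"], "next_state": "answered"},
    "answered": {"next_acts": ["acknowledge"],                     "next_state": "closed"},
}

def expected_acts(game: dict, state: str) -> list[str]:
    """Return the dialogue acts the active game makes likely in this state."""
    return game.get(state, {}).get("next_acts", [])

print(expected_acts(QUESTION_ANSWER_GAME, "opened"))
# -> ['answer', 'clarification_request']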

    Modeling and Animating Virtual Humans for Real-Time Applications

    No full text
    We report on the workflow for the creation of realistic virtual anthropomorphic characters. 3D models of human heads have been reconstructed from real people by following a structured-light approach to 3D reconstruction. We describe how these high-resolution models have been simplified and articulated with blend shape and mesh skinning techniques to ensure real-time animation. The full-body models have been created manually based on photographs. We present a system for capturing whole-body motions, including the fingers, based on an optical motion capture system with 6 DOF rigid bodies and cybergloves. The motion capture data was processed in one system, mapped to a virtual character and visualized in real time. We developed tools and methods for quick post-processing. To demonstrate the viability of our system, we captured a library consisting of more than 90 gestures.
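
    A minimal sketch of blend-shape evaluation as used for real-time facial animation: the displayed mesh is the neutral mesh plus a weighted sum of per-shape vertex offsets. Shapes, weights and array sizes are illustrative; the paper's pipeline additionally uses mesh skinning, which is not shown.

# Minimal sketch: evaluate a blend-shape rig for one frame.
# Array contents are placeholders, not data from the paper.
import numpy as np

def evaluate_blend_shapes(neutral: np.ndarray,
                          deltas: np.ndarray,
                          weights: np.ndarray) -> np.ndarray:
    """neutral: (V, 3) vertices; deltas: (S, V, 3) per-shape offsets; weights: (S,)."""
    return neutral + np.tensordot(weights, deltas, axes=1)

neutral = np.zeros((4, 3))                                    # tiny 4-vertex "mesh"
deltas = np.random.default_rng(0).normal(size=(2, 4, 3)) * 0.01
weights = np.array([0.7, 0.2])                                # e.g. 70% smile, 20% brow raise
print(evaluate_blend_shapes(neutral, deltas, weights))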