1,844 research outputs found

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed, animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate expressions directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes shape as it drips down the face. The other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face yet remains an individual object. Both methods broaden CG techniques and increase the realism of facial expressions. A new method that automatically places bones on facial/head models to speed up the rigging of a human face is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multiple densities, the mean position of the vertices in each group is computed. The time saved with this method is significant. Finally, a novel method produces realistic expressions and animations by transferring an existing expression to a new facial model. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, and each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained
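
    The expression-transfer step described above can be illustrated with a short sketch. The code below is a minimal, hypothetical example, not the dissertation's implementation: it assumes the source and target meshes already share a one-to-one vertex correspondence, computes per-vertex displacement vectors between a neutral and an expressive source mesh, and applies them to the target vertices.

```python
import numpy as np

def transfer_expression(src_neutral, src_expression, tgt_neutral, scale=1.0):
    """Transfer an expression via per-vertex displacement vectors.

    All arguments are (N, 3) arrays of vertex positions; the i-th vertex of
    every mesh is assumed to correspond to the same facial feature (a strong
    simplification of the mapping step described in the abstract).
    """
    displacements = src_expression - src_neutral      # displacement vectors
    return tgt_neutral + scale * displacements        # deformed target mesh

# Toy usage with random "meshes" of 5 vertices.
rng = np.random.default_rng(0)
src_neutral = rng.random((5, 3))
src_smile = src_neutral + rng.normal(0.0, 0.01, (5, 3))
tgt_neutral = rng.random((5, 3))
tgt_smile = transfer_expression(src_neutral, src_smile, tgt_neutral)
print(tgt_smile.shape)  # (5, 3)
```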

    Appearance-design interfaces and tools for computer cinematography: Evaluation and application

    We define appearance design as the creation and editing of scene content, such as lighting and surface materials, in computer graphics. The appearance design process takes a significant amount of time relative to other production tasks and poses difficult artistic challenges. Many user interfaces have been proposed to make appearance design faster, easier, and more expressive, but no formal validation of these interfaces had been published prior to our body of work. With a focus on novice users, we present a series of investigations into the strengths and weaknesses of various appearance design user interfaces. In particular, we develop an experimental methodology for evaluating representative user interface paradigms in the areas of lighting and material design. We conduct three user studies in which subjects perform design tasks under controlled conditions. In these studies, we gain new insights into the effectiveness of each paradigm for novices, measured by objective performance as well as subjective feedback. We also offer observations on the common workflows and capabilities of novice users in these domains. Finally, we use the results of our lighting study to develop a new representation for artistic control of lighting, where light travels along nonlinear paths
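
    The final sentence mentions a lighting representation in which light travels along nonlinear paths. As a purely illustrative sketch, and not the representation developed in this work, a nonlinear path could be parameterized as a quadratic Bezier curve that an artist bends by moving a single control point; all names and coordinates below are invented.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# A "light path" from the light position to a target point, bent by a
# user-placed control point (coordinates chosen only for illustration).
light_pos = (0.0, 3.0, 0.0)
control = (2.0, 2.0, 0.0)   # artist drags this to curve the path
target = (1.0, 0.0, 0.0)

samples = [quadratic_bezier(light_pos, control, target, t)
           for t in np.linspace(0.0, 1.0, 5)]
for point in samples:
    print(point)
```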

    From undesired flaws to esthetic assets: A digital framework enabling artistic explorations of erroneous geometric features of robotically formed molds

    Until recently, digital fabrication research in architecture has aimed to eliminate manufacturing errors. However, a novel notion has recently been established: intentional computational infidelity. Inspired by this notion, we set out to develop means that can transform fabrication errors from an undesired complication into a creative opportunity. We carried out design-experiment-based investigations, which culminated in the construction of a framework enabling fundamental artistic explorations of erroneous geometric features of robotically formed molds. The framework consists of digital processes, assisting in the exploration of mold errors, and physical processes, enabling the inclusion of physical feedback in digital explorations. Complementary elements include an implementation workflow, an enabling digital toolset, and a visual script demonstrating how imprecise artistic explorations can be included within the computational environment. Applying the framework suggests that the exploration of geometrical errors aids the emergence of unprecedented design features that would not have arisen if error elimination were the ultimate design goal. Our conclusion is that welcoming error into the design process can reinstate the role of art, craft, and material agency therein. This can guide the practice and research of architectural computing onto a new territory of esthetic and material innovation.
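
    One way to picture the kind of digital process the framework describes is to quantify where a formed mold deviates from its intended geometry. The sketch below is a hypothetical illustration, not the authors' toolset: the surface function, point counts, and noise level are all invented. It compares a synthetic "scan" of a formed mold against designed surface heights and reports the per-point deviation that a designer could then explore rather than eliminate.

```python
import numpy as np

def designed_height(x, y):
    """Hypothetical target mold surface, here a shallow paraboloid."""
    return 0.05 * (x ** 2 + y ** 2)

def deviation_map(scanned_points):
    """Signed deviation of scanned points from the designed surface.

    scanned_points: (N, 3) array of x, y, z coordinates from a 3D scan.
    Returns an (N,) array; positive values bulge above the design.
    """
    x, y, z = scanned_points[:, 0], scanned_points[:, 1], scanned_points[:, 2]
    return z - designed_height(x, y)

# Synthetic "scan" of a robotically formed mold with small forming errors.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, (100, 2))
z = designed_height(xy[:, 0], xy[:, 1]) + rng.normal(0.0, 0.01, 100)
scan = np.column_stack([xy, z])

dev = deviation_map(scan)
print(f"mean |error| = {np.abs(dev).mean():.4f}, max |error| = {np.abs(dev).max():.4f}")
```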

    Using a Dynamic Performance Management approach to reinforce the benefits of territorial strategic planning

    The purpose of this paper is to show how system dynamics (SD) can be used to enrich performance management in local government and to foster a shared view of the relevant system’s structure and behavior among the stakeholders involved in territorial strategic planning. We begin by framing how capturing dynamic complexity through SD modeling can support consensus building among different stakeholders within a territory, which moves beyond the traditional view of strategic planning within the context of a single jurisdiction. As our case study shows, a Dynamic Performance Management (DPM) approach may help such players overcome possible barriers to collaboration, because it supports them in detecting how the pursuit of sustainable development of the territory’s performance affects the sustainability of each institution belonging to the territory. This implies that territorial public agencies, e.g. municipalities, can understand and communicate to their stakeholders that long-term performance cannot be assessed in financial terms or through output measures alone, but must also be assessed in relation to the outcomes that public services generate as value transferred to the territory. Likewise, enterprises operating in a given territory should be enabled to see that their own performance can be sustainable in the long run only if they generate not just financial capital but also social capital that benefits the other players in the territory. Therefore, a key to implementing a DPM approach is for each player to combine an institutional (single-player) perspective with an inter-institutional (multi-player, or territorial) perspective, with a view to enhancing performance and pursuing sustainable development. An inter-institutional perspective frames the territory (rather than a single institution) as the relevant system within which to capture and manage the cause-and-effect relationships between performance factors and strategic resources.
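
    The abstract builds on system dynamics, i.e. stock-and-flow models of how performance drivers accumulate over time. The toy model below is only a schematic illustration of that idea, not the case-study model; every variable name and parameter value is invented. It shows a territory's social capital (a stock) growing with collaborative public services and, in turn, raising the outcomes that feed a single institution's own resources.

```python
# Minimal stock-and-flow (system dynamics) simulation with Euler integration.
# All parameter values are invented for illustration.

def simulate(years=10, dt=0.25):
    social_capital = 10.0      # stock: value accumulated in the territory
    financial_capital = 5.0    # stock: resources of a single institution
    history = []
    for _ in range(int(years / dt)):
        # Flows: outcomes generated for the territory depend on both stocks.
        outcomes = 0.3 * social_capital + 0.2 * financial_capital
        social_inflow = 0.1 * outcomes          # services build social capital
        social_outflow = 0.05 * social_capital  # capital erodes if not renewed
        financial_inflow = 0.15 * outcomes      # outcomes attract resources
        # Euler integration of the stocks.
        social_capital += (social_inflow - social_outflow) * dt
        financial_capital += financial_inflow * dt
        history.append((social_capital, financial_capital, outcomes))
    return history

final_social, final_financial, final_outcomes = simulate()[-1]
print(f"social={final_social:.1f} financial={final_financial:.1f} outcomes={final_outcomes:.1f}")
```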

    Improving Usability in Procedural Modeling

    This work presents new approaches and algorithms for procedural modeling geared towards user convenience and usability, in order to increase artists’ productivity. Procedural models create geometry for 3D models from sets of rules. Existing approaches for modeling trees, buildings, and terrain are reviewed and possible improvements are discussed.
    A new visual programming language for procedural modeling is introduced, in which the user connects operators into visual programs called model graphs. These operators create textured geometry, assign or evaluate variables, or control the sequence of operations. When the user moves control points with the mouse in 3D space, the model graph is executed to change the geometry interactively. Thus, model graphs combine the creativity of freehand modeling with the power of programmed modeling while displaying the program structure more clearly than text-based approaches. Usability is increased as a result of these advantages.
    An interactive editor for botanical trees is also demonstrated. In contrast to previous tree modeling systems, we propose linking rules, parameters, and geometry to semantic entities. This has the advantage that problems of associating parameters with instances are completely avoided. When an entity is clicked in the viewport, its parameters are displayed immediately, changes are applied to the selected entities, and viewport editing operations are reflected in the parameter set. Furthermore, we store the entities in a hierarchical data structure and let the user activate recursive traversal via selection options for all editing operations. The user may choose to apply viewport or parameter changes to a single entity or to many entities at once, and only the geometry of the affected entities needs to be updated. The proposed user interface simplifies the modeling process and increases productivity.
    Interactive editing approaches for 3D models often allow more precise control over a model than a global set of parameters used to generate a shape. However, scripted procedural modeling usually generates shapes directly from a fixed set of parameters, and interactive editing mostly relies on a fixed set of tools. We propose using scripts not only to generate models but also to manipulate them. A base script sets up the state of an object, and tool scripts modify that state; both generate geometry when necessary. Together, such a collection of scripts forms a template, and templates can be created for various types of objects. We examine how templates simplify the procedural modeling workflow by enabling editing operations that are context-sensitive, flexible, and powerful at the same time.
    Many published algorithms produce geometry for fictional landscapes. Some produce terrain with minimal setup time and adapt the level of detail as the user zooms into the landscape, but they lack plausible river networks; algorithms that create eroded terrain with river networks require user supervision and minutes or hours of computation. In contrast, this work demonstrates an algorithm that creates terrain with plausible river networks and adaptive level of detail after no more than a few seconds of preprocessing. While the system can be configured through parameters, the text focuses on the algorithm that produces the rivers. However, additional tools for user-controlled editing of the terrain could be integrated.
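
    The semantic-entity idea from the tree editor can be sketched in a few lines. The example below is a hypothetical simplification of the described data structure, with invented class and parameter names: each entity bundles its parameters with its children, and an editing operation can be applied to a single entity or propagated recursively through the hierarchy, mirroring the selection options mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A semantic entity bundling parameters and child entities."""
    name: str
    params: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def set_param(self, key, value, recursive=False):
        """Apply a parameter change to this entity and, optionally, to all
        descendants (the 'recursive traversal' selection option)."""
        self.params[key] = value
        if recursive:
            for child in self.children:
                child.set_param(key, value, recursive=True)

# Hypothetical tree hierarchy: trunk -> branch -> twig.
twig = Entity("twig", {"length": 0.2})
branch = Entity("branch", {"length": 1.0}, [twig])
trunk = Entity("trunk", {"length": 4.0, "thickness": 0.5}, [branch])

trunk.set_param("thickness", 0.4)                  # affects only the trunk
branch.set_param("length", 1.2, recursive=True)    # affects branch and twig
print(twig.params)  # {'length': 1.2}
```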

    Defining Reality in Virtual Reality: Exploring Visual Appearance and Spatial Experience Focusing on Colour

    Today, different actors in the design process have communication difficulties in visualizing and predicting how the not yet built environment will be experienced. Visually believable virtual environments (VEs) can make it easier for architects, users and clients to participate in the planning process. This thesis deals with the difficulties of translating reality into digital counterparts, focusing on visual appearance (particularly colour) and spatial experience. The goal is to develop knowledge of how different aspects of a VE, especially light and colour, affect the spatial experience, and thus to contribute to a better understanding of the prerequisites for visualizing believable spatial VR-models. The main aims are to 1) identify problems and test solutions for simulating realistic spatial colour and light in VR; and 2) develop knowledge of the spatial conditions in VR required to convey believable experiences, and evaluate different ways of visualizing spatial experiences. The studies are conducted from an architectural perspective; i.e. the whole of the spatial settings is considered, which is a complex task. One important contribution therefore concerns the methodology. Different approaches were used: 1) a literature review of relevant research areas; 2) a comparison between existing studies on colour appearance in 2D vs 3D; 3) a comparison between a real room and different VR-simulations; 4) elaborations with an algorithm for colour correction; 5) reflections in action on a demonstrator for correct appearance and experience; and 6) an evaluation of texture-styles with non-photorealistic expressions. The results showed various problems related to the translation and comparison of reality to VR. The studies pointed out the significance of inter-reflections, colour variations, perceived colour of light, and shadowing for the visual appearance in real rooms. Some differences in VR were connected to arbitrary parameter settings in the software, heavily simplified chromatic information on illumination, and incorrect inter-reflections. The models were experienced differently depending on the application. Various spatial differences between reality and VR could be solved by visual compensation. The study with texture-styles pointed out the significance of varying visual expressions in VR-models.
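
    The "algorithm for colour correction" is not detailed in the abstract. As a hedged illustration of the general idea only, the sketch below fits a simple per-channel linear correction (gain and offset) that maps rendered VR colours towards colours measured in the real room, using least squares; the function names and sample values are invented.

```python
import numpy as np

def fit_channel_correction(rendered, measured):
    """Fit gain and offset per RGB channel so that
    measured ~= gain * rendered + offset (ordinary least squares)."""
    gains, offsets = [], []
    for c in range(3):
        A = np.column_stack([rendered[:, c], np.ones(len(rendered))])
        (gain, offset), *_ = np.linalg.lstsq(A, measured[:, c], rcond=None)
        gains.append(gain)
        offsets.append(offset)
    return np.array(gains), np.array(offsets)

def apply_correction(rgb, gains, offsets):
    return np.clip(rgb * gains + offsets, 0.0, 1.0)

# Invented calibration samples: colours rendered in VR vs. measured in the room.
rendered = np.array([[0.20, 0.30, 0.40], [0.50, 0.55, 0.60], [0.80, 0.70, 0.65]])
measured = np.array([[0.25, 0.28, 0.35], [0.55, 0.52, 0.55], [0.85, 0.68, 0.60]])

gains, offsets = fit_channel_correction(rendered, measured)
print(apply_correction(np.array([0.4, 0.5, 0.6]), gains, offsets))
```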

    Investigating User Experiences Through Animation-based Sketching

