
    Real-time human performance capture and synthesis

    Most of the images one finds in the media, such as on the Internet or in textbooks and magazines, contain humans as the main point of attention. Thus, there is an inherent need for industry, society, and private persons to be able to thoroughly analyze and synthesize the human-related content in these images. One aspect of this analysis, and a subject of this thesis, is inferring 3D pose and surface deformation from visual information alone, a task also known as human performance capture. Human performance capture enables virtual characters to be tracked from real-world observations, which is key for visual effects, games, VR, and AR, to name just a few application areas. However, traditional capture methods usually rely on multi-view (marker-based) systems that are prohibitively expensive for the vast majority of people, or on depth sensors, which are still far less common than single color cameras. Recently, some approaches have attempted to solve the task from a single RGB image alone. Nonetheless, they either cannot track the dense deforming geometry of the human, such as the clothing layers, or they run far from real time, which many applications require. To overcome these shortcomings, this thesis proposes two monocular human performance capture methods, which for the first time allow real-time capture of the dense deforming geometry, together with previously unseen 3D accuracy for pose and surface deformations. At the technical core, this work introduces novel GPU-based and data-parallel optimization strategies in conjunction with other algorithmic design choices, all geared towards real-time performance at high accuracy. Moreover, this thesis presents a new weakly supervised multi-view training strategy, combined with a fully differentiable character representation, that shows superior 3D accuracy. However, there is more to human-related Computer Vision than the analysis of people in images. It is equally important to synthesize new images of humans in unseen poses and from camera viewpoints that have not been observed in the real world. Such tools are essential for the movie industry because they allow, for example, the synthesis of photo-realistic virtual worlds with real-looking humans, or of content that is too dangerous for actors to perform on set. Video conferencing and telepresence applications can also benefit from photo-real 3D characters, as these enhance the immersive experience. Here, the traditional Computer Graphics pipeline for rendering photo-realistic images involves many tedious and time-consuming steps that require expert knowledge and are far from real time: character rigging and skinning, the modeling of surface appearance properties, and physically based ray tracing. Recent learning-based methods attempt to simplify the traditional rendering pipeline by learning the rendering function from data, resulting in methods that are more accessible to non-experts. However, most of them model the synthesis task entirely in image space, so that 3D consistency cannot be achieved, and/or they fail to model motion- and view-dependent appearance effects. To this end, this thesis presents a method, and ongoing work, on character synthesis that allows controllable, photo-real characters to be synthesized with motion- and view-dependent appearance effects as well as 3D consistency, and that runs in real time.
This is technically achieved by a novel coarse-to-fine geometric character representation for efficient synthesis, which can be supervised solely on multi-view imagery. Furthermore, this work shows how such a geometric representation can be combined with an implicit surface representation to boost synthesis and geometric quality.
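At its core, such a capture method minimizes an energy over the character's parameters so that the model reprojects onto the image evidence. The toy sketch below shows only the shape of that idea, a 2D reprojection term plus a soft bone-length prior, optimized with a naive finite-difference descent; the thesis's actual solvers use analytic derivatives and GPU data-parallel optimization, and every name, size, and constant here is a hypothetical stand-in.

```python
import numpy as np

def project(X, f=1000.0, c=500.0):
    """Pinhole projection of Nx3 camera-space points to pixel coordinates."""
    return f * X[:, :2] / X[:, 2:3] + c

def energy(X, x2d, bones, rest_len, lam=10.0):
    """2D reprojection data term plus a soft bone-length prior."""
    r = project(X) - x2d
    d = np.linalg.norm(X[bones[:, 0]] - X[bones[:, 1]], axis=1)
    return np.sum(r ** 2) + lam * np.sum((d - rest_len) ** 2)

def solve(X, x2d, bones, rest_len, iters=500, step=1e-7, h=1e-5):
    """Naive finite-difference gradient descent (for illustration only)."""
    for _ in range(iters):
        g = np.zeros(X.size)
        e0 = energy(X, x2d, bones, rest_len)
        for i in range(X.size):
            Xp = X.reshape(-1).copy()
            Xp[i] += h
            g[i] = (energy(Xp.reshape(X.shape), x2d, bones, rest_len) - e0) / h
        X = X - step * g.reshape(X.shape)
    return X

# Toy example: a 3-bone chain observed by a single camera.
bones = np.array([[0, 1], [1, 2], [2, 3]])
rest_len = np.array([0.3, 0.3, 0.3])
X0 = np.array([[0, 0, 3.0], [0, 0.3, 3.0], [0, 0.6, 3.0], [0, 0.9, 3.0]])
x2d = project(X0 + 0.01)                       # synthetic 2D "detections"
print(energy(solve(X0, x2d, bones, rest_len), x2d, bones, rest_len))
```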

    Computational models for fusion of texture and color : a comparative study


    Computer mediated colour fidelity and communication

    Developments in technology have meant that computer-controlled imaging devices are becoming more powerful and more affordable. Despite their increasing prevalence, computer-aided design and desktop publishing software has failed to keep pace, leading to disappointing colour reproduction across different devices. Although there has been a recent drive to incorporate colour management functionality into modern computer systems, in general this is limited in scope and fails to properly consider the way in which colours are perceived. Furthermore, differences in viewing conditions or representation severely impede the communication of colour between groups of users. The approach proposed here is to provide WYSIWYG colour across a range of imaging devices through a combination of existing device characterisation and colour appearance modelling techniques. In addition, to further facilitate colour communication, various common colour notation systems are defined by a series of mathematical mappings. This enables both the implementation of computer-based colour atlases (which have a number of practical advantages over physical specifiers) and the interrelation of colours represented in hitherto incompatible notations. Details are also given of a computer system which implements the proposed solution; the system was used by textile designers for a real task. Prior to undertaking this work, designers were interviewed to ascertain where colour played an important role in their work and where it proved problematic. A summary of the findings of these interviews is given, together with a survey of existing approaches to the problems of colour fidelity and communication in colour computer systems. As background to this work, the topics of colour science and colour imaging are introduced.
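The notation mappings mentioned above are, at heart, compositions of standard colorimetric transforms. As one hedged illustration (the thesis's actual notation systems and mappings are not reproduced here), the following sketch implements a well-known device-independent mapping, sRGB under D65 to CIELAB:

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB (D65) -> CIE XYZ
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0000, 1.0890])  # D65 reference white

def srgb_to_lab(rgb):
    """Map sRGB values in [0, 1] to CIELAB (L*, a*, b*)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)  # undo sRGB gamma
    t = (M @ lin) / WHITE
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

print(srgb_to_lab([1.0, 1.0, 1.0]))  # white -> approximately (100, 0, 0)
```

Chaining such forward and inverse transforms is what allows a colour specified in one notation to be reproduced faithfully in another.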

    Modelling colour properties for textiles


    Reproduction of Historic Costumes Using 3D Apparel CAD

    The progress of digital technology has brought about many changes. In the world of fashion, 3D apparel CAD is attracting attention as a highly promising technology that reduces time and cost in the design process through virtual simulation. This study highlights the potential of this technology and seeks to extend the boundaries of its practical use through the simulation of historical dresses. The aim of this study is to identify the desirable factors for digital costume development, to produce accurate digital reproductions of clothing from historical sources, and to investigate the implications of developing them as online exhibitory and educational materials. In order to achieve this, the study proceeded through the following process. First, the theoretical background of digital clothing technology, 3D apparel CAD, and museums and new media was established through a review of various materials. Second, desirable concepts for effective digital costumes were drawn from an analysis of earlier digital costume projects, considering the constraints of costume collections and the limitations of the data on museum websites: faithful reproduction, virtual fabrication, and interactive and stereographic display. Third, design development was carried out to embody these concepts, based on two costumes in the Museum of London: (1) preparation, which provided foundation data from the physical counterparts; (2) digital reproduction, which generated digital costumes through simulation; and (3) application development, where the simulations were embodied in a platform. Fourth, the outcomes were evaluated with different groups of participants. The evaluation results indicated that the outcomes functioned as an effective information delivery method and were suitable and applicable for exhibitory and educational use. However, further improvement, particularly in the faithfulness of the current digital costumes, and greater consideration of their virtual and intangible nature were identified as necessary. Nevertheless, digital costumes were found to bring notable benefits: complete or partial replacement of the relics, presentation of invisible features, release of physical constraints on appreciation, and provision of integrated and comprehensive information. This study expects that the use of digital costumes may assist museums in the preservation, documentation, and exhibition of costume collections, offering new possibilities especially for endangered garments lying in the dark.

    Enhancing the E-Commerce Experience through Haptic Feedback Interaction

    The sense of touch is important in our everyday lives, and its absence makes it difficult to explore and manipulate everyday objects. Existing online shopping practice lacks the opportunity for the physical evaluation that people often use and value when making product choices. However, with recent advances in haptic research and technology, it is possible to simulate various physical properties such as heaviness, softness, deformation, and temperature. The research described here investigates the use of haptic feedback interaction to enhance e-commerce product evaluation, particularly haptic weight and texture evaluation. While other properties are equally important, weight and texture are fundamental to the shopping experience for many online products and can be simulated using cost-effective devices. Two initial psychophysical experiments were conducted using free-motion haptic exploration in order to more closely resemble conventional shopping: one to measure weight force thresholds and another to measure texture force thresholds. These measurements provide a better understanding of haptic device limitations for online shopping in terms of the range of stimuli available to represent physical products. The outcomes of the initial psychophysical experiments were then used to produce the absolute stimuli for a comparative experimental study evaluating the user experience of haptic product evaluation. Although free haptic exploration was exercised in both psychophysical experiments, the results were largely consistent with previous work on haptic discrimination: the threshold for weight discrimination, represented as downward force, was 10 percent, and the threshold for texture discrimination, represented as friction force, was 14.1 percent when varying the dynamic coefficient of friction at any level of static friction. The comparative study indicated that haptic product evaluation does not change user performance significantly; although the time taken to complete the task increased, the number of button-click actions tended to decrease. The results showed that haptic product evaluation can significantly increase confidence in the shopping decision. Nevertheless, the availability of haptic product evaluation does not necessarily lead to different product choices; rather, it complements other selection criteria such as price and appearance. The findings from this work are a first step towards exploring haptic-based e-commerce environments: they not only lay the foundation for designing online haptic shopping but also provide empirical support for research in this direction.
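The two thresholds act as Weber fractions: a difference between two rendered forces is noticeable only if it exceeds the threshold relative to the reference force. A minimal sketch of that decision rule, using the 10 and 14.1 percent values reported above (names and example forces are hypothetical):

```python
WEIGHT_JND = 0.10     # Weber fraction for downward (weight) force
FRICTION_JND = 0.141  # Weber fraction for dynamic friction force

def distinguishable(reference, comparison, weber_fraction):
    """True if the relative force difference exceeds the Weber fraction."""
    return abs(comparison - reference) / reference > weber_fraction

# Two simulated product weights of 3.0 N and 3.2 N differ by ~6.7%,
# below the 10% threshold, so shoppers would likely feel them as equal.
print(distinguishable(3.0, 3.2, WEIGHT_JND))      # False
# Friction coefficients 0.30 vs 0.36 differ by 20% -> noticeable.
print(distinguishable(0.30, 0.36, FRICTION_JND))  # True
```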

    Engineered repeating prints: computer-aided design approaches to achieving continuity of repeating print across a garment using digital engineered print method

    This Master’s research investigated approaches for the engineering of repeating prints using digital textile printing technology and universally available computer-aided design software. Current practices for aligning designs at garment seams in yardage-printed fabrics are wasteful and do not allow for mass customisation. This inefficiency can be overcome with engineered digital printing, a method that integrates prints with garment patterns to generate Ready-to-Print images. Engineered printing offers more cost-effective use of materials, improved visual appearance, potential for mass customisation, and more sustainable manufacturing. Still, technical difficulties exist in the integration of prints with garment patterns; as a result, apparel applications are limited to non-repeating prints and one-off fashion-show garments. The integration of repeating prints presents even greater difficulties. However, advances in digital printing and computer-aided design technologies call for an examination of possible approaches to achieving improved continuity of a repeating print across a garment. The research used a three-stage mixed-method approach. The first, qualitative stage examined current practices for the design and printing of repeating prints. Through Applied Thematic Analysis, the diversity of meanings assigned to words describing attributes of repeating prints, a result of historical and current usage, was identified and the terminology consolidated. A taxonomy of repeating print attributes was established with three levels: a superordinate level for a surface, a basic level for a repeat, and a subordinate level for a motif. Quantifiable attributes of repeating prints were assigned to each level. The analysis also suggested three potential directions for engineered repeating prints: Modularity Design, Flexible Tiling, and Distortion. The second, quantitative stage evaluated the suggested design directions in four experimental studies: one for each direction and a final study combining all three to engineer repeating prints for a graded garment. Practical computer-aided design techniques, based on accessible Adobe software tools, were developed for the integration of repeating prints with garment patterns and tested against mainstream printing practices. In each experiment, repeating print attributes were examined for their impact on the adaptability of repeating prints for engineered printing. All three directions were validated as suitable for the engineering of repeating prints, and statistical analyses revealed relationships between repeating print attributes and their impact on adaptability for the engineered printing method. The final stage analysed the combined results of the previous two stages. Existing computer-aided design solutions were found to offer opportunities for integration into current digital production for innovative and sustainable engineered printing. While the suggested techniques require knowledge of more advanced dynamic editing tools, the research highlights the benefits for both fashion and textile designers of utilising such tools in order to fully embrace the potential that digital printing technology has to offer. The research also highlights the need for dedicated software solutions for the integration of repeating prints with garment patterns; the findings on the impact of repeating print attributes on adaptability for engineered printing can help in the development of such software.
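To make the Flexible Tiling direction concrete, the sketch below computes tile origins for a half-drop repeat over the bounding box of a garment pattern piece. This is a hypothetical, minimal illustration; the research itself worked with Adobe's dynamic editing tools rather than code, and the dimensions are invented:

```python
def half_drop_origins(piece_w, piece_h, tile_w, tile_h):
    """Yield (x, y) origins of repeat tiles covering a piece_w x piece_h
    bounding box, with every other column dropped by half a tile height."""
    col, x = 0, 0.0
    while x < piece_w:
        y = -(tile_h / 2.0) if col % 2 else 0.0  # half-drop offset
        while y < piece_h:
            yield (x, y)
            y += tile_h
        x += tile_w
        col += 1

# A 60 x 80 cm pattern piece filled with a 10 x 12 cm repeat:
origins = list(half_drop_origins(60, 80, 10, 12))
print(len(origins), origins[:4])
```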

    Parametric BIM-based Design Review

    This research addressed the need for a new design review technology and method to express the tangible and intangible qualities of the architectural experience of parametric BIM-based design projects. The research produced an innovative presentation tool with which parametric designs are presented systematically, and focus groups assessed the tool to reveal the usefulness of a parametric BIM-based design review method. The way in which we visualize architecture affects the way we design and perceive architectural form and performance. Contemporary architectural forms and systems are very complex, yet most architects who use Building Information Modeling (BIM) and generative design methods still embrace two-dimensional, 15th-century Albertian representational methods to express and review design projects. However, architecture cannot be fully perceived through a set of drawings that mediate our perception and evaluation of the built environment. The systematic and conventional approach of traditional architectural representation, in paper-based and slide-based design reviews, can visualize neither phenomenal experience nor the inherent variation and versioning of parametric models. Pre-recorded walk-throughs with high-quality rendering and imaging have been in use for decades, but high-verisimilitude interactive walk-throughs are not commonly used in architectural presentations. The new generations of parametric and BIM systems allow for the quick production of design variations by varying design parameters and their relationships. However, there is a lack of tools capable of conducting design reviews that engage these advantages of parametric and BIM design projects. Given the multitude of possibilities of in-game interface design, game engines provide an opportunity for creating an interactive, parametric, and performance-oriented experience of architectural projects with multiple design options. This research has produced a concept for a dynamic presentation and review tool and method intended to meet the needs of parametric design, performance-based evaluation, and optimization of multi-objective design options. The concept is illustrated and tested using a prototype (Parametric Design Review, or PDR) based upon an interactive gaming environment equipped with a novel user interface that simultaneously engages the parametric framework, object parameters, and multi-objective optimized design options and their performances with diagrammatic, perspectival, and orthographic representations. The prototype was presented to representative users in multiple focus group sessions. Focus group discussion data reveal that the proposed PDR interface was perceived as useful for design reviews in both academic and professional practice settings.
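The variation and versioning that PDR exposes can be pictured as enumerating points in a design's parameter space, with each point a reviewable design option. A minimal, hypothetical sketch of that idea (the parameter names are invented, not taken from the prototype):

```python
from itertools import product

# Hypothetical parameters a reviewer might sweep during a session.
parameters = {
    "panel_width_m": [1.2, 1.5],
    "glazing_ratio": [0.3, 0.5, 0.7],
    "shading_depth_m": [0.0, 0.4],
}

def design_options(params):
    """Enumerate every combination of parameter values as one option."""
    names = list(params)
    for values in product(*params.values()):
        yield dict(zip(names, values))

options = list(design_options(parameters))
print(len(options))  # 2 * 3 * 2 = 12 candidate designs to review
```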

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in various fields, such as computer-generated films, entertainment, and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea that involves many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. This thesis, besides an introduction to the topic and a broad overview of related work, describes methods to handle the major aspects of cloth rendering. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material just by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of these reflection properties. It captures effects caused by the mesostructure of a surface, such as roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering, and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the memory of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow several different data sets to be stored in the main memory of consumer hardware while maintaining high rendering quality. Correct handling of complex illumination conditions plays another key role in the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described, which adds support for distant direct HDR (high-dynamic-range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account: for the visualization of folds and a life-like 3D impression, such shadows are absolutely necessary. This work describes two methods to compute them. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. The second method handles dynamic objects, using hardware-accelerated occlusion queries for visibility determination. Despite its simplicity, this algorithm is fast and produces fewer artifacts than other methods, and it also incorporates changeable distant direct high-dynamic-range illumination. The human perception system is the main target of any computer graphics application and can itself be treated as part of the rendering pipeline; rendering can therefore be optimized by analyzing how humans perceive certain visual aspects of the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception in order to speed up shadow rendering, and optimization approaches are derived from it. Another subarea of cloth visualization is the animation of cloth and avatars for presentations. This work also describes two new methods for the automatic generation and compression of animation sequences.
The first method generates completely new, customizable animation sequences and is based on the concept of finding similarities (for example, in the velocities of individual object parts) among the frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence, from which endless new sequences can be generated. The transmission of animated 3D data over bandwidth-limited channels, such as wide-area networks, mobile links, or connections to less powerful clients, requires efficient compression schemes. The second method in the animation field is a geometry data compression scheme. Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very exact reconstruction quality.
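The PCA idea at the heart of both compression schemes can be sketched compactly: stack the per-view/per-light BTF images (or per-frame vertex positions) as columns of a matrix, keep the top-k principal components, and reconstruct with one small matrix product. The sketch below uses toy sizes and random data purely to show the mechanics, not the thesis's measured data or clustering refinements:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_images, k = 4096, 512, 16
btf = rng.standard_normal((n_pixels, n_images))  # stand-in for a BTF matrix

mean = btf.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(btf - mean, full_matrices=False)

basis = U[:, :k]                 # k "eigen-textures" (n_pixels x k)
coeffs = s[:k, None] * Vt[:k]    # k coefficients per view/light image

recon = basis @ coeffs + mean    # decompression: one small matmul
ratio = btf.size / (basis.size + coeffs.size + mean.size)
print(f"compression ratio ~{ratio:.1f}x")
```

On real BTF data the columns are highly correlated, so a small k preserves the visually important components; random data, as here, would of course reconstruct poorly.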