
    An enhanced framework for hair modeling and real-time animation

    Master of Science thesis

    DeepSketchHair: Deep Sketch-based 3D Hair Modeling

    We present DeepSketchHair, a deep-learning-based tool for the interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as a reference, our sketching system takes as input a user-drawn sketch (consisting of a hair contour and a few strokes indicating the hair growth direction within the hair region) and automatically generates a 3D hair model that matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by a third deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained on synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and compare our method with prior art.
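    The S2ONet → O2VNet pipeline described above can be sketched as a data-flow skeleton. The stand-ins below are hypothetical, not the paper's learned networks: the sketch-to-orientation step is replaced by a crude stroke diffusion and the 2D-to-3D lift by a zero-depth placeholder, purely to make the tensor shapes concrete.

```python
import numpy as np

def s2o_net(sketch_strokes, size=64):
    """Placeholder for S2ONet: sparse strokes -> dense 2D orientation field.
    Each stroke sample is (x, y, dx, dy); output is (H, W, 2) unit vectors."""
    field = np.zeros((size, size, 2))
    for (x, y, dx, dy) in sketch_strokes:
        n = np.hypot(dx, dy) or 1.0
        field[y, x] = (dx / n, dy / n)
    # crude densification: iterative neighbourhood averaging ("diffusion"),
    # re-pinning the user's stroke constraints after every pass
    for _ in range(50):
        field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                        + np.roll(field, 1, 1) + np.roll(field, -1, 1))
        for (x, y, dx, dy) in sketch_strokes:
            n = np.hypot(dx, dy) or 1.0
            field[y, x] = (dx / n, dy / n)
    return field

def o2v_net(orient2d, depth=16):
    """Placeholder for O2VNet: lift the 2D field to a 3D vector volume.
    The real network predicts depth variation; here z is simply zero."""
    h, w, _ = orient2d.shape
    vol = np.zeros((depth, h, w, 3))
    vol[..., :2] = orient2d          # broadcast the 2D field over depth
    return vol

strokes = [(10, 10, 1.0, 0.5), (40, 20, 0.0, 1.0)]
v3d = o2v_net(s2o_net(strokes))
print(v3d.shape)                     # (16, 64, 64, 3)
```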

    An investigation of hair modelling and rendering techniques with emphasis on African hairstyles

    Many computer graphics applications make use of virtual humans. Methods for modelling and rendering hair are needed so that hairstyles can be added to the virtual humans. Modelling and rendering hair is challenging due to the large number of hair strands and their geometric properties, the complex lighting effects that occur among the strands of hair, and the complexity and large variation of human hairstyles. While methods have been developed for generating hair, no methods exist for generating African hair, which differs from hair of other ethnic groups. This thesis presents methods for modelling and rendering African hair. Existing hair modelling and rendering techniques are investigated, and the knowledge gained from the investigation is used to develop or enhance hair modelling and rendering techniques to produce three different forms of hair commonly found in African hairstyles. The different forms of hair identified are natural curly hair, straightened hair, and braids or twists of hair. The hair modelling techniques developed are implemented as plug-ins for the graphics program LightWave 3D. The plug-ins developed not only model the three identified forms of hair, but also add the modelled hair to a model of a head, and can be used to create a variety of African hairstyles. The plug-ins significantly reduce the time spent on hair modelling. Tests performed show that increasing the number of polygons used to model hair increases the quality of the hair produced, but also increases the rendering time. However, there is usually an upper bound to the number of polygons needed to produce a reasonable hairstyle, making it feasible to add African hairstyles to virtual humans. The rendering aspects investigated include hair illumination, texturing, shadowing and antialiasing. An anisotropic illumination model is developed that considers the properties of African hair, including the colouring, opacity and narrow width of the hair strands. 
    Texturing is used in several instances to create the effect of individual strands of hair. Results show that texturing is useful for representing many hair strands because the density of the hair in a texture map does not have an effect on the rendering time. The importance of including a shadowing technique and applying an anti-aliasing method when rendering hair is demonstrated. The rendering techniques are implemented using the RenderMan Interface and Shading Language. A number of complete African hairstyles are shown, demonstrating that the techniques can be used to model and render African hair successfully.
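    The thesis develops its own anisotropic illumination model tuned to African hair. As a general illustration of how anisotropic strand shading works (this is the classic Kajiya-Kay model, not the thesis's model), the lighting terms are computed from the strand tangent rather than a surface normal:

```python
import numpy as np

def kajiya_kay(tangent, light, view, kd=0.6, ks=0.3, shininess=40.0):
    """Classic Kajiya-Kay anisotropic strand shading (illustrative only;
    coefficients are arbitrary). All direction vectors must be normalized."""
    t_dot_l = np.clip(np.dot(tangent, light), -1.0, 1.0)
    t_dot_v = np.clip(np.dot(tangent, view), -1.0, 1.0)
    sin_tl = np.sqrt(1.0 - t_dot_l ** 2)   # sin of angle between T and L
    sin_tv = np.sqrt(1.0 - t_dot_v ** 2)
    diffuse = kd * sin_tl
    specular = ks * max(0.0, t_dot_l * t_dot_v + sin_tl * sin_tv) ** shininess
    return diffuse + specular

t = np.array([1.0, 0.0, 0.0])   # strand runs along x
l = np.array([0.0, 0.0, 1.0])   # light from +z
v = np.array([0.0, 0.0, 1.0])   # camera at +z
print(round(kajiya_kay(t, l, v), 3))   # diffuse 0.6 + specular 0.3
```

    Because the model depends only on the tangent, the highlight stretches along the strand, which is the anisotropic behaviour the thesis exploits for narrow, dark hair fibres.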

    Image-Based Approaches to Hair Modeling

    Hair is an important characteristic of virtual characters; therefore, the modeling of plausible facial hair and hairstyles is an essential step in the generation of computer-generated (CG) avatars. However, the inherent geometric complexity of hair, together with the huge number of filaments on an average human head, makes the task of modeling hairstyles a very challenging one. To date this is commonly a manual process which requires artistic skill or very specialized and costly acquisition software. In this work we present an image-based approach to model facial hair (beard and eyebrows) and head hairstyles. Since facial hair is usually much shorter than average head hair, two different methods are presented, adapted to the characteristics of the hair to be modeled. Facial hair is modeled using data extracted from facial texture images, and missing information is inferred by means of a database-driven prior model. Our hairstyle reconstruction technique employs images of the hair to be modeled taken with a thermal camera. The major advantage of our thermal image-based method over conventional image-based techniques lies in the fact that during data capture the hairstyle is "lit from the inside": the thermal camera captures heat irradiated by the head and actively re-emitted by the hair filaments almost isotropically. Following this approach we can avoid several issues of conventional image-based techniques, such as shadowing or anisotropy in reflectance. The presented technique requires minimal user interaction and a simple acquisition setup. Several challenging examples demonstrate the potential of the proposed approach.
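    Image-based hair capture pipelines of this kind typically begin by estimating a dense per-pixel strand orientation field from the input image. The structure-tensor sketch below is a generic version of that first step (an assumption for illustration, not necessarily the thesis's thermal-specific method):

```python
import numpy as np

def strand_orientation(img):
    """Per-pixel strand orientation from the 2x2 image structure tensor.
    The strand direction is perpendicular to the dominant gradient
    direction; returned angles lie in [0, pi)."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy
    # gradient direction = 0.5 * atan2(2 Jxy, Jxx - Jyy); rotate by 90 deg
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0
    return np.mod(theta, np.pi)

# synthetic "strands": horizontal stripes should yield angle ~0 (horizontal)
img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 32))[:, None], (1, 32))
theta = strand_orientation(img)
print(theta[1, 5])   # ~0.0 where the stripe gradient is strong
```

    In a real pipeline this field would then be traced into 2D strands and lifted to 3D; pixels with weak gradients (stripe extrema here) give unreliable angles and are usually masked by a confidence measure.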

    Hairstyle modelling based on a single image.

    Hair is an important feature of character appearance in both the film and video game industries. Hair grooming and combing for virtual characters was traditionally an exclusive task for professional designers because of its requirements for both technical manipulation and artistic inspiration. However, this manual process is time-consuming and limits the flexibility of customised hairstyle modelling. In addition, virtual hairstyles are hard to manipulate due to intrinsic hair shape. The fast development of related industrial applications demands an intuitive tool that lets non-professional users efficiently create realistic hairstyles. Recently, image-based hair modelling has been investigated for generating realistic hairstyles. This thesis demonstrates a framework, Struct2Hair, that robustly captures a hairstyle from a single portrait input. Specifically, 2D hair strands are first traced from the input with the help of image-processing enhancement. Then the coarse-level 2D hair sketch of the hairstyle is extracted from the generated 2D hair strands by clustering. To address the inherently ill-posed single-view reconstruction problem, a critical hair shape database has been built by analysing an existing hairstyle model database. A critical hair shape is a group of hair strands that possess similar shape appearance and close spatial location. Once the prior shape knowledge is prepared, a hair shape descriptor (HSD) is introduced to encode the structure of the target hairstyle. The HSD is constructed by retrieving and matching corresponding critical hair shape centres in the database. The full-head hairstyle is reconstructed by uniformly diffusing the hair strands on the scalp surface under the guidance of the extracted HSD.
    The produced results are evaluated and compared with state-of-the-art image-based hair modelling methods. The findings of this thesis lead to some promising applications, such as blending hairstyles to create novel hair models, hairstyle editing (adding fringe hair, curling, and cutting/extending the hairstyle), and a case study of bas-relief hair modelling on pre-processed hair images.
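    The grouping of traced 2D strands into coarse "critical hair shapes" is essentially a clustering step. A minimal sketch using plain k-means over fixed-length strand descriptors (the 16-D descriptor layout and data are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k=2, iters=20):
    """Plain k-means; stands in for the thesis's clustering that groups
    similar, nearby 2D strands into critical hair shape centres."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each strand descriptor to its nearest centre
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute each centre as the mean of its members
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# each strand: 8 sample points flattened to a 16-D descriptor; two groups
# of strands concentrated around two distinct shape centres
left = rng.normal(0.0, 0.05, (20, 16))
right = rng.normal(1.0, 0.05, (20, 16))
labels, centers = kmeans(np.vstack([left, right]))
```

    The resulting cluster centres play the role of the critical hair shape centres that the HSD later retrieves and matches against the database.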

    Angle Resolved Polarization and Vibrational Studies of Transition Metal Trichalcogenides and Related Alloys

    A new class of layered materials called the transition metal trichalcogenides (TMTCs) exhibits strong anisotropic properties due to its quasi-1D nature. These 2D materials are composed of chain-like structures which are weakly bound to form planar sheets with highly directional properties. The vibrational properties of three materials from the TMTC family, specifically TiS3, ZrS3, and HfS3, are relatively unknown, and the studies performed in this work elucidate the origin of their Raman characteristics. The crystals were synthesized through chemical vapor transport prior to mechanical exfoliation onto Si/SiO2 substrates. XRD, AFM, and Raman spectroscopy were used to determine the crystallinity, thickness, and chemical signature of the exfoliated crystals. Vibrational modes and anisotropic polarization are investigated through density functional theory calculations and angle-resolved Raman spectroscopy. Particular Raman modes are explored in order to correlate select peaks with the b-axis crystalline direction. The Mode III vibrations of TiS3, ZrS3, and HfS3 are shared between the materials and serve as a unique identifier of the crystalline orientation in MX3 materials. Similar angle-resolved Raman studies were conducted on the novel Nb0.5Ti0.5S3 alloy grown through chemical vapor transport. Results show that the anisotropy direction is more difficult to determine due to the randomization of quasi-1D chains caused by defects that are common in 2D alloys. This work provides a fundamental understanding of the vibrational properties of various TMTC materials, which is needed to realize applications in direction-dependent polarization and linear dichroism.
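    Correlating a polarized Raman peak with the in-plane crystal axis is typically done by fitting the angular intensity profile. The sketch below uses the common (a cos² + b sin²)² form for a symmetric mode with synthetic data (the parameters and the brute-force fit are illustrative assumptions, not the thesis's analysis):

```python
import numpy as np

# measurement angles and a synthetic angle-resolved intensity profile with
# a hypothetical b-axis at 30 degrees
theta = np.deg2rad(np.arange(0, 180, 5))
phi_true = np.deg2rad(30.0)
a, b = 1.0, 0.4      # illustrative Raman tensor amplitudes
I = (a * np.cos(theta - phi_true) ** 2 + b * np.sin(theta - phi_true) ** 2) ** 2

def model(phi):
    """I(theta) for a candidate axis angle phi."""
    return (a * np.cos(theta - phi) ** 2 + b * np.sin(theta - phi) ** 2) ** 2

# brute-force grid search over candidate axis angles (0.25 deg steps)
phis = np.deg2rad(np.linspace(0, 180, 721))
residuals = [np.sum((model(p) - I) ** 2) for p in phis]
phi_fit = np.rad2deg(phis[int(np.argmin(residuals))])
print(round(phi_fit, 1))   # recovers ~30.0 degrees
```

    With real spectra the amplitudes a and b would be fitted alongside the angle, and the two-fold ambiguity of the 180-degree period is resolved by comparing mode intensities, as the thesis does for the Mode III peaks.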

    Coinage Metal Silylphosphido Complexes Stabilized by N-Heterocyclic Carbene Ligands

    N-Heterocyclic carbenes (NHCs) are strong σ-donating ligands and thus promising candidates for decorating and stabilizing metal-phosphide nanoclusters. While much research has focused on the coordination of NHC ligands to different coinage metal centers in order to synthesize mononuclear organometallic complexes, their application in nanocluster chemistry has been relatively unexplored. The work described in this thesis involves the employment of NHC ligands to stabilize coinage metal t-butylthiolate and silylphosphido complexes. These complexes are promising molecular precursors for the formation of larger NHC-stabilized nanoclusters. In particular, the ligation of NHCs to [CuStBu] and [AgStBu] was developed as an alternative to PR3 ligands as solubilizing reagents for these coordination polymers in order to form polynuclear copper and silver t-butylthiolate clusters. 1,3-Di-isopropylbenzimidazol-2-ylidene (iPr2-bimy) and 1,3-di-isopropyl-4,5-dimethylimidazol-2-ylidene (iPr2-mimy) were ligated to [CuStBu] and [AgStBu], forming [Cu4(StBu)4(iPr2-bimy)2] (1), [Cu4(StBu)4(iPr2-mimy)2] (2), [Ag4(StBu)4(iPr2-bimy)2] (5) and [Ag5(StBu)6][Ag(iPr2-mimy)2] (6). For comparison, the trialkyl phosphines PnPr3 and PiPr3 were also used to solubilize [AgStBu] and [CuStBu] to form copper and silver t-butylthiolate clusters. [Cu4(StBu)4(PnPr3)2] (3), [Cu4(StBu)4(PiPr3)2] (4), [Ag4(StBu)4(PnPr3)2] (7), and [Ag6(StBu)6(PiPr3)2] (8) were thus formed upon reaction with [CuStBu] and [AgStBu]. The synthesized complexes have been characterized via spectroscopic and crystallographic methods. The molecular structures of the clusters, which vary according to the ligand type, are described. Moreover, the facile preparation and structural characterization of [M6{P(SiMe3)2}6] (M = Ag, Cu) is reported. These complexes show limited stability towards solvent loss at ambient temperature; however, NHC ligands were used to synthesize more thermally stable metal-silylphosphido compounds.
    iPr2-bimy and 1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene (IPr) are found to be excellent ligands for stabilizing silylphosphido-copper compounds that show higher stability when compared to [Cu6{P(SiMe3)2}6] (9). Furthermore, iPr2-bimy is found to be an excellent ligand for the stabilization of silver–phosphorus polynuclear complexes. The straightforward preparation and characterization of the clusters [Ag12(PSiMe3)6(iPr2-bimy)6] (13) and [Ag26P2(PSiMe3)10(iPr2-bimy)8] (14) are described, representing the first examples of such structurally characterized, higher-nuclearity complexes obtained using this class of ligands. Lastly, iPr2-bimy and IPr were successfully utilized in the facile preparation of four gold silylphosphido complexes: [IPrAuP(Ph)SiMe3] (15), [IPrAuP(SiMe3)2] (16), [(iPr2-bimy)AuP(Ph)SiMe3] (17), and [(iPr2-bimy)AuP(SiMe3)2] (18). Furthermore, the reactivity of the P−Si bond in 15 and 17 was explored via the addition of PhC(O)Cl. The products of these reactions were [(IPrAu)2PPhC(O)Ph][AuCl2] (19) and PPh(C(O)Ph)2 (20), respectively, along with the elimination of ClSiMe3.

    Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion

    Realistic animation of long human hair is difficult due to the number of hair strands and the complexity of their interactions. Existing methods remain limited to smooth, uniform, and relatively simple hair motion. We present a powerful adaptive approach to modeling the dynamic clustering behavior that characterizes complex long-hair motion. The Adaptive Wisp Tree (AWT) is a novel control structure that approximates both the large-scale coherent motion of hair clusters and the small-scale variation of individual hair strands. The AWT also aids computational efficiency by identifying regions where visible hair motion is likely to occur. The AWT is coupled with a multiresolution geometry used to define the initial hair model. This combined system produces stable animations that exhibit the natural effects of clustering and mutual hair interaction. Our results show that the method is applicable to a wide variety of hairstyles.
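    The split/merge behaviour of the Adaptive Wisp Tree can be caricatured with a tiny node class: a parent wisp simulates as a single cluster while its children move coherently, and activates the children when their motions diverge. This is an illustrative sketch with scalar velocities and an arbitrary threshold, not the paper's implementation:

```python
class WispNode:
    """One node of a toy adaptive wisp tree. `active` marks which level
    of the hierarchy is currently simulated."""

    def __init__(self, velocity, children=()):
        self.velocity = velocity          # scalar stand-in for wisp motion
        self.children = list(children)
        self.active = not self.children   # leaves simulate by default

    def update(self, split_threshold=1.0):
        """Refine when child motion diverges from the parent; coarsen
        back to a single wisp when the motion is coherent."""
        if not self.children:
            return
        spread = max(abs(c.velocity - self.velocity) for c in self.children)
        if spread > split_threshold:      # divergent: simulate children
            self.active = False
            for c in self.children:
                c.active = True
        else:                             # coherent: simulate as one wisp
            self.active = True
            for c in self.children:
                c.active = False

root = WispNode(0.0, [WispNode(0.2), WispNode(2.5)])
root.update()                             # child at 2.5 forces a split
active = [n.active for n in root.children]
```

    In the paper's full system the same idea runs over a deep tree every frame, so computation concentrates in the regions where visible clustering or separation is actually occurring.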