
    3D Textile Reconstruction based on KinectFusion and Synthesized Texture

    Purpose The purpose of this paper is to present a novel framework for reconstructing 3D textile models with synthesized texture. Design/methodology/approach First, a pipeline of 3D textile reconstruction based on KinectFusion is proposed to obtain a better 3D model. Second, the “DeepTextures” method is applied to generate new textures for various three-dimensional textile models. Findings Experimental results show that the proposed method can conveniently reconstruct a three-dimensional textile model with synthesized texture. Originality/value A novel pipeline is designed to obtain high-quality 3D textile models based on KinectFusion. The accuracy and robustness of KinectFusion are improved via a turntable. To the best of the authors’ knowledge, this is the first paper to explore synthesized textile texture for 3D textile models. This is not simply a matter of mapping texture onto the 3D model; it also explores the application of artificial intelligence in the field of textiles
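    The “DeepTextures” method mentioned above is, presumably, the Gram-matrix texture statistic of Gatys et al.; a minimal numpy sketch of that statistic, with synthetic feature maps standing in for real CNN activations:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map,
    normalised by the number of spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def texture_loss(feat_synth, feat_ref):
    """Squared Frobenius distance between Gram matrices, the texture
    statistic minimised in Gatys-style texture synthesis."""
    g_s, g_r = gram_matrix(feat_synth), gram_matrix(feat_ref)
    return float(np.sum((g_s - g_r) ** 2))

# synthetic stand-in for one layer of CNN activations
rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 16, 16))
assert texture_loss(ref, ref) == 0.0  # identical statistics -> zero loss
```

    In the full method this loss is minimised over the synthesized image by gradient descent, which is what lets a new texture be generated for each 3D textile model.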

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture appearance models of cloth, especially when considering computer-aided design of cloth. Previous methods can be used to produce highly realistic images; however, possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process. These are optical properties of fibers, geometrical properties of yarns and compositional elements such as weave patterns. We introduce a geometric yarn model, integrating state-of-the-art textile research. We further present an approach to reverse engineer cloth and estimate parameters for a procedural cloth model from single images. This includes the automatic estimation of yarn paths, yarn widths, their variation and a weave pattern. We demonstrate that we are able to match the appearance of original cloth samples in an input photograph for several examples. Parameters of our model are fully editable, enabling intuitive appearance design. Unfortunately, such explicit fiber-based models can only be used to render small cloth samples, due to large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach combining the strength of a procedural model of micro-geometry with the efficiency of BTFs. We propose a method for the computation of synthetic BTFs using Monte Carlo path tracing of micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting structural self-similarity, we can reduce rendering times by one order of magnitude.
This is done in a process we call non-local image reconstruction, which has been inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently only take a few minutes for small BTFs. We finally propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model, approximating the distribution of yarn fibers, a prohibitively costly, explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabrics becomes practical without sacrificing much generality compared to fiber-based techniques
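    The non-local reconstruction idea (path-trace only a few ABRDFs and reuse each for its structurally similar neighbours) can be sketched as follows; the BTF matrix, representative indices and similarity measure below are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def nonlocal_reconstruct(abrdfs, rep_idx):
    """Toy non-local reconstruction: only the columns in rep_idx are
    'expensively path-traced'; every other ABRDF column is replaced by
    its most similar representative (L2 distance), exploiting the
    structural self-similarity of the BTF."""
    reps = abrdfs[:, rep_idx]
    # squared distance from every column to every representative
    d = ((abrdfs[:, :, None] - reps[:, None, :]) ** 2).sum(axis=0)
    return reps[:, d.argmin(axis=1)]

rng = np.random.default_rng(1)
# 4 distinct ABRDFs, each repeated 16 times -> a 32 x 64 'BTF matrix'
btf = np.repeat(rng.standard_normal((32, 4)), 16, axis=1)
recon = nonlocal_reconstruct(btf, rep_idx=[0, 16, 32, 48])
assert np.allclose(recon, btf)  # representatives cover all 4 ABRDFs
```

    Only the representative columns need full Monte Carlo rendering, which is where the order-of-magnitude speed-up comes from.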

    Woven Fabric Model Creation from a Single Image

    We present a fast, novel image-based technique for reverse engineering woven fabrics at the yarn level. These models can be used in a wide range of interior design and visual special effects applications. In order to recover our pseudo-BTF, we estimate the 3D structure and a set of yarn parameters (e.g. yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern, and from this build a data set. In contrast, however, we use a combination of image space analysis, frequency domain analysis and, in challenging cases, matching of image statistics with those from previously captured known patterns. Our method determines, from a single digital image captured with a DSLR camera under controlled uniform lighting, the woven cloth structure, depth and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure and therefore we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width and hence the volume occupied by the yarns, and colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using: (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern and (d) the rendered result.
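    The frequency-domain cue for yarn spacing can be illustrated with a toy example; the scanline below is synthetic, and the period estimator is a simplified stand-in for the paper's Fourier analysis:

```python
import numpy as np

def dominant_period(profile):
    """Estimate the repeat period (in pixels) of a 1-D intensity
    profile from the strongest peak in its Fourier spectrum, the kind
    of frequency-domain cue used to recover yarn spacing."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    k = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
    return len(profile) / k

# synthetic scanline: 8 yarns across 256 pixels -> period of 32 px
x = np.arange(256)
scanline = 0.5 + 0.5 * np.cos(2 * np.pi * x / 32)
assert dominant_period(scanline) == 32.0
```

    A real image would need this applied per row and column (warp and weft directions) on noisy data; the principle of reading yarn spacing off the spectral peak is the same.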

    Making Tactile Textures with Predefined Affective Properties

    A process for the design and manufacture of 3D tactile textures with predefined affective properties was developed. Twenty-four tactile textures were manufactured. Texture measures from the domain of machine vision were used to characterize the digital representations of the tactile textures. To obtain affective ratings, the textures were touched, unseen, by 107 participants who scored them against natural, warm, elegant, rough, simple, and like, on a semantic differential scale. The texture measures were correlated with the participants' affective ratings using a novel feature subset evaluation method and a partial least squares genetic algorithm. Six measures were identified that are significantly correlated with human responses and are unlikely to have occurred by chance. Regression equations were used to select 48 new tactile textures that had been synthesized using mixing algorithms and which were likely to score highly against the six adjectives when touched by participants. The new textures were manufactured and rated by participants. It was found that the regression equations gave excellent predictive ability. The principal contribution of the work is the demonstration of a process, using machine vision methods and rapid prototyping, which can be used to make new tactile textures with predefined affective properties
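    The regression step (fitting texture measures to affective ratings, then predicting scores for candidate textures) can be sketched with ordinary least squares on synthetic data; the paper itself uses partial least squares with a genetic algorithm for feature selection, so this is only the general shape of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: 24 textures x 6 machine-vision texture measures,
# rated on one affective scale (e.g. 'rough'); all values synthetic
X = rng.standard_normal((24, 6))
true_w = np.array([1.5, -0.8, 0.0, 0.3, 0.0, 2.0])
ratings = X @ true_w

# fit a regression equation of the kind used to screen candidates
Xa = np.column_stack([X, np.ones(len(X))])  # add intercept column
w, *_ = np.linalg.lstsq(Xa, ratings, rcond=None)

# predict the rating a new (synthetic) texture would receive
new_texture = rng.standard_normal(6)
predicted = np.append(new_texture, 1.0) @ w
```

    Candidate textures whose predicted scores are high against all six adjectives would then be selected for manufacture and re-rating.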

    Enlightened Romanticism: Mary Gartside’s colour theory in the age of Moses Harris, Goethe and George Field

    The aim of this paper is to evaluate the work of Mary Gartside, a British female colour theorist, active in London between 1781 and 1808. She published three books between 1805 and 1808. In chronological and intellectual terms Gartside can cautiously be regarded as an exemplary link between Moses Harris, who published a short but important theory of colour in the second half of the eighteenth century, and J.W. von Goethe’s highly influential Zur Farbenlehre, published in Germany in 1810. Gartside’s colour theory was published privately under the disguise of a traditional watercolouring manual, illustrated with stunning abstract colour blots. Until well into the twentieth century, she remained the only woman known to have published a theory of colour. In contrast to Goethe and other colour theorists in the late eighteenth and early nineteenth centuries, Gartside was less inclined to follow the anti-Newtonian attitudes of the Romantic movement

    All-Optical tunability of metalenses infiltrated with liquid crystals

    Metasurfaces have been extensively engineered to produce a wide range of optical phenomena, allowing unprecedented control over the propagation of light. However, they are generally designed as single-purpose devices without a modifiable post-fabrication optical response, which can be a limitation to real-world applications. In this work, we report a nanostructured planar fused silica metalens permeated with a nematic liquid crystal (NLC) and gold nanoparticle solution. The physical properties of embedded NLCs can be manipulated with the application of external stimuli, enabling reconfigurable optical metasurfaces. We report all-optical, dynamic control of the metalens optical response resulting from thermo-plasmonic induced changes of the NLC solution associated with the nematic-isotropic phase transition. A continuous and reversible tuning of the metalens focal length is experimentally demonstrated, with a variation of 80 µm (0.16% of the 5 cm nominal focal length) along the optical axis. This is achieved without direct mechanical or electrical manipulation of the device. The reconfigurable properties are compared with corroborating numerical simulations of the focal length shift and exhibit close correspondence.

    Comparison of depth cameras for three-dimensional Reconstruction in Medicine

    KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel Realsense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Thus, this suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor
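    A simple accuracy metric of the kind implied by scanning machined reference cylinders, the RMS radial deviation of a reconstructed point cloud from the known cylinder radius, might look like this (all data synthetic; the study's actual evaluation protocol may differ):

```python
import numpy as np

def cylinder_rms_error(points, radius):
    """RMS radial deviation of an (N, 3) point cloud from an ideal
    cylinder of known radius centred on the z-axis -- a basic accuracy
    measure for scans of a machined reference cylinder."""
    r = np.hypot(points[:, 0], points[:, 1])
    return float(np.sqrt(np.mean((r - radius) ** 2)))

# synthetic noise-free scan of a 50 mm-radius cylinder
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
pts = np.column_stack([50 * np.cos(theta),
                       50 * np.sin(theta),
                       np.linspace(0, 100, 360)])
assert cylinder_rms_error(pts, 50.0) < 1e-9
```

    Comparing this error for point clouds captured with the time-of-flight and stereoscopic cameras would quantify the accuracy difference the study reports.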

    Towards Predictive Rendering in Virtual Reality

    The goal of generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials.
The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real-time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation
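    BTF compression of the kind the thesis relies on is commonly realised as a low-rank matrix factorisation; a minimal truncated-SVD sketch on a synthetic low-rank "BTF matrix" (texels by view/light samples), not the thesis's specific scheme:

```python
import numpy as np

def compress_btf(btf_matrix, rank):
    """Rank-r truncated SVD of a BTF matrix (texels x view/light
    samples), yielding two small factor matrices in place of the
    full data -- the basic factorisation-style BTF compression."""
    u, s, vt = np.linalg.svd(btf_matrix, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def decompress_btf(us, vt):
    """Reconstruct the (approximate) BTF matrix from its factors."""
    return us @ vt

rng = np.random.default_rng(0)
# synthetic exactly-rank-5 'BTF': 64 texels x 40 view/light samples
low_rank = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 40))
us, vt = compress_btf(low_rank, rank=5)
assert np.allclose(decompress_btf(us, vt), low_rank)
```

    At render time only the per-texel and per-direction factors are needed, which is what makes real-time evaluation of the compressed BTF feasible.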
