425 research outputs found

    Tissue thickness measurement tool for craniofacial reconstruction

    Craniofacial reconstruction is a method of recreating the appearance of the face on the skull of a deceased individual for identification purposes. Older clay-based methods of reconstruction are inaccurate, time-consuming, and inflexible. The tremendous increase in computer processing power and rapid strides in visualization can be used to perform the reconstruction digitally, saving time and providing greater accuracy and flexibility without requiring a skilled modeler.

    This thesis introduces our approach to computerized 3D craniofacial reconstruction, organized in three phases. The first phase generates a facial tissue thickness database. In the second phase, this database, together with a 3D facial components database, is used to generate a generic facial mask that is draped over the skull to recreate the facial appearance. In the third phase, the resulting face is identified against a database of images.

    Tissue thickness measurements are necessary to generate the facial model over the skull, and the thesis emphasis is on this first phase. An automated facial tissue thickness measurement tool (TTMT) has been developed to populate the database.
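    The core measurement the abstract describes, taking tissue depth from a skull landmark out to the skin surface, can be sketched as follows. This is an illustrative stand-in, not the TTMT implementation; the function name, the ray tolerance, and the toy data are assumptions.

```python
import numpy as np

def tissue_thickness(skull_pt, normal, skin_pts, tol=0.5):
    """Thickness at a skull landmark: distance to the skin sample that lies
    closest to the outward normal ray (brute-force, illustrative only)."""
    n = normal / np.linalg.norm(normal)
    v = skin_pts - skull_pt                        # vectors to skin samples
    along = v @ n                                  # projection on the normal
    off = np.linalg.norm(v - along[:, None] * n, axis=1)
    candidates = along[(along > 0) & (off < tol)]  # near the ray, in front
    return candidates.min() if candidates.size else None

skull = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
skin = np.array([[0.1, 0.0, 6.0], [3.0, 3.0, 6.0]])
print(tissue_thickness(skull, normal, skin))  # 6.0
```

    In practice the skin surface would be a dense mesh from a CT scan and the query a proper ray-mesh intersection; the projection-plus-tolerance test above is only the simplest approximation of that idea.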

    A survey on personal computer applications in industrial design process

    Thesis (Master)--Izmir Institute of Technology, Industrial Design, Izmir, 1999. Includes bibliographical references (leaves: 157-162). Text in English; abstract in Turkish and English. xii, 194 leaves.

    In this thesis, computer-aided design systems are studied from the industrial designer's point of view. The study covers industrial design processes, computer-aided design systems, and their integration. The technical issues are examined first, including current hardware and software technologies, and the purely technical concepts are supported with real-world examples and graphics. Several important design software packages are examined, through personal practice or literature research, depending on the availability of the software. Finally, the thesis includes a case study: a 17" LCD computer monitor designed with a set of graphic programs, including two-dimensional and three-dimensional packages.

    Keywords: computers, industrial design methods, design software, computer aided design

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this huge state of the art, and elicits the relation between each single algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective.

    Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044

    From Fantasy to Virtual Reality: An Exploration of Modeling, Rigging and Animating Characters for Video Games

    In the last few decades, video games have quickly become one of the most popular forms of entertainment around the world. This can be linked to the improvement of computer systems and graphics, which now allow for authentic and highly detailed computer-generated characters. This project examines how these characters are modeled and developed. The examination of game characters entails a brief history of video games and their aesthetics. The foundations of character design are discussed, and 3D modeling of a character is explored in detail. Finally, rigging, or skeleton placement, is investigated in order to animate the characters designed for this study. The result is two animated characters, which can be incorporated into several current and popular game engines. By the end of this paper, the reader should have a fundamental understanding of how a video game character is designed, modeled, rigged, and animated.

    The Effects of Object Shape, Fidelity, Color, and Luminance on Depth Perception in Handheld Mobile Augmented Reality

    Depth perception of objects can greatly affect a user's experience of an augmented reality (AR) application. Many AR applications require depth matching of real and virtual objects and can be influenced by depth cues. Color and luminance are depth cues that have traditionally been studied in two-dimensional (2D) objects. However, there is little research investigating how the properties of three-dimensional (3D) virtual objects interact with color and luminance to affect depth perception, despite the substantial use of 3D objects in visual applications. In this paper, we present the results of a paired comparison experiment that investigates the effects of object shape, fidelity, color, and luminance on depth perception of 3D objects in handheld mobile AR. The results of our study indicate that bright colors are perceived as nearer than dark colors for a high-fidelity, simple 3D object, regardless of hue. Additionally, bright red is perceived as nearer than any other color. These effects were not observed for a low-fidelity version of the simple object or for a more complex 3D object. High-fidelity objects had more perceptual differences than low-fidelity objects, indicating that fidelity interacts with color and luminance to affect depth perception. These findings reveal how the properties of 3D models influence the effects of color and luminance on depth perception in handheld mobile AR and can help developers select colors for their applications.

    Comment: 9 pages, in proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 202

    Image-based Material Editing

    Photo editing software allows digital images to be blurred, warped, or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry.

    We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials.

    To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust against noise in individual exposures. This ensures the removal of any noise that could adversely affect the shape recovery of the objects. We also present an algorithm to automatically generate alpha mattes. This algorithm requires as input two images, one where the object is in focus and one where the background is in focus, and automatically produces an approximate matte indicating which pixels belong to the object. The result is then improved by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
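    The first step of the pipeline, an approximate depth map smoothed with a bilateral filter and differentiated into surface normals, can be sketched in NumPy. This is a minimal illustration of the idea, not the authors' implementation; the filter parameters and the luminance-as-depth proxy are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: weights combine spatial and range distance."""
    h, w = img.shape
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def normals_from_depth(depth):
    """Surface normals from depth gradients."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Luminance as a crude depth proxy, then smooth and differentiate.
img = np.outer(np.linspace(0.0, 1.0, 16), np.ones(16))  # toy gradient "image"
depth = bilateral_filter(img)
normals = normals_from_depth(depth)
print(normals.shape)  # (16, 16, 3)
```

    The resulting normal field is what the subsequent steps (texturing, background refraction, BRDF shading) consume; a production version would use a vectorized or GPU bilateral filter rather than this double loop.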

    Research on 3D reconstruction based on 2D face images.

    3D face reconstruction is a popular research area in the field of computer vision, with a wide range of applications in fields such as animation design, virtual reality, medical guidance, and face recognition. Current commercial 3D face reconstruction generally relies on large image-scanning equipment that fuses multiple sensor images. However, this approach requires manual modelling, which is costly in time and money, and the equipment is expensive, making it unpopular in practical applications. Compared to 3D face construction from multiple images, the single-image approach reduces computational time and economic cost, is relatively simple to implement, and does not require specific hardware. Therefore, this dissertation focuses on the single-image approach and contributes in terms of research novelty and practical use. The main work is as follows.

    A unique pre-processing pipeline is designed to separate face alignment from face reconstruction. The Active Shape Model (ASM) algorithm is used for face alignment to detect the facial feature points in the image. The face data are then pose-corrected so that the corrected face better matches the face pose of the UV position map. The UV coordinates are used to map the 3D information onto the 2D image, creating a UV-3D mapping map. To enhance the effect, the dissertation also crops the face so that face data fill as much of the space as possible, and expands the face dataset using rotation, scaling, panning, and noise addition.

    The neural network model is improved by using residual learning to train it incrementally, emphasizing the reconstruction of deep information. Face data characteristics are first extracted using encoding and decoding layers, and face features are then learned using the residual learning layer. Compared with the previous algorithm, we achieved a considerable lead on the 300W-LP face dataset, with a 35% reduction in NME error accumulation over the RPN algorithm. Based on the proposed pre-processing methods and residual structures, the experimental results show good performance on 3D reconstruction of faces. The end-to-end deep learning approach achieves better reconstruction quality and accuracy than traditional model-based face reconstruction methods.
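    The residual learning idea the dissertation builds on, where each block adds a learned correction F(x) to its input, can be shown with a toy NumPy block. This is a generic residual block, not the dissertation's network; the dense stand-in for convolution and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_like(x, w):
    # stand-in for a convolution: a dense map keeps the sketch dependency-free
    return np.maximum(x @ w, 0.0)  # ReLU activation

def residual_block(x, w1, w2):
    """y = x + F(x): the block learns only the residual F, which eases
    training of deep reconstruction networks."""
    return x + conv_like(conv_like(x, w1), w2)

d = 8
x = rng.normal(size=(4, d))
w1 = rng.normal(size=(d, d)) * 0.1
w2 = rng.normal(size=(d, d)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8)
```

    The identity shortcut means an untrained block (F ≈ 0) passes its input through unchanged, which is why residual layers can be stacked and trained incrementally without degrading earlier features.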

    Railway bridge geometry assessment supported by cutting-edge reality capture technologies and 3D as-designed models

    Documentation of structural visual inspections is necessary for monitoring, maintenance, and decisions about rehabilitation and structural strengthening. In recent times, close-range photogrammetry (CRP) based on unmanned aerial vehicles (UAVs) and terrestrial laser scanning (TLS) have greatly improved the survey phase. These technologies can be used independently or in combination to provide a 3D as-is image-based model of the railway bridge. In this study, TLS captured the side and bottom sections of the deck, while CRP-based UAV surveying captured the side and top sections of the deck and the track. A combination of post-processing techniques enabled the merging of the TLS and CRP models, resulting in an accurate 3D representation of the complete railway bridge deck. Additionally, a 3D as-designed model was developed based on the design plans of the bridge. The as-designed model is compared to the as-is model through 3D digital registration. The comparison allows the detection of dimensional deviations and surface misalignments. The results reveal slight deviations in the structural dimensions, with a global average value of 9 mm.

    The authors would like to thank the financial support from: Base Funding UIDB/04708/2020 and Programmatic Funding UIDP/04708/2020 of CONSTRUCT ("Instituto de I&D em Estruturas e Construções"), as well as ISISE (UIDB/04029/2020) and ARISE (LA/P/0112/2020), funded by national funds through FCT/MCTES (PIDDAC); and the doctoral grant UI/BD/150970/2021 (to Rafael Cabral) from the Portuguese Science Foundation, FCT/MCTES. Furthermore, this work is framed within the project "Intelligent structural condition assessment of existing steel railway bridges", financed by the bilateral agreement FCT-NAWA (2022-23), as well as the project "FERROVIA 4.0", reference POCI-01-0247-FEDER-046111, co-funded by the European Regional Development Fund (ERDF) through the Operational Program for Competitiveness and Internationalization (COMPETE 2020) and the Lisbon Regional Operational Program (LISBOA 2020), under the PORTUGAL 2020 Partnership Agreement, and "NEXUS: Innovation Pact Digital and Green Transition (Transports, Logistics and Mobility)", nr. C645112083-00000059, investment project nr. 53, financed by the Recovery and Resilience Plan (PRR) and by the European Union (NextGeneration EU).
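    At its core, the as-is versus as-designed comparison reduces to computing point-to-model deviations after registration. A minimal NumPy sketch, assuming already-registered point clouds and a brute-force nearest-neighbour search (not the software used in the study):

```python
import numpy as np

def cloud_deviation(as_is, as_designed):
    """For each surveyed point, distance to the nearest as-designed point;
    the mean is the global dimensional deviation (brute force, O(n*m))."""
    diff = as_is[:, None, :] - as_designed[None, :, :]
    d = np.linalg.norm(diff, axis=2).min(axis=1)
    return d.mean(), d.max()

# Toy example: a designed planar deck vs. a survey offset by 9 mm in z.
g = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
designed = np.hstack([g, np.zeros((len(g), 1))])       # metres
surveyed = designed + np.array([0.0, 0.0, 0.009])      # 9 mm offset
mean_dev, max_dev = cloud_deviation(surveyed, designed)
print(round(mean_dev * 1000, 1))  # 9.0
```

    Real clouds have millions of points, so a k-d tree query would replace the pairwise distance matrix, and deviations are usually measured against the as-designed mesh surface rather than its vertices.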

    Doctor of Philosophy

    Dissertation. Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. The technique also resolves finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analysis of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior of Graphics Processing Unit (GPU) framebuffer loops to generate random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
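    The multichannel intermixing step can be illustrated with a simple additive compositing sketch. This is a generic stand-in, not FluoRender's renderer; the channel colors, weights, and clipping rule are assumptions.

```python
import numpy as np

def intermix(channels, colors, weights):
    """Blend per-channel scalar intensities into one RGB image by weighted
    additive mixing, a simple stand-in for multichannel compositing."""
    rgb = np.zeros(channels[0].shape + (3,))
    for ch, col, w in zip(channels, colors, weights):
        rgb += w * ch[..., None] * np.asarray(col, dtype=float)
    return np.clip(rgb, 0.0, 1.0)  # keep displayable range

# Two toy 4x4 channels: one stained "red", one stained "green".
red_ch = np.full((4, 4), 0.8)
green_ch = np.full((4, 4), 0.5)
img = intermix([red_ch, green_ch], [(1, 0, 0), (0, 1, 0)], [1.0, 1.0])
print(img[0, 0])  # [0.8 0.5 0. ]
```

    A renderer like FluoRender applies this kind of mixing per sample along each view ray, after the 2D transfer function has mapped intensity and gradient magnitude to color and opacity; the additive rule above is only the simplest such combination.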
