
    2D and 3D surface image processing algorithms and their applications

    This doctoral dissertation develops algorithms for three applications: 2D image segmentation for detecting solar filament disappearances, 3D mesh simplification, and 3D image warping for pre-surgery simulation. Filament area detection in solar images is an image segmentation problem; a combined thresholding and region-growing method is proposed and applied to it. Based on the detected filament areas, filament disappearances are reported in real time. The solar images from 1999 are processed with the proposed system, and three statistical results on the filaments are presented. 3D images can be obtained by passive and active range sensing. An image registration process finds the transformation between each pair of range views; to model an object, a common reference frame into which all views can be transformed must be defined. After registration, the range views are integrated into a non-redundant model, and optimization is necessary to obtain a complete 3D model. A single surface representation can better fit the data; it may be further simplified for efficient rendering, storage, and transmission, or converted to other formats. This work proposes an efficient algorithm for the mesh simplification problem, approximating an arbitrary mesh by a simplified mesh. The algorithm uses a root-mean-square distance error metric to estimate facet curvature; the two vertices of an edge and their surrounding vertices determine the average plane. The simplification results are of high quality and the computation is fast, and the algorithm is compared with six other major simplification algorithms. Image morphing refers to any method that gradually and continuously deforms a source image into a target image while producing the in-between models; image warping is a continuous deformation of a graphical object, and a morphing process is usually composed of warping and interpolation. This work develops a method and application for pre-surgical planning based on direct manipulation of free-form deformations. The developed user interface provides a friendly interactive tool for plastic surgery, with nose augmentation surgery presented as an example. Displacement vectors and lattices of different resolutions are used to obtain various deformation results, and the volume change of the model during deformation is accounted for using a simplified skin-muscle model.
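
    A minimal sketch of the combined thresholding and region-growing segmentation described above, assuming a grayscale solar image as a NumPy array in which filaments appear dark; the two threshold values and 4-connectivity are illustrative choices, not the dissertation's exact parameters:

```python
import numpy as np
from collections import deque

def segment_filaments(image, seed_thresh=60, grow_thresh=90):
    """Detect dark filament regions: seed by hard threshold, then grow."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for y, x in zip(*np.where(image < seed_thresh)):  # dark seed pixels
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        queue = deque([(y, x)])
        while queue:  # BFS region growing with a looser threshold
            cy, cx = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = cy + dy, cx + dx
                if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                        and image[ny, nx] < grow_thresh):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels  # 0 = background, 1..current = filament regions
```

    Tracking the labeled areas frame to frame would then let a monitoring system report a disappearance when a region's area drops to zero.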

    Uses of uncalibrated images to enrich 3D models information

    The decrease in the cost of semi-professional digital cameras makes it possible for anyone to acquire a very detailed description of a scene in a very short time. Unfortunately, interpreting the images is usually quite hard, due to the amount of data and the lack of robust, generic image analysis methods. Nevertheless, if a geometric description of the depicted scene is available, it becomes much easier to extract information from the 2D data, and this information can be used to enrich the quality of the 3D data in several ways. This thesis shows several uses of sets of unregistered images for the enrichment of 3D models. In particular, two fields of application are presented: color acquisition, projection, and visualization; and geometry modification. Regarding color management, several practical and inexpensive solutions to the main issues in this field are presented, and real applications, mainly related to Cultural Heritage, show that the proposed methods are robust and effective. In the context of geometry modification, two approaches to modifying existing 3D models are presented. In the first, information extracted from images is used to deform a dummy model into an accurate 3D head model, used for simulation in the context of three-dimensional audio rendering. The second is a method for filling holes in 3D models using registered images that depict a pattern projected onto the real object. Finally, indications about possible future work in all the presented fields are given, to delineate the developments of this promising direction of research.
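
    A minimal sketch of the color-projection step described above, assuming a mesh registered to an image through a known 3x4 camera matrix P; visibility and occlusion handling, which a real pipeline needs, are deliberately omitted:

```python
import numpy as np

def project_colors(vertices, image, P):
    """Assign each 3D vertex the color of its projection in a registered image.

    vertices: (N, 3) array; image: (H, W, 3) array; P: (3, 4) camera matrix.
    Hidden vertices are not culled here; in practice they must be.
    """
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    proj = homo @ P.T                                          # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]                            # pixel coords
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return image[v, u]  # (N, 3) per-vertex colors
```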

    Reconstructing head models from photographs for individualized 3D-audio processing

    Visual fidelity and interactivity are the main goals of Computer Graphics research, but recently audio has also assumed an important role. Binaural rendering can provide extremely pleasing and realistic three-dimensional sound, but to achieve the best results it is necessary either to measure or to estimate the individual Head Related Transfer Function (HRTF). This function is strictly related to the particular features of the listener's ears and face. Recent sound scattering simulation techniques can calculate the HRTF starting from an accurate 3D model of a human head. Hence, the use of binaural rendering on a large scale (e.g. video games, entertainment) could depend on the possibility of producing a sufficiently accurate 3D model of a human head from the smallest possible input. In this paper we present a completely automatic system that produces a 3D model of a head from simple input data (five photos and some key points indicated by the user). The geometry is generated by extracting information from the images and deforming a 3D dummy accordingly to reproduce the user's head features. The system proves to be fast, automatic, robust, and reliable: geometric validation and preliminary assessments show that it can be accurate enough for HRTF calculation.
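
    One standard way to deform a template ("dummy") mesh so that its landmarks match points extracted from photographs is a radial-basis-function warp; the sketch below assumes corresponding landmark arrays and a Gaussian kernel, which may differ from the paper's actual deformation scheme:

```python
import numpy as np

def rbf_warp(template_vertices, src_landmarks, dst_landmarks, sigma=0.1):
    """Deform a template mesh so src_landmarks move onto dst_landmarks.

    Interpolates the landmark displacements over all vertices with
    Gaussian radial basis functions centered on the source landmarks.
    """
    disp = dst_landmarks - src_landmarks              # (K, 3) displacements
    d = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None], axis=-1)
    K = np.exp(-(d / sigma) ** 2)                     # (K, K) kernel matrix
    weights = np.linalg.solve(K + 1e-9 * np.eye(len(K)), disp)
    d_v = np.linalg.norm(template_vertices[:, None] - src_landmarks[None],
                         axis=-1)                     # (N, K) distances
    return template_vertices + np.exp(-(d_v / sigma) ** 2) @ weights
```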

    An interactive framework for component-based morphing

    Ph.D. (Doctor of Philosophy)

    Image Based View Synthesis

    This dissertation deals with the image-based approach to synthesizing a virtual scene from sparse images or a video sequence, without the use of 3D models. In our scenario, a real dynamic or static scene is captured in a set of uncalibrated images from different viewpoints. After the geometric transformations between these images are automatically recovered, a series of photo-realistic virtual views can be rendered, and a virtual environment covered by these static cameras can be synthesized. This image-based approach has applications in object recognition, object transfer, video synthesis, and video compression. In this dissertation, I have contributed to several sub-problems related to image-based view synthesis. Before image-based view synthesis can be performed, images need to be segmented into individual objects. Assuming that a scene can be approximately described by multiple planar regions, I have developed a robust and novel approach to automatically extract the set of affine or projective transformations induced by these regions, correctly detect occluded pixels over multiple consecutive frames, and accurately segment the scene into several motion layers. First, a number of seed regions are determined from correspondences in two frames, and the seed regions are expanded and outliers rejected using the graph cuts method integrated with a level set representation. Next, these initial regions are merged into several initial layers according to motion similarity. Third, occlusion-order constraints on multiple frames are exploited; these guarantee that the occluded area increases with temporal order over a short period, and they effectively maintain segmentation consistency over multiple consecutive frames. The correct layer segmentation is then obtained with a graph cuts algorithm, and the occlusions between overlapping layers are explicitly determined. Experimental results demonstrate that the approach is effective and robust. Recovering the geometric transformations among images of a scene is a prerequisite for image-based view synthesis. I have developed a wide baseline matching algorithm to identify correspondences between two uncalibrated images and to determine the geometric relationship between them, such as the epipolar geometry or a projective transformation. In this approach, a set of salient features, edge-corners, is detected to provide robust and consistent matching primitives. Then, based on the Singular Value Decomposition (SVD) of an affine matrix, the search space is quantized into two independent subspaces, one for the rotation angle and one for the scaling factor, and a two-stage affine matching algorithm is used to obtain robust matches between the two frames. Experimental results on a number of wide baseline image pairs demonstrate that the matching method outperforms state-of-the-art algorithms even under significant camera motion, illumination variation, occlusion, and self-similarity.
    Given the wide baseline matches among images, I have developed a novel method for dynamic view morphing, which deals with scenes containing moving objects in the presence of camera motion. The objects can be rigid or non-rigid, and each of them can move in any orientation or direction. The proposed method can generate a series of continuous and physically accurate intermediate views from only two reference images, without any 3D knowledge. The procedure consists of three steps: segmentation, morphing, and post-warping. Given a boundary connection constraint, the source and target scenes are segmented into several layers for morphing. Based on the decomposition of the affine transformation between corresponding points, a physically correct path for post-warping is uniquely determined by the least-distortion method. I have generalized the dynamic scene synthesis problem from simple scenes with only rotation to dynamic scenes containing non-rigid objects; the method handles dynamic rigid or non-rigid objects, including complicated objects such as humans. Finally, I have developed a novel algorithm for tri-view morphing, an efficient image-based method to navigate a scene from only three wide-baseline uncalibrated images without the explicit use of a 3D model. After corresponding points between each pair of images are automatically recovered with the wide baseline matching method, an accurate trifocal plane is extracted from the trifocal tensor implied by the three images. Then, employing a trinocular-stereo algorithm and a barycentric blending technique, an arbitrary novel view is generated to navigate the scene in a 2D space. Furthermore, after self-calibration of the cameras, a 3D model can be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. The view morphing framework has been applied to several interesting applications: 4D video synthesis, automatic target recognition, and multi-view morphing.
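
    Both the wide baseline matcher (which quantizes the affine search space via SVD) and the least-distortion post-warping rest on splitting an affine map into rotation and scale components. A minimal sketch of that decomposition for a 2x2 affine matrix, with the caveat that the dissertation's exact parameterization may differ:

```python
import numpy as np

def affine_rotation_and_scale(A):
    """Decompose a 2x2 affine matrix A = U @ diag(S) @ Vt via SVD.

    U @ Vt is the rotation closest to A, and S holds the two scale
    factors; quantizing (angle, scale) separately shrinks the affine
    search space into two independent subspaces.
    """
    U, S, Vt = np.linalg.svd(A)
    R = U @ Vt                              # nearest pure rotation to A
    theta = np.arctan2(R[1, 0], R[0, 0])    # rotation angle in radians
    return theta, S
```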

    Three dimensional modeling and animation of facial expressions

    Facial expression and animation are important aspects of 3D environments featuring human characters. Such animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed; its pattern functions produce detailed, animated facial expressions, and the system delivers realistic results with fast performance while allowing users to manipulate it directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented: one generates a teardrop that continually changes shape as it drips down the face; the other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face while remaining an individual object. Both methods broaden computer graphics and increase the realism of facial expressions. A new method to automatically place the bones in facial/head models, speeding up the rigging process of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped; the average distance between pairs of vertices is used to place the head bones, and to set the face bones at multiple densities, the mean position of the vertices in each group is measured. The time saved with this method is significant. Finally, a novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach transforms the source model into the target model, which then has the same topology as the source; displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
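
    A minimal sketch of the displacement-vector transfer idea, assuming the topology-alignment step described above has already produced a per-vertex correspondence (here a hypothetical vertex_map array); the constraint enforcement on spatial relationships is omitted:

```python
import numpy as np

def transfer_expression(src_neutral, src_expression, tgt_neutral, vertex_map):
    """Transfer an expression by copying per-vertex displacement vectors.

    src_neutral, src_expression: (N, 3) source mesh in two poses.
    tgt_neutral: (M, 3) target mesh; vertex_map[i] is the index of the
    source vertex corresponding to target vertex i (valid once the two
    topologies have been aligned).
    """
    displacement = src_expression - src_neutral    # (N, 3) expression offsets
    return tgt_neutral + displacement[vertex_map]  # apply mapped offsets
```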

    2D Photo Converter: Modeling 3D Objects from 2D Photos Using OpenGL

    The concept of modeling a 3D object from 2D photos has been widely discussed and researched among computer vision professionals and virtual reality technologists. However, despite the extensive ongoing research and the rapid technological development of the 3D modeling world, a rendering method that simultaneously requires minimal image pre-processing, achieves maximal model realism, and keeps the error percentage low has yet to be established. This report lays out another technique for modeling 3D objects from a 2D photo by analyzing the feasibility and accuracy of evaluating light intensity over the model. The main objective of this study is to propose an alternative 3D modeling technique that uses only the information in a 2D photo; it is hoped that the proposed solution will reduce the cost and time constraints of current 3D modeling systems. The research focuses on bitmap photos and applies the principles of light intensity and relative distance to estimate the depth volume of the model. The application is built with Microsoft Visual C++ 6.0 and uses the OpenGL Application Programming Interface (API). The results of the experiments conducted in this study show that the formula used in the application may not be the best method for producing a 3D model from a 2D photo. Nonetheless, the idea of using light intensity evaluation to produce 3D models could become a new approach in 3D modeling technology, and the framework design and ideas can serve as base research for further development in 3D modeling research and analysis.
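
    A minimal sketch of the intensity-to-depth idea: under the report's working assumption that brighter pixels are nearer, depth is taken as proportional to normalized brightness, yielding a heightfield that could then be rendered (e.g. with OpenGL). The scale factor is illustrative, not the report's formula:

```python
import numpy as np

def intensity_heightfield(image, depth_scale=0.2):
    """Estimate a heightfield from pixel brightness.

    image: (H, W) grayscale array in [0, 255]. Depth is taken as
    proportional to normalized intensity, assuming brighter = nearer.
    Returns one (x, y, z) vertex per pixel, with x and y in [0, 1];
    faces can be built by connecting each 2x2 block of pixels as a quad.
    """
    z = (image.astype(np.float64) / 255.0) * depth_scale
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs / w, ys / h, z], axis=-1).reshape(-1, 3)
```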