Efficient sketch-based 3D character modelling
Sketch-based modelling (SBM) has undergone substantial research over the past two decades. Early work aimed at techniques for modelling architectural and mechanical objects through sketching. With advances in the technology used to design visual effects for film, TV and games, the demand for highly realistic 3D character models has skyrocketed. To allow artists to create 3D character models quickly, researchers have proposed several techniques for efficient character modelling from sketched feature curves. Moreover, several research groups have developed 3D shape databases from which models can be retrieved using sketched inputs. Unfortunately, the current state of the art in sketch-based organic modelling (3D character modelling) still has significant gaps and limitations. To bridge these gaps and improve current sketch-based modelling techniques, this research aims to develop an approach that allows direct and interactive modelling of 3D characters from sketched feature curves, and also makes use of 3D shape databases to guide artists towards their desired models. The research involved finding a fusion of 3D shape retrieval, shape manipulation and shape reconstruction/generation techniques, backed by an extensive literature review, experimentation and results. The outcome is a novel and improved technique for sketch-based modelling, together with a software interface that allows artists to create realistic 3D character models quickly and easily, with comparatively little effort and learning. The proposed work provides tools to draw 3D shape primitives and manipulate them using simple gestures, which leads to a better modelling experience than existing state-of-the-art SBM systems.
A new method for generic three dimensional human face modelling for emotional bio-robots
Existing 3D human face modelling methods face difficulties in applying flexible control over all facial features and in generating a great number of different face models. The gap between existing methods and the requirements of emotional bio-robot applications motivates the creation of a generic 3D human face model. This thesis proposes and develops two new methods for the research of emotional bio-robots: face detection in complex background images based on a skin colour model, and the establishment of a generic 3D human face model based on NURBS. The contributions of this thesis are:
A new skin-colour-based face detection method has been proposed and developed. The method combines a skin colour model for detecting skin regions with geometric rules for distinguishing faces among the detected regions. Compared with previous methods, it achieved a better detection rate of 86.15% and a detection speed of 0.4-1.2 seconds without any training datasets.
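The skin-region stage of such a detector can be sketched as a chrominance threshold test. The thesis's exact colour model and geometric rules are not reproduced here; the YCbCr ranges below are commonly cited values, assumed purely for illustration.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of candidate skin pixels.

    Illustrative sketch only: the thresholds are widely used YCbCr
    skin-chrominance ranges, not the thesis's fitted model.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr chrominance (luma is not needed here)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Assumed skin chrominance window; a real detector would follow this
    # with geometric rules (aspect ratio, hole count) on each region.
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

A full pipeline would then label connected components of the mask and apply the geometric face rules to each region.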
A generic 3D human face modelling method has been proposed and developed. This generic parametric face model offers flexible control over all facial features and can generate various face models for different applications. It includes:
The segmentation of a human face into 21 feature surfaces bounded by 34 boundary curves. This feature-based segmentation enables independent manipulation of different geometrical regions of the human face.
The NURBS curve face model and the NURBS surface face model. Both models are built on cubic NURBS reverse computation. The elements of the curve and surface models can be manipulated through their parameters, obtained by NURBS reverse computation, to change the appearance of the models.
A new 3D human face modelling method has been proposed and implemented based on bi-cubic NURBS, through analysis of the characteristic features and boundary conditions of NURBS techniques. The model can be manipulated through control points on the NURBS facial features to build specific face models of any appearance and to simulate dynamic facial expressions for various applications such as emotional bio-robots, aesthetic surgery, film and games, and crime investigation and prevention.
Modelling and Animation using Partial Differential Equations: geometric modelling and computer animation of virtual characters using elliptic partial differential equations
This work addresses various applications pertaining to the design, modelling and animation of parametric surfaces produced via the PDE method, which is based on elliptic partial differential equations (PDEs). Compared with traditional surface generation techniques, the PDE method is an effective technique that can represent complex three-dimensional (3D) geometries in terms of a relatively small set of parameters. A PDE-based surface is produced from a set of pre-configured curves that serve as the boundary conditions for solving a number of PDEs. An important advantage of this method is that most of the information required to define a surface is contained at its boundary; thus, complex surfaces can be computed using only a small set of design parameters.
To exploit the advantages of this methodology, various applications were developed, ranging from the interactive design of aircraft configurations to the animation of facial expressions in a human-computer interaction system that utilises an artificial intelligence (AI) bot for real-time conversation. Additional applications are presented for generating cyclic motions of a PDE-based human character integrated in a Computer-Aided Design (CAD) package, as well as techniques for describing a given mesh geometry by the set of boundary conditions required to evaluate the PDE method. Each methodology presents a novel approach for interacting with parametric surfaces obtained by the PDE method, owing to the several advantages this surface generation technique offers. Additionally, each application developed in this thesis focuses on a specific target that efficiently delivers various operations in the design, modelling and animation of such surfaces.
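The boundary-driven idea can be sketched numerically. This is a hedged stand-in: the PDE method described above solves a fourth-order elliptic (Bloor-Wilson-type) equation, often via Fourier-series solutions with derivative boundary conditions; the second-order Laplace relaxation below only illustrates how boundary curves alone determine an interior surface.

```python
import numpy as np

def pde_surface(top, bottom, n_v=20, iters=2000):
    """Blend two boundary curves into a surface by relaxing Laplace's
    equation on each coordinate over a (v, u) parameter grid.

    Simplified second-order illustration of a boundary-defined surface;
    the actual PDE method uses a fourth-order equation and also imposes
    derivative conditions at the boundary curves.
    """
    top = np.asarray(top, float)        # boundary curve at v = 0, shape (n_u, 3)
    bottom = np.asarray(bottom, float)  # boundary curve at v = 1, shape (n_u, 3)
    # Initialise with a linear blend; rows 0 and -1 (and the side columns)
    # stay fixed as boundary conditions during relaxation.
    X = np.linspace(0, 1, n_v)[:, None, None] * (bottom - top)[None] + top[None]
    for _ in range(iters):
        X[1:-1, 1:-1] = 0.25 * (X[:-2, 1:-1] + X[2:, 1:-1]
                                + X[1:-1, :-2] + X[1:-1, 2:])
    return X  # shape (n_v, n_u, 3)
```

Because the whole interior is determined by the boundary data, editing a boundary curve re-shapes the entire patch, which is the property the abstract highlights.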
Characterization of multiphase flows integrating X-ray imaging and virtual reality
Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and the difficulty of measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolutions, X-ray imaging generally has poor temporal resolution.
This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow for very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier-camera setup, which was shown to be capable of imaging at a rate of at least 1000 FPS.
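The role of calibration in removing acquisition-parameter sensitivity can be sketched with a simple normalisation. This is an assumed illustration, not the thesis's procedure: it presumes Beer-Lambert attenuation and two reference images (vessel fully filled and fully empty), so that tube settings and detector gain cancel out of the ratio.

```python
import numpy as np

def void_fraction(I, I_full, I_empty):
    """Normalise an X-ray intensity image to an approximate void (gas) fraction.

    Assumes Beer-Lambert behaviour, I = I0 * exp(-mu * L): the source
    intensity I0 and detector gain divide out of both log ratios, which
    is why calibrated results are insensitive to acquisition parameters.
    """
    I, I_full, I_empty = (np.asarray(a, float) for a in (I, I_full, I_empty))
    att = np.log(I / I_full)              # measured change in log-attenuation
    att_range = np.log(I_empty / I_full)  # span from fully filled to empty
    return np.clip(att / att_range, 0.0, 1.0)
```

A pixel attenuating exactly halfway between the two references maps to 0.5 regardless of the (unknown) source intensity.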
While advances in measurement techniques are one part of improving multiphase flow characterization, the challenge extends beyond measurement. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system that uses the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. The system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications; multiple visualization applications can therefore be built, each optimized for a specific type of data, while all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a cave automatic virtual environment (CAVE) to present scientists with multiphase flow measurements in an intuitive and inherently three-dimensional manner.
A robust framework for medical image segmentation through adaptable class-specific representation
Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has grown over the last decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo-section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework, known as ACSR (Adaptable Class-Specific Representation), is developed first for 2D colour cryo-section segmentation. This is achieved through a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo-section data and, subsequently, segmentation of single- and multi-channel greyscale MRI data; for the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground-truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo-section and MRI data compare favourably with existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
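The Learning Vector Quantization component can be sketched as the standard LVQ1 update rule. This is a generic illustration: ACSR couples LVQ with class-specific features from the PGA, and the prototype initialisation from the user's visual input is assumed to be given here.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
    """Basic LVQ1: move the winning prototype toward same-class samples
    and away from other-class samples. Generic sketch, not the ACSR-specific
    variant described in the abstract."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    W = np.asarray(prototypes, float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(W - X[i], axis=1)
            k = int(np.argmin(d))            # best-matching prototype
            step = lr * (X[i] - W[k])
            # Attract if labels agree, repel otherwise
            W[k] += step if proto_labels[k] == y[i] else -step
    return W

def classify_lvq(X, W, proto_labels):
    """Label each sample (e.g. a pixel's feature vector) by its nearest prototype."""
    X = np.asarray(X, float)
    idx = np.argmin(np.linalg.norm(W[None] - X[:, None], axis=2), axis=1)
    return [proto_labels[i] for i in idx]
```

In a segmentation setting, each pixel's neighbourhood features would be classified this way, yielding one label (tissue class) per pixel.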
Hand gesture recognition using deep learning neural networks
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Human-Computer Interaction (HCI) is a broad field involving different types of interaction, including gestures. Gesture recognition concerns non-verbal motions used as a means of communication in HCI. A system may be utilised to identify human gestures and convey information for device control; this represents a significant field within HCI involving device interfaces and users. The aim of gesture recognition is to record gestures that are formed in a certain way and then detected by a device such as a camera. Hand gestures can be used as a form of communication for many different applications, including by people with disabilities, such as those with hearing impairments, speech impairments and stroke patients, to communicate and fulfil their basic needs.
Various studies have previously been conducted on hand gestures, proposing different techniques for hand gesture experiments. For image processing there are multiple tools for extracting image features, and Artificial Intelligence offers varied classifiers for different types of data. 2D and 3D hand gestures require an effective algorithm to extract features and classify various small gestures and movements. This research addresses this issue using different algorithms. To detect 2D or 3D hand gestures, this research applied image processing tools such as Wavelet Transforms (WT) and Empirical Mode Decomposition (EMD) to extract image features, and used an Artificial Neural Network (ANN) classifier to train and classify the data, alongside Convolutional Neural Networks (CNNs). These methods were examined in terms of multiple parameters, such as execution time, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood, negative likelihood, receiver operating characteristic (ROC), area under the ROC curve and root mean square. This research presents four original contributions in the field of hand gestures. The first contribution is an implementation of two experiments using 2D hand gesture video in which ten different gestures are detected at short and long distances using an iPhone 6 Plus with 4K resolution; the experiments use WT and EMD for feature extraction, with ANN and CNN for classification. The second contribution comprises 3D hand gesture video experiments in which twelve gestures are recorded using a holoscopic imaging system camera. The third contribution pertains to experimental work carried out to detect seven common hand gestures. Finally, disparity experiments were performed using the left and right 3D hand gesture videos to discover disparities. The comparison results show CNN achieving 100% accuracy, higher than the other techniques.
CNN is clearly the most appropriate method to be used in a hand gesture system.
Imam Abdulrahman bin Faisal University
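The evaluation metrics listed in the abstract above all derive from a 2x2 confusion matrix. The definitions below are the standard ones; the per-gesture aggregation used in the thesis is not reproduced here.

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts.

    tp/fp/fn/tn = true/false positives and negatives. Standard textbook
    definitions, assumed to match the metrics named in the abstract.
    """
    sens = tp / (tp + fn)                  # sensitivity (true positive rate)
    spec = tn / (tn + fp)                  # specificity (true negative rate)
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = sens / (1 - spec) if spec < 1 else float('inf')  # positive likelihood
    lr_neg = (1 - sens) / spec             # negative likelihood
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return {'sensitivity': sens, 'specificity': spec, 'ppv': ppv,
            'npv': npv, 'lr+': lr_pos, 'lr-': lr_neg, 'accuracy': acc}
```

Sweeping the decision threshold and plotting sensitivity against (1 - specificity) then yields the ROC curve, whose area is the AUC metric also mentioned above.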