Use of 3D body motion to freeform surface design
This paper presents a novel surface modelling approach that utilises a 3D motion capture system. To design a large-sized surface, a network of splines is first set up. Artists or designers wearing motion markers on their hands can then reshape the splines with their hands; they can literally move their bodies freely to any position while performing their tasks. They can also move their hands in 3D free space, detailing surface characteristics with their gestures. All of these design motions are recorded by the motion capture system and converted into corresponding 3D curves and surfaces. The paper reports this novel surface design method and some case studies.
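The abstract does not give the exact spline formulation, so as a rough sketch one might interpolate a smooth curve through a sequence of captured 3D marker positions with a Catmull-Rom spline (an illustrative assumption, not necessarily the paper's method):

```python
import numpy as np

def catmull_rom(points, samples_per_seg=16):
    """Interpolate a smooth curve through captured 3D marker positions.

    points: (N, 3) array of marker positions in capture order, N >= 4.
    Returns an (M, 3) array of points on the spline (interior knots only).
    """
    P = np.asarray(points, dtype=float)
    curve = []
    t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
    for i in range(1, len(P) - 2):
        p0, p1, p2, p3 = P[i - 1], P[i], P[i + 1], P[i + 2]
        # Standard Catmull-Rom basis (the curve passes through p1 and p2)
        a = 2.0 * p1
        b = p2 - p0
        c = 2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3
        d = -p0 + 3.0 * p1 - 3.0 * p2 + p3
        seg = 0.5 * (a + np.outer(t, b) + np.outer(t**2, c) + np.outer(t**3, d))
        curve.append(seg)
    curve.append(P[-2][None, :])  # close at the last interpolated knot
    return np.vstack(curve)

# Hypothetical captured hand positions
markers = np.array([[0, 0, 0], [1, 2, 0], [3, 3, 1], [5, 2, 1], [6, 0, 0]], float)
path = catmull_rom(markers)
```

Each recorded hand stroke would yield one such curve; a network of them could then be lofted into a surface.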
Compact Bilinear Pooling
Bilinear models have been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine-grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors, enabling end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling, which provides insights into the discriminative power of bilinear pooling and a platform for further research in compact pooling methods. Experiments illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets. Comment: Camera ready version for CVP
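The compact representations rest on sketch-based approximations of the bilinear (outer-product) kernel. A minimal sketch of one such projection, count-sketch/Tensor-Sketch-style pooling of two feature vectors (the dimensions, hash tables and seeds below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x into d dims: out[h[i]] += s[i] * x[i] (signed hashing)."""
    out = np.zeros(d)
    np.add.at(out, h, s * x)
    return out

def tensor_sketch(x, y, h1, s1, h2, s2, d):
    """Sketch of the outer product x (x) y via circular convolution of
    the two count sketches, computed in the Fourier domain."""
    fx = np.fft.fft(count_sketch(x, h1, s1, d))
    fy = np.fft.fft(count_sketch(y, h2, s2, d))
    return np.real(np.fft.ifft(fx * fy))

rng = np.random.default_rng(0)
n, d = 512, 4096                      # input dim -> compact dim
h1, h2 = rng.integers(0, d, n), rng.integers(0, d, n)
s1, s2 = rng.choice([-1.0, 1.0], n), rng.choice([-1.0, 1.0], n)
x = rng.standard_normal(n)
y = rng.standard_normal(n)
approx = tensor_sketch(x, y, h1, s1, h2, s2, d)
```

The inner product of two such sketches is an unbiased estimate of the bilinear kernel <x, x'><y, y'>, which is what lets a few thousand dimensions stand in for the full outer product.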
3D Dynamic Scene Reconstruction from Multi-View Image Sequences
A confirmation report outlining my PhD research plan is presented. The PhD research topic is 3D dynamic scene reconstruction from multiple-view image sequences. Chapter 1 describes the motivation and research aims; an overview of the progress in the past year is included. Chapter 2 is a review of volumetric scene reconstruction techniques, and Chapter 3 is an in-depth description of my proposed reconstruction method. The theory behind the proposed volumetric scene reconstruction method is also presented, including topics in projective geometry, camera calibration and energy minimization. Chapter 4 presents the research plan and outlines the work planned for the next two years.
HSM: A New Color Space used in the Processing of Color Images
Inspired by the techniques painters use to overlap layers of various hues of paint in oil paintings, and by observations of the arrangement of the short- (S), middle- (M) and long- (L) wavelength-sensitive cones of the human retina in the interpretation of colors, this paper proposes a new color space called HSM for the processing of color images. To demonstrate the applicability of the HSM color space, the paper presents the pixel-based segmentation of a digital image into "human skin" or "non-skin", the sketching of a face image, and the pixel-based segmentation of the trumpet flower tree (ype). The performance of the HSM color space in pixel-based segmentation is compared with the HSV, YCbCr and TSL color spaces, while the face-image sketch is also compared with the HSV, YCbCr and TSL color spaces and with the Sobel, Prewitt, Roberts, Canny and Laplacian-of-Gaussian edge detectors. The results demonstrate the potential of the proposed color space.
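The abstract does not specify the HSM transform itself, so the sketch below illustrates pixel-based skin/non-skin segmentation in YCbCr, one of the comparison spaces; the chrominance thresholds are a commonly cited heuristic box, not the paper's values:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Classify each pixel as skin/non-skin with fixed chrominance thresholds.

    The 77 <= Cb <= 127, 133 <= Cr <= 173 box is a widely used heuristic
    range from the skin-detection literature, not the HSM paper's rule.
    """
    ycbcr = rgb_to_ycbcr(np.asarray(rgb, dtype=float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# A skin-toned pixel and a pure-blue pixel
img = np.array([[[224, 172, 138], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Per-pixel rules of exactly this shape, expressed in whichever color space, are what the HSM comparison evaluates.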
3D Model Assisted Image Segmentation
The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for proces
Calipso: Physics-based Image and Video Editing through CAD Model Proxies
We present Calipso, an interactive method for editing images and videos in a physically-coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly minimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency. Comment: 11 page
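The joint alignment energy is not spelled out in the abstract; as a hedged illustration, the rigid part of such a CAD-to-scene alignment can be solved in closed form with the classic Kabsch/Procrustes method, assuming point correspondences are given (which the full system would have to establish itself):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Classic Kabsch/Procrustes solution; this covers only the rigid term
    of a joint rigid + non-rigid alignment energy.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(1).standard_normal((20, 3))
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

A non-rigid refinement (e.g. per-vertex deformation regularized to preserve the shape's structure) would then be layered on top of this rigid fit.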
Accelerated High-Resolution Photoacoustic Tomography via Compressed Sensing
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates, but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue. A particular example is the planar Fabry-Perot (FP) scanner, which yields high-resolution images but takes several minutes to sequentially map the photoacoustic field on the sensor plane, point by point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion are often highly redundant. We demonstrate that combining variational image reconstruction methods using spatial sparsity constraints with novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase acquisition speed while maintaining good spatial resolution. First, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP scanner and demonstrate the potential of these novel compressed-sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in-vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction methods that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of PAT scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays. Comment: submitted to "Physics in Medicine and Biology"
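As a toy illustration of the variational principle (not the paper's method, which uses a full acoustic forward model and Bregman-enhanced TV), one can recover a 1D piecewise-constant signal from roughly 30% of its samples by gradient descent on a smoothed total-variation objective:

```python
import numpy as np

def tv_reconstruct(mask, y, lam=0.05, eps=1e-3, lr=0.1, iters=4000):
    """Minimize 0.5*||x[mask] - y||^2 + lam * sum sqrt((x[i+1]-x[i])^2 + eps)
    by plain gradient descent. The sqrt(.+eps) term is a smoothed TV penalty,
    a simple stand-in for the non-smooth TV used in the paper."""
    x = np.zeros(mask.size)
    x[mask] = y                       # initialize with the observed samples
    for _ in range(iters):
        grad = np.zeros_like(x)
        grad[mask] = x[mask] - y      # data-fidelity term
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)  # derivative of smoothed |d|
        grad[:-1] -= lam * w          # each difference pulls both endpoints
        grad[1:] += lam * w
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(40), np.ones(40), np.zeros(40)])
mask = rng.random(truth.size) < 0.3   # keep ~30% of the samples
y = truth[mask]
x_hat = tv_reconstruct(mask, y)
```

The TV prior fills the unobserved gaps with flat pieces, which is exactly the low spatio-temporal complexity assumption the abstract relies on; Bregman iterations would additionally restore lost contrast.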
Recommended from our members
An investigation on the framework of dressing virtual humans
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Realistic human models are widely used in a variety of applications. Much research has been carried out on improving the realism of virtual humans in various respects, such as body shape, hair and facial expressions. On most occasions these virtual humans need to wear garments; however, dressing a human model with current software packages [Maya2004] is time-consuming and tedious. Several methods for dressing virtual humans have been proposed recently [Bourguignon2001, Turquin2004, Turquin2007, Wang2003B]. The method of Bourguignon et al. [Bourguignon2001] can only generate a 3D garment contour rather than a 3D surface. The method of Turquin et al. [Turquin2004, Turquin2007] can generate various kinds of garments from sketches, but the garments follow the shape of the body and the sides of a garment look unconvincing because of the use of simple linear interpolation. The method of Wang et al. [Wang2003B] lacks user interactivity, so users have very limited control over the garment shape. This thesis proposes a framework for dressing virtual humans that produces convincing results, overcoming the problems of the work above by using non-linear interpolation, level-set-based shape modification, feature constraints and other techniques. The human models used in this thesis are reconstructed from real human body data obtained with a body scanning system. Semantic information is then extracted from the models to assist in the generation of three-dimensional (3D) garments. The proposed framework allows users to dress virtual humans using garment patterns and sketches. The dressing method is based on semantic virtual humans: a semantic human model is a human body with semantic information represented by a certain structure and a set of body features.
The semantic human body is reconstructed from scanned data of a real human body. After the human model is segmented into six parts, some key features are extracted; these key features are used as constraints for garment construction. Simple 3D garment patterns are generated using sweep and offset techniques. To dress a virtual human, the user simply chooses a garment pattern, which is automatically put on the body at a default position and with a default size. Users can change simple parameters to specify garment sizes by sketching the desired positions on the human body. To enable users to dress virtual humans in their own design styles in an intuitive way, the thesis also proposes an approach for garment generation from user-drawn sketches: users draw sketches directly around a reconstructed human body, and the system generates 3D garments from the drawn strokes. Several techniques for generating 3D garments and dressing virtual humans are proposed; the specific focus of the research lies in the generation of 3D geometric garments, garment shape modification, local shape modification, garment surface processing and decoration creation. A sketch-based interface has been developed that lets users draw a garment contour representing the front-view shape of a garment, from which the system generates a 3D geometric garment surface. To improve the realism of a garment surface, the thesis presents three methods. First, the garment-vertex generation procedure takes key body features as constraints. Second, an optimisation algorithm is run after vertex generation to optimise the positions of the garment vertices. Finally, several mesh-processing schemes are applied to further refine the garment surface. Through this series of processing steps, an elaborate 3D geometric garment surface is obtained. Finally, the thesis proposes several modification and editing methods.
The user-drawn sketches are processed into spline curves, which allow users to modify the existing garment shape by dragging control points to desired positions, making it easy to obtain a more satisfactory garment shape than the existing one. Three decoration tools, including a 3D pen, a brush and an embroidery tool, let users decorate the garment surface with small 3D details such as brand names and symbols. A prototype of the framework has been developed using Microsoft Visual Studio C++, OpenGL and GPU programming.
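The sweep-and-offset pattern step is not detailed in the abstract; a minimal 2D sketch of the offset idea, pushing a body cross-section outward along vertex normals to obtain a looser garment contour (the function, contour and ease parameter are all illustrative, not the thesis's implementation):

```python
import numpy as np

def offset_contour(contour, ease):
    """Push a closed 2D cross-section outward along vertex normals.

    contour: (N, 2) counter-clockwise polygon (a body slice).
    ease: extra distance between body and garment (looseness).
    """
    c = np.asarray(contour, float)
    nxt = np.roll(c, -1, axis=0)
    prv = np.roll(c, 1, axis=0)
    tang = nxt - prv                               # central-difference tangent
    normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)  # outward for CCW
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return c + ease * normals

# Circular "body" slice of radius 1 offset to a looser garment slice
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
body = np.stack([np.cos(theta), np.sin(theta)], axis=1)
garment = offset_contour(body, ease=0.15)
```

Sweeping a stack of such offset slices along the body axis and stitching the rings together yields a simple tubular garment pattern of the kind described above.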