Sketching-based Skeleton Extraction
Articulated character animation can be performed by manually creating and rigging a skeleton inside a 3D mesh model. Such tasks are not trivial, as they require substantial training and practice. Although methods have been proposed for automatic skeleton extraction, they cannot guarantee that the resulting skeleton supports animation according to the user's intended manipulation. We present a sketching-based skeleton extraction method that creates a user-desired skeleton structure for 3D model animation. The method takes user sketches as input and, based on the mesh segmentation of a 3D mesh model, generates a skeleton for articulated character animation.
In our system, we assume that the user sketches bones by roughly following the structure of the mesh model, sketching independently on different regions of the model to create separate bones. Each sketched stroke is projected onto the mesh model so that it becomes the medial axis of its corresponding mesh region as seen from the current viewpoint; we call such a projected stroke a "sketched bone". After pre-processing the sketched bones, we cluster them into groups. This step is critical because sketching can be done from any orientation of the mesh model: to specify the topology of different mesh parts, a user may sketch strokes from several orientations, which can produce duplicate strokes for the same mesh part. The clustering process merges similar sketched bones into a single bone, which we call a "reference bone", based on three criteria: orientation, overlap, and locality.
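The clustering step can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the tolerances and the greedy scheme are assumptions, and the overlap test is folded into the locality check for brevity. Each bone is taken to be a polyline resampled to a fixed number of 3D points:

```python
# Illustrative sketch: merging duplicate sketched bones into reference bones.
# A bone is an (N, 3) polyline; two bones merge when their principal
# directions agree (orientation) and their midpoints are close (locality).
import numpy as np

def principal_direction(bone):
    """Unit direction of a polyline via the leading singular vector."""
    pts = bone - bone.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0]

def similar(a, b, ang_tol=0.9, dist_tol=0.5):
    da, db = principal_direction(a), principal_direction(b)
    oriented = abs(np.dot(da, db)) > ang_tol                  # orientation
    close = np.linalg.norm(a.mean(0) - b.mean(0)) < dist_tol  # locality
    return oriented and close

def cluster_bones(bones):
    """Greedy clustering; each cluster is averaged into one reference bone."""
    clusters = []
    for bone in bones:
        for c in clusters:
            if similar(bone, c[0]):
                c.append(bone)
                break
        else:
            clusters.append([bone])
    return [np.mean(c, axis=0) for c in clusters]
```

Averaging the member polylines pointwise is only one plausible way to form a reference bone; it requires all strokes to share the same sample count.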
Given the reference bones as input, we adopt a mesh segmentation process to assist skeleton extraction. Specifically, we use the reference bones and their seed triangles to segment the input mesh into meaningful parts with a multiple-region growing mechanism. The seed triangles, collected from the reference bones, serve as the initial seeds of the segmentation, and we have designed a new segmentation metric [1] to form a better segmentation criterion. We then compute Level Set Diagrams (LSDs) on each mesh part to extract bones and joints, and construct the final skeleton by connecting the bones extracted from all mesh parts into a single structure.
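A minimal sketch of the multiple-region growing idea, assuming triangles are nodes of an adjacency graph and using plain breadth-first growth in place of the segmentation metric [1]:

```python
# Illustrative sketch (not the thesis implementation): growing mesh segments
# from seed triangles in parallel. Each seed claims unvisited neighbours in
# breadth-first rounds, so all regions grow at a comparable rate.
from collections import deque

def multi_region_growing(adjacency, seeds):
    """adjacency: dict triangle -> list of neighbouring triangles.
    seeds: one seed triangle per desired segment.
    Returns a dict triangle -> segment label."""
    label = {s: i for i, s in enumerate(seeds)}
    frontier = deque((s, i) for i, s in enumerate(seeds))
    while frontier:
        tri, seg = frontier.popleft()
        for nb in adjacency[tri]:
            if nb not in label:      # first segment to reach a triangle wins
                label[nb] = seg
                frontier.append((nb, seg))
    return label
```

In the actual method the growth order would be driven by the segmentation metric rather than hop count.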
Three major steps are involved: optimizing and smoothing bones, generating joints, and forming the skeleton structure. After constructing the skeleton, we propose a new method that combines the Linear Blend Skinning (LBS) technique with Laplacian mesh deformation to perform skeleton-driven animation. Traditional LBS can suffer from self-intersection problems in regions around segmentation boundaries, whereas Laplacian mesh deformation preserves local surface details and can eliminate such artifacts. We therefore use the LBS result as the positional constraint of a Laplacian mesh deformation, which maintains the surface details in segmentation boundary regions.
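The LBS-plus-Laplacian combination can be sketched in a simplified form; the helper names and the reduction to a small least-squares system are assumptions, not the thesis code. The LBS result supplies the positional constraints, while the Laplacian term preserves the differential coordinates of the rest shape:

```python
# Hedged sketch: minimize ||L v' - L v||^2 + w^2 ||v'_c - targets||^2,
# where L v encodes the original surface details and `targets` are the
# LBS-skinned positions of the constrained vertices.
import numpy as np

def lbs(rest, weights, transforms):
    """Linear Blend Skinning: v_i' = sum_j w_ij * (R_j v_i + t_j)."""
    out = np.zeros_like(rest)
    for j, (R, t) in enumerate(transforms):
        out += weights[:, j:j+1] * (rest @ R.T + t)
    return out

def laplacian_deform(rest, L, constrained, targets, w=10.0):
    """Solve the soft-constrained least-squares system for new positions."""
    delta = L @ rest                            # differential coordinates
    C = np.zeros((len(constrained), len(rest)))
    for row, idx in enumerate(constrained):
        C[row, idx] = w
    A = np.vstack([L, C])
    b = np.vstack([delta, w * np.asarray(targets)])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Because the Laplacian term is translation-invariant, the constraints anchor the global position while the details encoded in `delta` survive the deformation.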
This thesis outlines a novel approach to constructing a 3D skeleton model interactively, which can also be used in 3D animation and 3D model matching. The work is motivated by the observation that most existing automatic skeleton extraction methods lack well-positioned joints, while manual methods require too much professional training to create a good skeleton structure. We therefore propose a sketching-based approach that creates a 3D model skeleton and specifies an articulated skeleton with joints. The experimental results show that our method produces better skeletons in terms of joint positions and topological structure.
Style-driven Shape Analysis and Synthesis
In this dissertation I will investigate algorithms that analyze stylistic properties of 3D shapes and automatically synthesize shapes given style specifications. I will start by introducing a structure-transcending method for style similarity evaluation between 3D shapes. Inspired by observations about style similarity in art history literature, we propose an algorithmically computed style similarity measure which identifies style related elements on the analyzed models and collates element-level geometric similarity measurements into an object-level style measure consistent with human perception. To achieve this consistency we employ crowdsourcing to learn the relative perceptual importance of a range of elementary shape distances and other parameters used in our measurement from participant answers to cross-structure style similarity queries. I will then describe an algorithm that utilizes this learned style similarity measure to synthesize 3D models of man-made shapes. The algorithm combines user-specified style, described via an exemplar shape, and functionality, encoded by a functionally different target shape. We transfer the exemplar style to the target via a sequence of compatible element-level operations where the compatibility is a learned metric that estimates the impact of each operation on the edited shape. We use this metric to cast style transfer as a tabu search, which incrementally updates the target shape using compatible operations, progressively increasing its style similarity to the exemplar while strictly maintaining its functionality at each step. Finally I will propose a method for reconstructing 3D shapes following style aspects of given 2D drawings. Our method takes line drawings as input and converts them into surface depth and normal maps from several output viewpoints via a deep convolutional neural network with multi-view encoder-decoder architecture. 
The multi-view maps are then consolidated into a dense, coherent 3D point cloud by solving an optimization problem that fuses depth and normal information across all output viewpoints. The output point cloud is then converted into a polygon mesh representation, which is further fine-tuned to match the input sketch more precisely.
Methods for 3D Geometry Processing in the Cultural Heritage Domain
This thesis presents methods for 3D geometry processing in the context of cultural heritage applications. After a short overview of the relevant basics of 3D geometry processing, the thesis investigates the digital acquisition of 3D models. One particular challenge in this context is difficult surface or material properties of the model to be captured; another is that fully automatic reconstruction, even of models with surface properties suitable for laser range scanners, is not yet completely solved. This thesis presents two approaches to tackle these challenges. One exploits a thorough capture of the object's appearance together with a coarse reconstruction for a concise and realistic object representation, even for objects with problematic surface properties such as reflectivity and transparency. The other concentrates on digitisation via laser range scanners and exploits the 2D colour images that are typically recorded alongside the range images for a fully automatic registration technique. After reconstruction, the captured models are often still incomplete, exhibiting holes and/or regions of insufficient sampling. In addition, holes are often deliberately introduced into a registered model to remove undesired or defective surface parts. To produce a visually appealing model, for instance for visualisation purposes or for prototype or replica production, these holes have to be detected and filled. Although completion is a well-established research field in 2D image processing and many approaches exist for image completion, surface completion in 3D is a fairly new field of research. This thesis presents a hierarchical completion approach that employs and extends successful exemplar-based 2D image processing approaches to 3D and fills detail-equipped surface patches into missing surface regions.
To identify and construct suitable surface patches, self-similarity and coherence properties of the surface context of the hole are exploited. Beyond reconstruction and repair, the thesis also investigates methods for modifying captured models via interactive modelling. Here, modelling is regarded as a creative process, for instance for animation purposes; it is also demonstrated how this creative process can be used to introduce human expertise into the otherwise automatic completion process. This way, reconstructions become feasible even for objects where the data source itself, the object, is incomplete due to corrosion, demolition, or decay.
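The exemplar-based principle that the thesis lifts from 2D to 3D can be illustrated in its original 2D image form. This is a deliberately brute-force sketch (patch size and search strategy are assumptions): each missing pixel is filled from the known patch whose surroundings best match the hole's context.

```python
# Hedged 2D analogue of exemplar-based completion: for every masked pixel,
# scan all fully-known patches, pick the one whose valid context matches
# best, and copy its centre value into the hole.
import numpy as np

def fill_hole(img, mask, half=1):
    """img: 2D float array; mask: True where data is missing."""
    out = img.copy()
    h, w = img.shape
    known = [(y, x) for y in range(half, h - half)
             for x in range(half, w - half)
             if not mask[y-half:y+half+1, x-half:x+half+1].any()]
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not mask[y, x]:
                continue
            ctx = out[y-half:y+half+1, x-half:x+half+1]
            valid = ~mask[y-half:y+half+1, x-half:x+half+1]
            best, best_cost = None, np.inf
            for ky, kx in known:
                patch = img[ky-half:ky+half+1, kx-half:kx+half+1]
                cost = ((patch - ctx)[valid] ** 2).sum()
                if cost < best_cost:
                    best, best_cost = (ky, kx), cost
            out[y, x] = img[best]
    return out
```

The 3D counterpart replaces pixel patches with surface patches and the grid scan with a hierarchical search, which is what makes the extension non-trivial.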
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. In our experiments, compared
to other methods such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
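The consolidation step can be sketched for the simplest case, assuming orthographic output views and omitting the joint depth-normal optimization the paper actually solves:

```python
# Hedged sketch: back-projecting per-view depth maps into one point cloud.
import numpy as np

def depth_map_to_points(depth, view_rotation):
    """Orthographic back-projection: pixel (u, v) with depth d -> 3D point."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Camera-space points: x right, y down, z along the view direction.
    cam = np.stack([us.ravel(), vs.ravel(), depth.ravel()], axis=1).astype(float)
    return cam @ view_rotation.T        # rotate into world space

def fuse_views(depths, rotations):
    """Concatenate back-projected points from all output viewpoints."""
    return np.vstack([depth_map_to_points(d, R)
                      for d, R in zip(depths, rotations)])
```

The paper's optimization additionally weighs each point by the predicted normals to keep the fused cloud coherent, which plain concatenation does not do.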
3D mesh metamorphosis from spherical parameterization for conceptual design
Engineering product design is an information intensive decision-making
process that consists of several phases including design specification
definition, design concepts generation, detailed design and analysis,
and manufacturing. Usually, generating geometry models for
visualization is a big challenge for early stage conceptual design.
Complexity of existing computer aided design packages constrains
participation of people with various backgrounds in the design
process. In addition, many design processes do not take advantage of
the rich amount of legacy information available for new concepts
creation.
The research presented here explores the use of advanced graphical
techniques to quickly and efficiently merge legacy information with
new design concepts to rapidly create new conceptual product designs.
A 3D mesh metamorphosis framework, 3DMeshMorpher, was created to
construct new models by navigating in a shape space of registered
design models. The framework is composed of: i) a fast spherical
parameterization method to map a geometric model (genus-0) onto a unit
sphere; ii) a geometric feature identification and picking technique
based on 3D skeleton extraction; and iii) an LOD-controllable 3D
remeshing scheme with spherical mesh subdivision based on the
developed spherical parameterization. This efficient software framework
enables designers to create numerous geometric concepts in real time
with a simple graphical user interface.
The spherical parameterization method is focused on closed genus-zero
meshes. It is based upon barycentric coordinates with convex boundary.
Unlike most existing similar approaches which deal with each vertex in
the mesh equally, the method developed in this research focuses
primarily on resolving overlapping areas, which helps speed the
parameterization process. The algorithm starts by normalizing the
source mesh onto a unit sphere, followed by some initial relaxation
via Gauss-Seidel iterations. Owing to its emphasis on solving only the
challenging overlapping regions, this parameterization process is much
faster than existing spherical mapping methods.
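The normalization and Gauss-Seidel relaxation can be sketched as follows. This simplified version relaxes every vertex rather than only the overlapping regions the method targets, so it is illustrative only:

```python
# Hedged sketch of spherical relaxation: project vertices onto the unit
# sphere, then repeatedly move each vertex toward the centroid of its
# neighbours and re-project. Updates are in place (Gauss-Seidel style).
import numpy as np

def spherical_relax(verts, neighbors, iters=50):
    """verts: (n, 3) array; neighbors: list of index lists per vertex."""
    v = verts - verts.mean(axis=0)
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # initial projection
    for _ in range(iters):
        for i, nbrs in enumerate(neighbors):        # uses updated values
            c = v[nbrs].mean(axis=0)
            n = np.linalg.norm(c)
            if n > 1e-12:
                v[i] = c / n                        # re-project to sphere
    return v
```

Restricting the inner loop to vertices inside detected overlap regions is what would give the speed-up the thesis reports.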
To ensure the correspondence of features from different models, we
introduce a skeleton-based feature identification and picking method
for feature alignment. Unlike traditional methods that align a single
point per feature, this method provides alignment for complete feature
areas, helping users create more reasonable intermediate morphing
results with preserved topological features. This skeleton-featuring
framework could potentially be extended to automatic feature alignment
for geometries with similar topologies, and the extracted skeleton
could also be applied in other applications such as skeleton-based
animation.
The 3D remeshing algorithm with spherical mesh subdivision is
developed to generate a common connectivity for different mesh models.
Derived from the concept of spherical mesh subdivision, the local
recursive subdivision can be set to match the desired LOD (level of
detail) of the source spherical mesh; this LOD is controllable,
allowing outputs at different resolutions. The recursive subdivision
is followed by a triangular correction process that ensures valid
triangulations for the remeshing, and the final mesh merging and
reconstruction process produces the remeshed model at the LOD
specified by the user. The final merged model usually contains all the
geometric details from each model with a reasonable number of
vertices, unlike other existing methods that produce a large number of
vertices in the merged model. Such multi-resolution outputs with
controllable LOD could also be applied in various other computer
graphics applications such as computer games.
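The recursive spherical subdivision that drives the LOD control can be sketched as the classic midpoint scheme with re-projection onto the sphere; the correction and merging passes described above are omitted:

```python
# Illustrative sketch: each triangle splits into four via edge midpoints,
# new vertices are pushed back onto the unit sphere, and `depth` plays
# the role of the LOD.
import numpy as np

def subdivide_sphere(verts, faces, depth):
    verts = [np.asarray(v, float) for v in verts]
    for _ in range(depth):
        midpoint_cache, new_faces = {}, []
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint_cache:
                m = (verts[i] + verts[j]) / 2.0
                verts.append(m / np.linalg.norm(m))   # re-project onto sphere
                midpoint_cache[key] = len(verts) - 1
            return midpoint_cache[key]
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), faces
```

Caching midpoints per edge is what keeps the subdivided mesh watertight: both triangles sharing an edge reuse the same new vertex.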
IDeS Method Applied to an Innovative Motorbike—Applying Topology Optimization and Augmented Reality
This study concerns the conception of the DS700 HYBRID project through the application of the Industrial Design Structure (IDeS) method, which applies different tools sourced from engineering and style departments, including QFD and SDE, to create the concept of a hybrid motorbike that could reach the market in the near future. SDE is an engineering approach for the design and development of industrial design projects, and it finds important applications in the automotive sector. In addition, analysis tools such as QFD, comprising benchmarking and top-flop analysis, are applied to maximize the creative process. The key characteristics of the bike and its degree of innovation are identified and outlined, the market segment is identified, and the stylistic trends most suitable for a naked motorbike of the future are analyzed. In the second part, the styling of each superstructure and of all the components of the vehicle is carried out. Afterwards, the aesthetic and engineering perspectives are accounted for to complete the project, using modelling and computing tools such as 3D CAD, visual renderings, and FEM simulations, virtual prototyping via augmented reality (AR), and finally physical prototyping with additive manufacturing (AM). The result is a product conception able to compete in the present challenging market, with a design that is technically feasible and also reaches new lightness targets for efficiency.
A new surface joining technique for the design of shoe lasts
The footwear industry is a traditional craft sector, where technological advances are difficult to implement owing to the complexity of the processes involved and the level of precision most of them demand. The shoe last joining operation is one clear example, in which two halves from different lasts are put together, following a specifically traditional process, to create a new last. The existing surface joining techniques analysed in this paper are not well adapted to shoe last design and production processes, which makes their implementation in the industry difficult. This paper presents an alternative surface joining technique inspired by the traditional work of lastmakers, so that lastmakers can easily adapt to the new tool and make the most of their know-how. The technique is based on curve networks created on the surfaces to be joined, instead of on discrete data. Finally, a series of joining tests is presented, in which real lasts were successfully joined using commercial last design software. The method has been shown to be valid, efficient, and feasible within the sector.
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
3D shape editing is widely used in a range of applications such as movie
production, computer games and computer aided design. It is also a popular
research topic in computer graphics and computer vision. In past decades,
researchers have developed a series of editing methods to make the editing
process faster, more robust, and more reliable. Traditionally, the deformed
shape is determined by the optimal transformation and weights of an energy
term. With the increasing availability of 3D shapes on the Internet,
data-driven methods were proposed to improve the editing results. More
recently, as deep neural networks became popular, many deep learning based
editing methods, which are naturally data-driven, have been developed in
this field. We survey recent research from the geometric viewpoint to the
emerging neural deformation techniques, categorizing it into organic shape
editing methods and man-made model editing methods. Both traditional methods
and recent neural network based methods are reviewed.