Slice and Dice: A Physicalization Workflow for Anatomical Edutainment
Over the last decades, anatomy has become an interesting topic in
education---even for laypeople and schoolchildren. As medical imaging techniques
become increasingly sophisticated, virtual anatomical education applications
have emerged. Still, anatomical models are often preferred, as they facilitate
3D localization of anatomical structures. Recently, data physicalizations
(i.e., physical visualizations) have proven to be effective and
engaging---sometimes, even more than their virtual counterparts. So far,
medical data physicalizations involve mainly 3D printing, which is still
expensive and cumbersome. We investigate alternative forms of physicalizations,
which use readily available technologies (home printers) and inexpensive
materials (paper or semi-transparent films) to generate crafts for anatomical
edutainment. To the best of our knowledge, this is the first computer-generated
crafting approach within an anatomical edutainment context. Our approach
follows a cost-effective, simple, and easy-to-employ workflow, resulting in
assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily
supports volumetric data (such as CT or MRI), but mesh data can also be
imported. An octree slices the imported volume and an optimization step
simplifies the slice configuration, proposing the optimal order for easy
assembly. A packing algorithm places the resulting slices with their labels,
annotations, and assembly instructions on a paper or transparent film of
user-selected size, to be printed, assembled into a sliceform, and explored. We
conducted two user studies to assess our approach, demonstrating that it is an
initial positive step towards the successful creation of interactive and
engaging anatomical physicalizations.
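The slicing-and-packing pipeline summarized above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration, not the authors' implementation (which uses an octree, a slice-ordering optimization, and annotated layouts); `slice_volume`, `pack_slices`, and the shelf-packing heuristic are assumed names:

```python
# Hypothetical sketch of the slice-and-pack workflow: cut a voxel grid
# into evenly spaced cross-sections, then place their bounding boxes on
# a fixed-size sheet with a simple shelf-packing heuristic.

def slice_volume(volume, step):
    """Return evenly spaced z-slices from a nested-list voxel grid."""
    return [volume[z] for z in range(0, len(volume), step)]

def pack_slices(sizes, sheet_w, sheet_h):
    """Shelf-pack (w, h) rectangles; returns (index, x, y) placements."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for i, (w, h) in enumerate(sizes):
        if x + w > sheet_w:               # row full: start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        if y + h > sheet_h:
            raise ValueError("sheet too small for slice %d" % i)
        placements.append((i, x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

# Example: an 8x4x4 volume sliced every 2 layers, packed on a 10x10 sheet.
vol = [[[1] * 4 for _ in range(4)] for _ in range(8)]
slices = slice_volume(vol, 2)
layout = pack_slices([(4, 4)] * len(slices), 10, 10)
```

In the actual workflow the packing step would also attach labels, slot positions, and assembly instructions to each placed slice before printing.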
SmartCanvas: Context-inferred Interpretation of Sketches for Preparatory Design Studies
In early or preparatory design stages, an architect or designer sketches out rough ideas, not only about the object or structure being considered, but also about its relation to its spatial context. This is an iterative process, where the sketches are not only the primary means for testing and refining ideas, but also for communicating among a design team and to clients. Hence, sketching is the preferred medium for artists and designers during the early stages of design, albeit with a major drawback: sketches are 2D, and effects such as view perturbations or object movement are not supported, thereby inhibiting the design process. We present an interactive system that allows for the creation of a 3D abstraction of a designed space, built primarily by sketching in 2D within the context of an anchoring design or photograph. The system is progressive in the sense that the interpretations are refined as the user continues sketching. As a key technical enabler, we reformulate the sketch interpretation process as a selection optimization from a set of context-generated canvas planes in order to retrieve a regular arrangement of planes. We demonstrate our system (available at http://geometry.cs.ucl.ac.uk/projects/2016/smartcanvas/) with a wide range of sketches and design studies.
Automatic tailoring and cloth modelling for animation characters.
The construction of realistic characters has become increasingly important to the production of blockbuster films, TV series, and computer games. A character's outfit plays an important role in the application of virtual characters; it is one of the key elements reflecting a character's personality. Virtual clothing refers to the process of constructing outfits for virtual characters, and it is currently used mainly in two areas: the fashion industry and computer animation. In the fashion industry, virtual clothing technology is an effective tool for creating, editing, and pre-visualising cloth design patterns efficiently. However, using this method requires substantial tailoring expertise. In computer animation, geometric modelling methods are widely used for cloth modelling because of their simplicity and intuitiveness. However, owing to the shortage of tailoring knowledge among animation artists, existing cloth design patterns cannot be used directly, and the appearance of the cloth depends heavily on the skill of the artist. Moreover, geometric modelling methods require many manual operations, a tedium that is worsened when modelling the same style of cloth for different characters with different body shapes and proportions. This thesis addresses this problem and presents a new virtual clothing method that includes automatic character measuring, automatic cloth pattern adjustment, and cloth pattern assembly. There are two main contributions in this research. Firstly, a geodesic-curvature-flow-based geodesic computation scheme is presented for acquiring length measurements from a character. Given the fast-growing demand for high-resolution character models in animation production, the increasing number of characters that must be handled simultaneously, and the need to improve the reusability of 3D models in film production, the efficiency of modelling cloth for multiple high-resolution characters is very important.
To improve the efficiency of measuring characters for cloth fitting, a fast geodesic algorithm with linear time complexity and a small bounded error is also presented. Secondly, a cloth-pattern-adjusting genetic algorithm is developed for automatic cloth fitting and retargeting. Because body shapes and proportions vary widely in character design, fitting and transferring cloth to a different character is a challenging task. This thesis treats the cloth fitting process as an optimization procedure: it automatically optimizes both the shape and size of each cloth pattern, evaluating the integrity, design, and size of each pattern in order to create 3D cloth for any character with different body shapes and proportions while preserving the original cloth design. By automating the cloth modelling process, the method empowers the creativity of animation artists and improves their productivity, allowing them to use the large body of existing cloth design patterns from the fashion industry to create various clothes and to transfer the same cloth design to characters with different body shapes and proportions with ease.
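As a rough illustration of the kind of pattern-adjustment optimization described above (a simplified, hypothetical sketch, not the thesis's algorithm), a toy genetic algorithm can evolve a single scale factor so that a pattern's girth matches a target body measurement; `fit_pattern`, the fitness function, and all parameters are illustrative assumptions:

```python
# Toy genetic algorithm in the spirit of the cloth-pattern-adjustment step:
# evolve a scale factor for a cloth pattern so its scaled girth matches a
# target body measurement. Real systems optimize many pattern parameters
# and also score design integrity, not just one length.
import random

def fitness(scale, pattern_girth, target_girth):
    # Negative absolute error: higher is better.
    return -abs(scale * pattern_girth - target_girth)

def fit_pattern(pattern_girth, target_girth, pop=20, gens=50, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(0.5, 2.0) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: fitness(s, pattern_girth, target_girth),
                        reverse=True)
        parents = population[:pop // 2]                        # selection
        children = [(rng.choice(parents) + rng.choice(parents)) / 2  # crossover
                    + rng.gauss(0, 0.02)                              # mutation
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=lambda s: fitness(s, pattern_girth, target_girth))

# Example: scale a 90 cm pattern girth toward a 104 cm body measurement.
best = fit_pattern(pattern_girth=90.0, target_girth=104.0)
```

The same selection/crossover/mutation loop extends naturally to a vector of pattern parameters, with the fitness function combining fit error and design-preservation terms.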
Sliceforms: Deployable structures from interlocking slices
A sliceform is a volumetric, honeycomb-like structure assembled from an array of cross-sectional planar slices that are interlocked via pairs of complementary slots placed along each intersection. If the slices are thin, these slotted intersections function as revolute joints, and the sliceform is foldable if the geometry of the embedded spatial linkage permits it, for example a lattice sliceform (LS) is bi-directionally flat-foldable. This thesis concerns a study of such sliceforms toward the design of novel deployable structures.
A sliceform torus, composed of two sets of inclined slices arranged at regular intervals about a central axis of symmetry, has been discovered to exhibit a surprising and intriguing folding action whereby its incomplete form can be collapsed to a flat-folded stack of coplanar slices. On deployment, the assembly expands smoothly about an arc until the slices have rotated to their design inclination, then, without reaching any apparent physical limit, abruptly ‘locks out’. With a full complement of slices, the outermost intersections can be interlocked to complete and rigidify the ring. The torus is an example of a rotational sliceform (RS), and analysis of these structures proceeds by noting that their structural geometry comprises an array of pyramidal cells that is commensurate to a spherical scissor grid. The conditions for flat-foldability are determined by examination of the intrinsic geometry of each cell; the incompatibility of the slices with apparent rigid-folding is revealed by assessment of the extrinsic motion of the slices. Investigation of their compliant kinematics reveals the articulation to be a bistable transition admitted by small transverse deflections of the slices.
This structural form is generalised by development of a technique for generating sliceforms along a smooth spatial curve – curve sliceforms (CS). Their synthesis is more involved than for an RS, but a range of sliceform ‘tubes’ are generated and manufactured. Each example retains the flat-foldable, deployable characteristic of an RS, despite the apparent intrinsic rigidity of each constituent skew cell. Examination of the small-scale models indicates that deployable motion is achieved via imperfect action of the slots, and a simple model of the articulation of a single cell is constructed to investigate how this proceeds, verifying that motion is kinematically admissible via local deformations.
Doctor of Philosophy in Computer Science
Ray tracing is becoming more widely adopted in offline rendering systems due to its natural support for high quality lighting. Since quality is also a concern in most real time systems, we believe ray tracing would be a welcome change in the real time world, but it is avoided due to insufficient performance. Since power consumption is one of the primary factors limiting the increase of processor performance, it must be addressed as a foremost concern in any future ray tracing system designs. This will require cooperating advances in both algorithms and architecture. In this dissertation I study ray tracing system designs from a data movement perspective, targeting the various memory resources that are the primary consumers of power on a modern processor. The result is high performance, low energy ray tracing architectures.
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars
Synthesizing high-fidelity head avatars is a central problem for computer
vision and graphics. While head avatar synthesis algorithms have advanced
rapidly, the best ones still face great obstacles in real-world scenarios. One
of the vital causes is inadequate datasets -- 1) current public datasets can
only support researchers to explore high-fidelity head avatars in one or two
task directions; 2) these datasets usually contain digital head assets with
limited data volume, and narrow distribution over different attributes. In this
paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive
advances in head avatar research. It contains massive data assets, with 243+
million complete head frames, and over 800k video sequences from 500 different
identities captured by synchronized multi-view cameras at 30 FPS. It is a
large-scale digital library for head avatars with three key attributes: 1) High
Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K
cameras in 360 degrees. 2) High Diversity: the collected subjects vary across
ages, eras, ethnicities, and cultures, providing abundant materials
with distinctive styles in appearance and geometry. Moreover, each subject is
asked to perform various motions, such as expressions and head rotations, which
further extend the richness of assets. 3) Rich Annotations: we provide
annotations at different granularities: camera parameters, matting, scans,
2D/3D facial landmarks, FLAME fitting, and text description.
Based on the dataset, we build a comprehensive benchmark for head avatar
research, with 16 state-of-the-art methods performed on five main tasks: novel
view synthesis, novel expression synthesis, hair rendering, hair editing, and
talking head generation. Our experiments uncover the strengths and weaknesses
of current methods. RenderMe-360 opens the door for future exploration in head
avatars.
Comment: Technical Report; GitHub Link: https://github.com/RenderMe-360/RenderMe-36
2D to 3D non photo realistic character transformation and morphing (computer animation)
This research concerns the transformation and morphing between a full-body 2D and 3D animated character. This practice-based research will examine both technical and aesthetic techniques for enhancing the morphing of animated characters, focusing on stylized character transformations from A to B and from B to A, where details such as facial expression, body motion, and texture are to be transformed expressively and aesthetically within a narrated story.
Currently it is hard to separate 2D and 3D animation in mixed-media usage. If we analyse and break down these graphical components, we can find distinctions in how 2D and 3D elements increase the information level and complexity of storytelling. However, from a character animation perspective, the instant transformation of a digital character from 2D to 3D is not possible without post-production techniques, pre-defined 3D information such as blend shapes or complex geometry data, and mathematical calculation.
There are two main elements to this investigation. The primary element is the design system for such stylized characters in 2D and 3D. Currently, many design systems (morphing software) are based on photorealistic artefacts, such as Fanta Morph, Morph Buster, Morpheus, and Fun Morph. This investigation will focus on non-photorealistic character morphing. In seeking to define the targeted non-photorealistic, illustrated, stylized 2D and 3D character, I am examining the advantages and disadvantages of a number of 2D illustrated characters with respect to 3D morphing. This investigation could also help to analyse the efficiency and limitations of such 2D and 3D non-photorealistic character design and transformation, where broader techniques will be explored.
The secondary element is a theoretical investigation relating how such artistic and technical morphing ideas have been used in past and present films and games. A narrated story contains a character who acts upon a starting question or situation and reacts to events; the gap between the character's aim and the result of their actions, and between their vision and their personality, creates the dramatic tension. I intend to identify a transitional process of voice between narrator and morphing character, while also illustrating, through visual terminology, the varying fluctuations between two speaking agents. I intend to demonstrate, with sample work, that “morphing” is not just visually important but has a direct impact on storytelling.