62 research outputs found
Non-smooth developable geometry for interactively animating paper crumpling
We present the first method to animate sheets of paper at interactive rates, while automatically generating a plausible set of sharp features when the sheet is crumpled. The key idea is to interleave standard physically-based simulation steps with procedural generation of a piecewise continuous developable surface. The resulting hybrid surface model captures new singular points dynamically appearing during the crumpling process, mimicking the effect of paper fiber fracture. Although the model evolves over time to take these irreversible damages into account, the mesh used for simulation is kept coarse throughout the animation, leading to efficient computations. Meanwhile, the geometric layer ensures that the surface stays almost isometric to its original 2D pattern. We validate our model through measurements and visual comparison with real paper manipulation, and show results on a variety of crumpled paper configurations.
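The abstract's claim that the geometric layer keeps the surface "almost isometric" to its 2D pattern can be checked numerically. The sketch below is an illustration of one such check, comparing edge lengths of the deformed 3D mesh against the flat 2D pattern; the function name and the choice of worst relative deviation as the error measure are assumptions for illustration, not taken from the paper.

```python
import math

def isometry_error(edges, pattern_xy, mesh_xyz):
    """Worst relative edge-length deviation between the flat 2D pattern
    and the deformed 3D mesh; 0.0 means the sampled edges are exactly isometric."""
    worst = 0.0
    for i, j in edges:
        rest = math.dist(pattern_xy[i], pattern_xy[j])    # length in the 2D pattern
        deformed = math.dist(mesh_xyz[i], mesh_xyz[j])    # length after crumpling
        worst = max(worst, abs(deformed - rest) / rest)
    return worst
```

A rigid fold leaves every edge length unchanged, so the error is zero; any in-plane stretch raises it proportionally.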
Sliceforms: Deployable structures from interlocking slices
A sliceform is a volumetric, honeycomb-like structure assembled from an array of cross-sectional planar slices that are interlocked via pairs of complementary slots placed along each intersection. If the slices are thin, these slotted intersections function as revolute joints, and the sliceform is foldable if the geometry of the embedded spatial linkage permits it; for example, a lattice sliceform (LS) is bi-directionally flat-foldable. This thesis concerns a study of such sliceforms toward the design of novel deployable structures.
A sliceform torus, composed of two sets of inclined slices arranged at regular intervals about a central axis of symmetry, has been discovered to exhibit a surprising and intriguing folding action whereby its incomplete form can be collapsed to a flat-folded stack of coplanar slices. On deployment, the assembly expands smoothly about an arc until the slices have rotated to their design inclination, then, without reaching any apparent physical limit, abruptly ‘locks out’. With a full complement of slices, the outermost intersections can be interlocked to complete and rigidify the ring. The torus is an example of a rotational sliceform (RS), and analysis of these structures proceeds by noting that their structural geometry comprises an array of pyramidal cells that is commensurate with a spherical scissor grid. The conditions for flat-foldability are determined by examination of the intrinsic geometry of each cell; the incompatibility of the slices with apparent rigid-folding is revealed by assessment of the extrinsic motion of the slices. Investigation of their compliant kinematics reveals the articulation to be a bistable transition admitted by small transverse deflections of the slices.
This structural form is generalised by development of a technique for generating sliceforms along a smooth spatial curve – curve sliceforms (CS). Their synthesis is more involved than for an RS, but a range of sliceform ‘tubes’ are generated and manufactured. Each example retains the flat-foldable, deployable characteristic of an RS, despite the apparent intrinsic rigidity of each constituent skew cell. Examination of the small-scale models indicates that deployable motion is achieved via imperfect action of the slots, and a simple model of the articulation of a single cell is constructed to investigate how this proceeds, verifying that motion is kinematically admissible via local deformations.
Kinematics, Structural Mechanics, and Design of Origami Structures with Smooth Folds
Origami provides novel approaches to the fabrication, assembly, and functionality of engineering structures in various fields such as aerospace and robotics. With the increase in complexity of the geometry and materials for origami structures that provide engineering utility, computational models and design methods for such structures have become essential. Currently available models and design methods for origami structures are generally limited to the idealization of the folds as creases of zeroth-order geometric continuity. Such an idealization is not appropriate for origami structures having non-negligible thickness, or whose maximum curvature at the folds is restricted by material limitations; for such general structures, creased folds are not adequate representations of the structural response, and a new approach is needed. The first contribution of this dissertation is a model for the kinematics of origami structures having realistic folds of non-zero surface area and exhibiting higher-order geometric continuity, here termed smooth folds. The geometry of the smooth folds and the constraints on their associated kinematic variables are presented. A numerical implementation of the model allowing for kinematic simulation of structures having arbitrary fold patterns is also described. Examples illustrating the capability of the model to capture realistic structural folding response are provided. Subsequently, a method is presented for solving the origami design problem: determining the geometry of a single planar sheet and its pattern of smooth folds that morphs into a given three-dimensional goal shape, discretized as a polygonal mesh. The design parameterization of the planar sheet and the constraints that allow for a valid pattern of smooth folds and approximation of the goal shape in a known folded configuration are presented. Various testing examples considering goal shapes of diverse geometries are provided.
Afterwards, a model for the structural mechanics of origami continuum bodies with smooth folds is presented. Such a model entails the integration of the presented kinematic model and existing plate theories in order to obtain a structural representation for folds having non-zero thickness and comprised of arbitrary materials. The model is validated against finite element analysis. The last contribution addresses the design and analysis of active material-based self-folding structures that morph via simultaneous folding towards a given three-dimensional goal shape starting from a planar configuration. Implementation examples including shape memory alloy (SMA)-based self-folding structures are provided.
KINE[SIS]TEM'17 From Nature to Architectural Matter
Kine[SiS]tem – From Kinesis + System. Kinesis is a non-linear movement or activity of an organism in response to a stimulus. A system is a set of interacting and interdependent agents forming a complex whole, delineated by its spatial and temporal boundaries, influenced by its environment.
How can architectural systems moderate the external environment to enhance comfort conditions in a simple, sustainable and smart way?
This is the starting question for the Kine[SiS]tem’17 – From Nature to Architectural Matter International Conference. For decades, architectural design was developed despite (and not with) the climate, based on mechanical heating and cooling. Today, the argument for net zero energy buildings needs very effective strategies to reduce energy requirements. The challenge ahead requires design processes that are built upon consolidated knowledge, make use of advanced technologies and are inspired by nature. These design processes should lead to responsive smart systems that deliver the best performance in each specific design scenario.
Controlling solar radiation is a key factor in low-energy thermal comfort. Computationally controlled, sensor-based kinetic surfaces are one possible answer for controlling solar energy effectively, within the scope of contradictory objectives throughout the year.
From nanometers to centimeters: Imaging across spatial scales with smart computer-aided microscopy
Microscopes have been an invaluable tool throughout the history of the life sciences, as they allow researchers to observe the minuscule details of living systems in space and time. However, modern biology studies complex and non-obvious phenotypes and their distributions in populations and thus requires that microscopes evolve from visual aids for anecdotal observation into instruments for objective and quantitative measurements. To this end, many cutting-edge developments in microscopy are fuelled by innovations in the computational processing of the generated images. Computational tools can be applied in the early stages of an experiment, where they allow for reconstruction of images with higher resolution and contrast or more colors compared to raw data. In the final analysis stage, state-of-the-art image analysis pipelines seek to extract interpretable and humanly tractable information from the high-dimensional space of images.
In the work presented in this thesis, I performed super-resolution microscopy and wrote image analysis pipelines to derive quantitative information about multiple biological processes. I contributed to studies on the regulation of DNMT1 by implementing machine learning-based segmentation of replication sites in images and performed quantitative statistical analysis of the recruitment of multiple DNMT1 mutants. To study the spatiotemporal distribution of the DNA damage response, I performed STED microscopy and could provide a lower bound on the size of the elementary spatial units of DNA repair. In this project, I also wrote image analysis pipelines and performed statistical analysis to show a decoupling of DNA density and heterochromatin marks during repair. More on the experimental side, I helped in the establishment of a protocol for many-fold color multiplexing by iterative labelling of diverse structures via DNA hybridization. Turning from small-scale details to the distribution of phenotypes in a population, I wrote a reusable pipeline for fitting models of cell cycle stage distribution and inhibition curves to high-throughput measurements to quickly quantify the effects of innovative antiproliferative antibody-drug conjugates.
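As a sketch of what fitting an inhibition curve to high-throughput measurements can look like (the thesis pipeline itself is not specified here; the one-parameter Hill model and the grid search below are illustrative assumptions):

```python
def hill_inhibition(conc, ic50, hill=1.0):
    """Fractional response of a simple inhibition (Hill) model:
    1.0 with no drug, 0.5 at conc == ic50, approaching 0 at high doses."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

def fit_ic50(concs, responses, candidates):
    """Pick the candidate IC50 minimizing squared error against the measurements."""
    def sse(ic50):
        return sum((hill_inhibition(c, ic50) - r) ** 2
                   for c, r in zip(concs, responses))
    return min(candidates, key=sse)
```

In practice one would fit the IC50 (and the Hill slope) continuously, e.g. by nonlinear least squares; the grid search merely keeps the sketch dependency-free.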
The main focus of the thesis is BigStitcher, a tool for the management and alignment of terabyte-sized image datasets. Such enormous datasets are nowadays generated routinely with light-sheet microscopy and sample preparation techniques such as clearing or expansion. Their sheer size, high dimensionality and unique optical properties pose a serious bottleneck for researchers and require specialized processing tools, as the images often do not fit into the main memory of most computers. BigStitcher primarily allows for fast registration of such many-dimensional datasets on conventional hardware using optimized multi-resolution alignment algorithms. The software can also correct a variety of aberrations such as fixed-pattern noise, chromatic shifts and even complex sample-induced distortions. A defining feature of BigStitcher, as well as of the various image analysis scripts developed in this work, is their interactivity. A central goal was to leverage the user's expertise at key moments and bring innovations from the big data world to the lab with its smaller and much more diverse datasets without replacing scientists with automated black-box pipelines. To this end, BigStitcher was implemented as a user-friendly plug-in for the open source image processing platform Fiji and provides the users with a nearly instantaneous preview of the aligned images and opportunities for manual control of all processing steps. With its powerful features and ease-of-use, BigStitcher paves the way to the routine application of light-sheet microscopy and other methods producing equally large datasets.
Example Based Caricature Synthesis
The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique, consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions which facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions, which characterize the facial features in proportion form. Finally, we introduce a similarity metric as the likeness metric based on the Modified Hausdorff Distance (MHD), which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
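The Modified Hausdorff Distance named above has a standard form (Dubuisson and Jain): each directed distance averages, rather than maximizes, the point-to-set distances, and the symmetric distance takes the larger of the two directions. A minimal sketch for point sets in the plane follows; the paper's exact variant and feature representation are not given here, so this is illustrative only.

```python
import math

def directed_avg(A, B):
    """Mean distance from each point of A to its nearest point of B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def modified_hausdorff(A, B):
    """Modified Hausdorff Distance: the larger of the two directed averages."""
    return max(directed_avg(A, B), directed_avg(B, A))
```

Averaging makes the metric far less sensitive to a single outlier point than the classical Hausdorff distance, which is why it is popular for shape matching.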
Computation and material practice in architecture: intersecting intention and execution during design development
It is generally believed that computation and computer numerical control (CNC) manufacturing technologies empower architects by enabling better integrated architectural design to production processes. While this is a tantalizing prospect, there is no clear strategy in place for achieving this goal. Furthermore, the extent to which design, engineering and construction might be integrated around digital technologies is currently limited as the computational processes architects use for design exploration are not typically informed by material logic and the logistics of materialisation. My research explores whether computation and CNC manufacturing can support more informed design methods and better integrated production processes in architecture. I identify the critical factors involved in pursuing this goal and elaborate on an integral computational methodology capable of enhancing the bond between designing and making in architecture. My hypothesis is that digitally mediated design and manufacturing can strengthen the relationship between intention and execution by enabling closer engagement with fabrication during early design exploration, and by supporting more informed decision making via dynamic design representations with embedded material intelligence. This hypothesis has been developed and tested through project-led research. Although different in nature, the three investigations I have undertaken serve as complementary vehicles of discovery and evidence for my claims. Each investigation was devised and carried out in response to practical observations, a critical review of literature focusing on historical and contemporary relationships between design and construction, and a series of precedent studies related to materially informed design computing.
As a group they contribute to understanding how digital technologies might be employed by architects to enhance and expand design to production processes, and shed light on some of the technical, cultural and philosophical implications of a deeper engagement with materials and processes of making within the discipline of architecture. My research concludes that new kinds of interactive simulation and evaluation tools, and access to digital fabrication technologies, enable an accelerated generation, evaluation and calibration process during early design exploration. This mutually informed digital-material feedback loop makes it possible to rapidly develop acute material intuition, and consequently to conceive new kinds of architectural systems and materialisation strategies which could lead to better use of available resources, more innovative design and a stronger bond between intent and outcome through more streamlined design to production processes. The digitally supported, materially informed methodology that I outline encourages a shift in design process and attitude, away from a visually driven mode of architectural composition towards material practice - an approach in which the self-organising logic of materials and the logistics of materialisation are used to actively inform design exploration, refinement and construction processes. My project-based outcomes, findings and observations prompt re-evaluation of the conventional distance between architects and processes of making by highlighting the importance of deep material engagement and broad practical knowledge when utilising computation and CNC manufacturing technologies for designing and producing architecture.
2010 Creating/Making Forum
The 2010 Creating/Making Forum was held in conjunction with the Fred Jones Jr. Museum of Art’s “Bruce Goff: A Creative Mind” exhibition and featured peer-reviewed paper sessions titled: Design Education and Tacit Knowledge; Digital Creating and Making; Community Engagement; The Found Object; Innovation, Interdisciplinarity and the Environment; Interpreting Architecture; and History Reframed, as well as a juried poster session. Keynote speakers at the 2010 Forum were Sheila Kennedy, Craig Borum, and Marlon Blackwell. A special thanks to Angela M. Person for editing these proceedings.