
    Seventh Biennial Report: June 2003 – March 2005

    No full text

    Eighth Biennial Report: April 2005 – March 2007

    No full text

    Sparse Volumetric Deformation

    Get PDF
    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, so massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently supports only static scenes, because it is difficult to deform massive numbers of volume elements accurately and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, conversely, gaps open where deformation stretches the elements apart by more than one discrete location. It is also challenging to support sophisticated deformations, such as character skinning or physically based animation, efficiently across hierarchical resolutions. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also capture features hierarchically, according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
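
    The hierarchical, region-of-interest traversal the abstract describes can be illustrated with a minimal sketch. Below is a generic sparse octree-style structure; the node layout, the simple density payload, and all names are illustrative assumptions, not details taken from the thesis.

```python
# A minimal sketch of a sparse hierarchical volume: each node stores a
# progressively coarser sample, and traversal descends only inside a
# region of interest. Names and payload are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VolumeNode:
    origin: tuple                                  # (x, y, z) min corner of the node's cube
    size: float                                    # edge length of the cube
    density: float                                 # coarse sample stored at this resolution
    children: Optional[List["VolumeNode"]] = None  # eight children once refined

def sample_roi(node: VolumeNode, roi_min: tuple, roi_max: tuple, out: list) -> None:
    """Collect the finest available samples inside the region of interest."""
    # Skip any branch whose cube misses the ROI entirely: this is what
    # keeps traversal sparse, so only a partial region of the full
    # dataset is ever touched.
    for axis in range(3):
        if node.origin[axis] + node.size < roi_min[axis] or node.origin[axis] > roi_max[axis]:
            return
    if node.children is None:
        # Leaf: the finest resolution stored for this region.
        out.append((node.origin, node.size, node.density))
        return
    for child in node.children:
        sample_roi(child, roi_min, roi_max, out)
```

    Deforming such a structure is hard precisely because a deformation moves leaf cubes: overlapping cubes produce the distortions the abstract mentions, stretched-apart cubes produce the gaps, and the parent hierarchy must be rebuilt to stay consistent.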

    Wholetoning: Synthesizing Abstract Black-and-White Illustrations

    Get PDF
    Black-and-white imagery is a popular and interesting depiction technique in the visual arts, in which varying tints and shades of a single colour are used. Within the realm of black-and-white images there is a set of black-and-white illustrations that depict only salient features, ignoring details and reducing colour to pure black and white with no intermediate tones. These illustrations hold tremendous potential to enrich decoration, human communication and entertainment. Producing abstract black-and-white illustrations by hand relies on a time-consuming and difficult process that requires both artistic talent and technical expertise. Previous work has not explored this style of illustration in much depth, and simple approaches such as thresholding are insufficient for stylization and artistic control. I use the word wholetoning to refer to illustrations that feature a high degree of shape and tone abstraction. In this thesis, I explore computer algorithms for generating wholetoned illustrations. First, I offer a general-purpose framework, “artistic thresholding”, to control the generation of wholetoned illustrations in an intuitive way. The basic artistic thresholding algorithm is an optimization framework, based on simulated annealing, that produces the final bi-level result. I design an extensible objective function from observations of a large collection of wholetoned images; it is a weighted sum over terms that encode features common to wholetoned illustrations. Based on this framework, I then explore two specific wholetoned styles: papercutting and representational calligraphy. I define a paper-cut design as a wholetoned image with connectivity constraints that ensure it can be cut out of a single piece of paper. My computer-generated papercutting technique can convert an original wholetoned image into a paper-cut design. It can also synthesize stylized and geometric patterns often found in traditional designs. Representational calligraphy is defined as a wholetoned image with the constraint that all depiction elements must be letters. The procedure for generating representational calligraphy designs is formalized as a “calligraphic packing” problem. I provide a semi-automatic technique that can warp a sequence of letters to fit a shape while preserving their readability.
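
    The optimization loop behind “artistic thresholding” — simulated annealing over a binary image with a weighted objective — can be sketched generically. The two terms below (fidelity to the grey source and smoothness of the black/white regions) are stand-in examples, not the thesis's actual term set, and all names are illustrative.

```python
# A hedged sketch of simulated annealing for bi-level image optimization.
# The objective is a weighted sum of terms, as the abstract describes;
# the specific terms and weights here are illustrative assumptions.
import math
import random

def energy(binary, grey, w_fid=1.0, w_smooth=0.5):
    """Weighted sum of a fidelity term and a smoothness term."""
    h, w = len(binary), len(binary[0])
    fid = sum(abs(binary[y][x] - grey[y][x]) for y in range(h) for x in range(w))
    smooth = sum(abs(binary[y][x] - binary[y][x + 1]) for y in range(h) for x in range(w - 1)) \
           + sum(abs(binary[y][x] - binary[y + 1][x]) for y in range(h - 1) for x in range(w))
    return w_fid * fid + w_smooth * smooth

def anneal(grey, steps=20000, t0=2.0, cooling=0.9995):
    """grey: rows of floats in [0, 1]. Returns a 0/1 image."""
    h, w = len(grey), len(grey[0])
    binary = [[round(v) for v in row] for row in grey]  # start from plain thresholding
    e, t = energy(binary, grey), t0
    for _ in range(steps):
        y, x = random.randrange(h), random.randrange(w)
        binary[y][x] = 1 - binary[y][x]                 # propose flipping one pixel
        e_new = energy(binary, grey)
        # Metropolis rule: always accept improvements, accept worse
        # states with probability exp(-dE / t) that shrinks as t cools.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            binary[y][x] = 1 - binary[y][x]             # reject: undo the flip
        t *= cooling
    return binary
```

    A practical implementation would evaluate the energy change of a single pixel flip incrementally rather than rescoring the whole image each step; the full rescoring above is kept only to make the objective explicit.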

    Faculty Publications & Presentations, 2005–2006

    Get PDF

    The VHP-F Computational Phantom and its Applications for Electromagnetic Simulations

    Get PDF
    Modeling of the electromagnetic, structural, thermal, or acoustic response of the human body to various external and internal stimuli is limited by the availability of anatomically accurate and numerically efficient computational models. The models currently approved for use are generally of proprietary or fixed format, preventing new model construction or customization. 1. This dissertation develops a new Visible Human Project – Female (VHP-F) computational phantom, constructed via segmentation of anatomical cryosection images taken in the axial plane of the human body. Its distinguishing property is superior resolution of the human head. In its current form, the VHP-F model contains 33 separate objects describing a variety of human tissues within the head and torso. Each object is a non-intersecting 2-manifold model composed of contiguous triangular surface elements, making the VHP-F model compatible with major commercial and academic numerical simulators employing the Finite Element Method (FEM), Boundary Element Method (BEM), Finite Volume Method (FVM), and Finite-Difference Time-Domain (FDTD) Method. 2. This dissertation develops the new workflow used to construct the VHP-F model, which may be applied to build accessible custom models from any medical image data source. The workflow is customizable and flexible, enabling the creation of standard and parametrically varying models and facilitating research on the impact of fluctuating body characteristics (for example, skin thickness) and of dynamic processes such as fluid pulsation. 3. This dissertation identifies, enables, and quantifies three new computational bioelectromagnetic problems, each of which is solved with the help of the developed VHP-F model: I. Transcranial Direct Current Stimulation (tDCS) of the human brain motor cortex with extracephalic versus cephalic electrodes; II. RF channel characterization within the cerebral cortex with novel small on-body directional antennas; III. Body Area Network (BAN) characterization and RF localization within the human body using the FDTD method and small antenna models with coincident phase centers. Each of these problems has been (or will be) the subject of a separate dedicated MS thesis.
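
    The requirement that each tissue object be a non-intersecting 2-manifold implies a concrete, checkable property: in a closed triangle mesh, every edge must be shared by exactly two triangles. The sketch below checks that property; the function name and data layout are illustrative assumptions, not part of the dissertation's workflow.

```python
# A small sketch of the closed-2-manifold (watertightness) check that
# FEM/BEM/FVM/FDTD solvers rely on: every edge of the triangle mesh
# must appear in exactly two triangles. Names are illustrative only.
from collections import Counter

def is_closed_manifold(triangles):
    """triangles: list of (i, j, k) vertex-index triples."""
    edges = Counter()
    for i, j, k in triangles:
        # Count each undirected edge of the triangle once.
        for a, b in ((i, j), (j, k), (k, i)):
            edges[tuple(sorted((a, b)))] += 1
    # Closed 2-manifold: no boundary edges (count 1) and no
    # non-manifold edges (count > 2).
    return all(count == 2 for count in edges.values())

# Example: a tetrahedron is the smallest closed triangle mesh.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
assert is_closed_manifold(tetra)
assert not is_closed_manifold(tetra[:3])  # removing a face opens a boundary
```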

    CIRA annual report FY 2017/2018

    Get PDF
    Reporting period: April 1, 2017 – March 31, 2018

    Urban Informatics

    Get PDF
    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient, with a greater concern for the environment and equity.
