55 research outputs found

    Self Assembly Problems of Anisotropic Particles in Soft Matter.

    Anisotropic building blocks assembled from colloidal particles are attractive candidates for self-assembled materials because their complex interactions can be exploited to drive self-assembly. In this dissertation we address the self-assembly of anisotropic particles from multiple novel computational and mathematical angles. First, we accelerate algorithms for modeling systems of anisotropic particles via massively parallel GPUs. We provide a scheme for generating statistically robust pseudo-random numbers that enables GPU acceleration of Brownian and dissipative particle dynamics. We also show how rigid body integration can be accelerated on a GPU. Integrating these two algorithms into a GPU-accelerated molecular dynamics code (HOOMD-blue) makes a single GPU an ideal computing environment for modeling the self-assembly of anisotropic nanoparticles. Second, we introduce a new mathematical optimization problem, filling, a hybrid of the familiar shape packing and covering problems, which can be used to model shaped particles. We study the rich mathematical structures of the solution space and provide computational methods for finding optimal solutions for polygons and convex polyhedra. We present a sequence of isosymmetric optimal filling solutions for the Platonic solids. We then consider the filling of a hyper-cone in dimensions two to eight and show that the solution remains scale-invariant but depends on dimension. Third, we study the impact of size variation (polydispersity) on the self-assembly of an anisotropic particle, the polymer-tethered nanosphere, into ordered phases. We show that the local nanoparticle packing motif, icosahedral or crystalline, determines the impact of polydispersity on the energy of the system and its phase transitions. We show how extensions of the Voronoi tessellation can be calculated and applied to characterize such micro-segregated phases. By applying a Voronoi tessellation, we show that properties of the individual domains can be studied as a function of system properties such as temperature and concentration. Last, we consider the thermodynamically driven self-assembly of terminal clusters of particles. We predict that clusters related to spherical codes, a mathematical sequence of points, can be synthesized via self-assembly. The anisotropy of these clusters can be tuned via the ratio of sphere diameters and the temperature. The method suggests a rich new way of assembling anisotropic building blocks. Ph.D. Applied Physics and Scientific Computing, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91576/1/phillicl_1.pd
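
    To make the GPU-acceleration idea concrete, the following is a minimal Brownian dynamics sketch in Python/NumPy. It is not the HOOMD-blue implementation; it only illustrates the use of a counter-based pseudo-random number generator (here NumPy's Philox) so that the noise for a given step is reproducible regardless of the order in which particles or threads draw their numbers, the property that makes such schemes convenient for massively parallel integrators. The parameters and the `brownian_step` helper are illustrative assumptions.

```python
import numpy as np

def brownian_step(x, step, seed, gamma=1.0, kT=1.0, dt=1e-3, forces=None):
    """Advance overdamped (Brownian) dynamics by one Euler-Maruyama step."""
    n, dim = x.shape
    # Counter-based PRNG: the noise for this step depends only on (seed, step),
    # not on the order in which particles or threads are processed.
    rng = np.random.Generator(np.random.Philox(key=seed, counter=step))
    noise = rng.standard_normal((n, dim))
    f = np.zeros_like(x) if forces is None else forces(x)
    return x + f / gamma * dt + np.sqrt(2.0 * kT * dt / gamma) * noise

# Usage: 1000 non-interacting particles diffusing in 3D for 100 steps.
x = np.zeros((1000, 3))
for step in range(100):
    x = brownian_step(x, step, seed=42)
print(x.std(axis=0))  # each component ~ sqrt(2*kT*t/gamma) ~ 0.45
```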

    The investigation of a method to generate conformal lattice structures for additive manufacturing

    Additive manufacturing (AM) allows a geometric complexity in products not seen in conventional manufacturing. This geometric freedom facilitates the design and fabrication of conformal hierarchical structures. Entire parts or regions of a part can be populated with lattice structure, designed to exhibit properties that differ from the solid material used in fabrication. Current computer-aided design (CAD) software used to design products is not suitable for the generation of lattice structure models. Although conceptually simple, the memory requirements to store a virtual CAD model of a lattice structure are prohibitively high. Conventional CAD software defines geometry through boundary representation (B-rep): shapes are described by the connectivity of faces, edges and vertices. While useful for representing accurate models of complex shape, the sheer quantity of individual surfaces required to represent each of the relatively simple struts that make up a lattice structure ensures that memory limitations are soon reached. Additionally, the conventional data flow from CAD to manufactured part is arduous, involving several conversions between file formats. As well as lengthening the process, each conversion risks the generation of geometric errors that must be fixed before manufacture. A method was developed specifically to generate large arrays of lattice structures, based on a general voxel modelling method identified in the literature review. The method is much less sensitive to geometric complexity than conventional methods and thus facilitates the design of considerably more complex structures. The ability to grade structure designs across regions of a part (termed functional grading) was also investigated, as well as a method to retain connectivity between boundary struts of a conformal structure. In addition, the method streamlines the data flow from design to manufacture: earlier steps of the data conversion process are bypassed entirely. The effect of the modelling method on the surface roughness of parts produced was investigated, as voxel models define boundaries with discrete, stepped blocks. It was concluded that the effect of this stepping on surface roughness was minimal. This thesis concludes with suggestions for further work to improve the efficiency, capability and usability of the conformal structure method developed in this work.
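
    As a concrete illustration of the voxel-modelling idea (a simple sketch only, not the conformal, functionally graded method developed in the thesis), the snippet below builds a boolean voxel array for a periodic simple-cubic strut lattice. A voxel model of this kind needs only one flag per voxel, in contrast to the many B-rep surfaces required per strut; the lattice type, resolution, and strut radius used here are arbitrary assumptions.

```python
import numpy as np

def voxelise_simple_cubic(n_cells=4, res_per_cell=16, strut_radius=0.12):
    """Boolean voxel model of a simple cubic strut lattice (struts along cell edges)."""
    n = n_cells * res_per_cell
    # Voxel-centre coordinates folded into a single periodic unit cell [0, 1)^3.
    c = ((np.arange(n) + 0.5) / res_per_cell) % 1.0
    x, y, z = np.meshgrid(c, c, c, indexing="ij")
    # Periodic distance of each coordinate to the nearest cell boundary (where struts lie).
    dx, dy, dz = (np.minimum(v, 1.0 - v) for v in (x, y, z))
    # A voxel is solid if it lies within the strut radius of an edge parallel to x, y or z.
    solid = (
        (np.hypot(dy, dz) <= strut_radius)
        | (np.hypot(dx, dz) <= strut_radius)
        | (np.hypot(dx, dy) <= strut_radius)
    )
    return solid

voxels = voxelise_simple_cubic()
print(voxels.shape, voxels.mean())  # (64, 64, 64) and the solid volume fraction
```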

    Painterly interfaces for audiovisual performance

    Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000. Includes bibliographical references (p. 145-149). This thesis presents a new computer interface metaphor for the real-time and simultaneous performance of dynamic imagery and sound. This metaphor is based on the idea of an inexhaustible, infinitely variable, time-based, audiovisual "substance" which can be gesturally created, deposited, manipulated and deleted in a free-form, non-diagrammatic image space. The interface metaphor is exemplified by five interactive audiovisual synthesis systems whose visual and aural dimensions are deeply plastic, commensurately malleable, and tightly connected by perceptually-motivated mappings. The principles, patterns and challenges which structured the design of these five software systems are extracted and discussed, after which the expressive capacities of the five systems are compared and evaluated. Golan Levin. S.M.

    AutoGraff: towards a computational understanding of graffiti writing and related art forms.

    The aim of this thesis is to develop a system that generates letters and pictures with a style that is immediately recognizable as graffiti art or calligraphy. The proposed system can be used similarly to, and in tight integration with, conventional computer-aided geometric design tools; it can generate synthetic graffiti content for urban environments in games and movies, and can guide robotic or fabrication systems that materialise the output of the system with physical drawing media. The thesis is divided into two main parts. The first part describes a set of stroke primitives: building blocks that can be combined to generate different designs that resemble graffiti or calligraphy. These primitives mimic the process typically used to design graffiti letters and exploit well-known principles of motor control to model the way in which an artist moves when incrementally tracing stylised letter forms. The second part demonstrates how these stroke primitives can be automatically recovered from input geometry defined in vector form, such as the digitised traces of writing made by a user or the glyph outlines in a font. This procedure converts the input geometry into a seed that can be transformed into a variety of calligraphic and graffiti stylisations, which depend on parametric variations of the strokes.
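
    As one hedged illustration of a motor-control-inspired stroke primitive (a classic minimum-jerk profile, not necessarily the primitive formulation used in AutoGraff), the sketch below traces a smooth pen trajectory through a list of via-points; the via-points and sampling density are made-up examples.

```python
import numpy as np

def minimum_jerk(p0, p1, n=50):
    """Minimum-jerk interpolation from p0 to p1 (zero velocity/acceleration at the ends)."""
    tau = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 progress profile
    return p0 + (p1 - p0) * s

def stroke_through(points, n=50):
    """Chain minimum-jerk segments through a list of via-points.

    Chaining makes the pen stop at each via-point; richer handwriting models
    overlap such primitives in time, which this simple sketch does not do."""
    pts = [np.asarray(p, dtype=float) for p in points]
    segments = [minimum_jerk(a, b, n) for a, b in zip(pts[:-1], pts[1:])]
    return np.vstack(segments)

# Usage: a rough 'S'-like stroke defined by four via-points.
trace = stroke_through([(0, 0), (1, 1), (0, 2), (1, 3)])
print(trace.shape)  # (150, 2) sampled pen positions
```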

    Applications of Mathematical Modelling in Oncolytic Virotherapy and Immunotherapy

    Cancer is a devastating disease that touches almost everyone, and finding effective treatments presents a highly complex problem requiring extensive multidisciplinary research. Mathematical modelling can provide insight into both cancer formation and treatment. A range of techniques is developed in this thesis to investigate two promising therapies: oncolytic virotherapy, and combined oncolytic virotherapy and immunotherapy. Oncolytic virotherapy endeavours to eradicate cancer cells by exploiting the capacity of viruses to induce cell death. Building on this premise, combined oncolytic virotherapy and immunotherapy aims to harness and stimulate the immune system's inherent ability to recognise and destroy cancerous cells. Using deterministic and agent-based mathematical modelling, perturbations of treatment characteristics are investigated and optimal treatment protocols are suggested. An integro-differential equation with distributed parameters is developed to characterise the function of the E1B genes in an oncolytic adenovirus. Subsequently, a bifurcation analysis of a coupled system of ordinary differential equations for oncolytic virotherapy reveals regions of bistability, where increased injections can result in either tumour eradication or tumour stabilisation. Through an extensive hierarchical optimisation to multiple data sets, drawn from in vitro and in vivo modelling, gel-release of a combined oncolytic virotherapy and immunotherapy treatment is optimised. Additionally, using an agent-based modelling approach, delayed infection by an intratumourally administered virus is shown to be able to reduce tumour burden. This thesis develops new mathematical models that can be applied to a range of cancer therapies and suggests engineered treatment designs that can significantly advance current therapies and improve treatments.
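
    For readers unfamiliar with such models, the following is a generic three-compartment oncolytic virotherapy system of ordinary differential equations (uninfected tumour cells, infected cells, free virus), integrated with SciPy. It is a textbook-style sketch under assumed parameter values, not the specific models, parameters, or data fits developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def virotherapy_rhs(t, y, r=0.3, K=1e3, beta=1e-3, delta=0.5, burst=10.0, clear=0.8):
    U, I, V = y                      # uninfected tumour cells, infected cells, free virus
    infection = beta * U * V
    dU = r * U * (1.0 - (U + I) / K) - infection    # logistic growth minus infection
    dI = infection - delta * I                      # infected cells lyse at rate delta
    dV = burst * delta * I - clear * V - infection  # lysis releases virus; virus decays
    return [dU, dI, dV]

sol = solve_ivp(virotherapy_rhs, (0.0, 60.0), y0=[500.0, 0.0, 50.0])
U, I, V = sol.y
print(f"final tumour burden: {U[-1] + I[-1]:.1f} cells (from 500)")
```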

    Intelligent Computational Transportation

    Transportation is ubiquitous in our world, and numerous researchers dedicate great effort to a vast range of transportation research topics. The purpose of this dissertation is to investigate and address three transportation problems, concerning geographic discretization, automatic pavement surface examination, and traffic flow simulation, using advanced computational technologies. Many applications require a discretized 2D geographic map such that local information can be accessed efficiently. For example, map matching, which aligns a sequence of observed positions to a real-world road network, needs to find all the road segments near each individual position. To this end, the map is discretized into cells and each cell retains a list of road segments coincident with it. An efficient method is proposed to form such lists for the cells without costly overlapping tests. Furthermore, the method can be easily extended to 3D scenarios for fast triangle mesh voxelization. Pavement surface distress conditions are critical inputs for quantifying roadway infrastructure serviceability. Existing computer-aided automatic examination techniques are mainly based on 2D image analysis or 3D georeferenced data sets, but information loss and extremely high costs impede their effectiveness and applicability. In this study, a cost-effective Kinect-based approach is proposed for 3D pavement surface reconstruction and cracking recognition. Various cracking measurements, such as alligator cracking, transverse cracking, and longitudinal cracking, are identified and recognized, and their severity is examined based on associated geometrical features. Smart transportation is one of the core components of modern urbanization processes. In this context, the Connected Autonomous Vehicle (CAV) system presents a promising solution towards enhanced traffic safety and mobility through state-of-the-art wireless communications and autonomous driving techniques. Due to the different natures of CAVs and conventional Human-Driven Vehicles (HDVs), it is believed that CAV-enabled transportation systems will revolutionize the existing understanding of network-wide traffic operations and re-establish traffic flow theory. This study presents a new continuum dynamics model for the future CAV-enabled traffic system, realized by encapsulating mutually-coupled vehicle interactions using virtual internal and external forces. A Smoothed Particle Hydrodynamics (SPH)-based numerical simulation and an interactive traffic visualization framework are also developed.
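
    The grid-indexing idea for map matching can be illustrated with a small Python sketch: bin every road segment into the cells it passes through, so that a query at an observed position only needs to test the segments listed in its cell. The dense-sampling strategy below is a naive stand-in; the dissertation's method builds these lists without overlap tests and extends to 3D voxelization. The road coordinates and cell size are invented for the example.

```python
from collections import defaultdict
import math

def build_cell_index(segments, cell_size):
    """Map (i, j) grid cells to the ids of road segments passing through them."""
    index = defaultdict(set)
    for seg_id, ((x0, y0), (x1, y1)) in enumerate(segments):
        length = math.hypot(x1 - x0, y1 - y0)
        n = max(1, int(math.ceil(length / (0.5 * cell_size))))  # sample every half cell
        for k in range(n + 1):
            t = k / n
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            index[(int(x // cell_size), int(y // cell_size))].add(seg_id)
    return index

# Usage: two road segments on a 10 m grid; a map-matching query would only test
# the candidate segments listed in the cell containing the observed position.
roads = [((0.0, 0.0), (95.0, 5.0)), ((50.0, -40.0), (55.0, 60.0))]
index = build_cell_index(roads, cell_size=10.0)
print(sorted(index[(5, 0)]))  # both segments pass through the cell covering x in [50,60), y in [0,10)
```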

    Automated generation of geometrically-precise and semantically-informed virtual geographic environments populated with spatially-reasoning agents

    Multi-Agent Geo-Simulation (GSMA) is a paradigm for modelling and simulating dynamic phenomena in a variety of application domains such as transportation, telecommunications, and the environment. GSMA is used to study and analyse phenomena involving a large number of simulated actors (implemented as agents) that evolve in, and interact with, an explicit representation of space called a Virtual Geographic Environment (EGV). In order to interact with its geographic environment, which may be dynamic, complex and extensive (large-scale), an agent must first have a detailed representation of it. Classical EGVs are generally limited to a geometric representation of the real world, leaving aside the topological and semantic information that characterises it. This results, on the one hand, in implausible multi-agent simulations and, on the other hand, in reduced spatial-reasoning capabilities for situated agents. Path planning is a typical example of the spatial reasoning an agent may need in a GSMA. Classical path-planning approaches are limited to computing an obstacle-free path linking two positions in space. These approaches take into account neither the characteristics of the environment (topological and semantic) nor those of the agents (types and capabilities). Situated agents therefore lack the means to acquire the knowledge about the virtual environment needed to make informed spatial decisions. To address these limitations, we propose a new approach for automatically generating Informed Virtual Geographic Environments (EGVI) using data provided by Geographic Information Systems (GIS), enriched with semantic information, in order to produce accurate and more realistic GSMAs. In addition, we present a hierarchical path-planning algorithm that takes advantage of the enriched and optimised description of the EGVI to provide agents with a path that accounts for both the characteristics of their virtual environment and their own types and capabilities. Finally, we propose an approach for managing knowledge about the virtual environment that aims to support informed decision-making and spatial reasoning by situated agents.
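
    To illustrate agent-aware path planning on a semantically enriched environment (a basic sketch, not the hierarchical algorithm proposed in the thesis), the snippet below runs a Dijkstra search in which edge costs are scaled by how traversable each semantic label is for a given agent type; the graph, labels, and traversability tables are hypothetical.

```python
import heapq

def plan_path(graph, semantics, traversability, start, goal):
    """Dijkstra search where edge cost = length / traversability[terrain] for this agent."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, length in graph[node]:
            factor = traversability.get(semantics[(node, nbr)], 0.0)
            if factor <= 0.0:  # terrain this agent type cannot traverse at all
                continue
            nd = d + length / factor
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(queue, (nd, nbr))
    return None, float("inf")

# Usage: a pedestrian avoids the 'road' edges, a vehicle prefers them (hypothetical labels).
graph = {"A": [("B", 1.0), ("C", 1.0)], "B": [("D", 1.0)], "C": [("D", 1.0)], "D": []}
semantics = {("A", "B"): "road", ("B", "D"): "road",
             ("A", "C"): "footpath", ("C", "D"): "footpath"}
pedestrian = {"footpath": 1.0, "road": 0.0}
vehicle = {"road": 1.0, "footpath": 0.2}
print(plan_path(graph, semantics, pedestrian, "A", "D"))  # (['A', 'C', 'D'], 2.0)
print(plan_path(graph, semantics, vehicle, "A", "D"))     # (['A', 'B', 'D'], 2.0)
```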

    Visualisation of Large-Scale Call-Centre Data

    The contact centre industry employs 4% of the entire United Kingdom and United States' working population and generates gigabytes of operational data that require analysis to provide insight and to improve efficiency. This thesis is the result of a collaboration with QPC Limited, who provide data collection and analysis products for call centres. They provided a large data set featuring almost 5 million calls to be analysed. This thesis utilises novel visualisation techniques to create tools for the exploration of the large, complex call centre data set and to facilitate unique observations into the data. A survey of information visualisation books is presented, providing a thorough background of the field. Following this, a feature-rich application that visualises large call centre data sets using scatterplots that support millions of points is presented. The application utilises both CPU and GPU acceleration for processing and filtering and is exhibited with millions of call events. This is expanded upon with the use of glyphs to depict agent behaviour in a call centre. A technique is developed to cluster overlapping glyphs into a single parent glyph, dependent on zoom level and a customizable distance metric. This hierarchical glyph represents the mean value of all child agent glyphs, removing overlap and reducing visual clutter. A novel technique for visualising individually tailored glyphs using a Graphics Processing Unit is also presented, and demonstrated rendering over 100,000 glyphs at interactive frame rates. An open-source code example is provided for reproducibility. Finally, a novel interaction and layout method is introduced for improving the scalability of chord diagrams to visualise call transfers. An exploration of sketch-based methods for showing multiple links and direction is made, and a sketch-based brushing technique for filtering is proposed. Feedback from domain experts in the call centre industry is reported for all applications developed.
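
    The glyph-clustering step can be sketched in a few lines (an illustration only; the thesis's technique and distance metric may differ): glyphs closer than a zoom-dependent threshold are greedily merged into a parent glyph carrying the mean position and value of its children. The agent data and threshold rule below are assumptions made for the example.

```python
import math

def cluster_glyphs(glyphs, zoom, base_radius=1.0):
    """glyphs: list of dicts with 'x', 'y', 'value'. Returns parent glyphs after merging."""
    threshold = 2.0 * base_radius / zoom  # spacing that counts as overlapping at this zoom
    clusters = []
    for g in glyphs:
        for c in clusters:
            if math.hypot(g["x"] - c["x"], g["y"] - c["y"]) < threshold:
                # Merge: update the running mean position and value of the parent glyph.
                n = c["count"]
                c["x"] = (c["x"] * n + g["x"]) / (n + 1)
                c["y"] = (c["y"] * n + g["y"]) / (n + 1)
                c["value"] = (c["value"] * n + g["value"]) / (n + 1)
                c["count"] = n + 1
                break
        else:
            clusters.append({**g, "count": 1})
    return clusters

agents = [{"x": 0.0, "y": 0.0, "value": 10.0}, {"x": 0.5, "y": 0.0, "value": 30.0},
          {"x": 5.0, "y": 5.0, "value": 20.0}]
print(len(cluster_glyphs(agents, zoom=1.0)))   # 2: the first two glyphs merge when zoomed out
print(len(cluster_glyphs(agents, zoom=10.0)))  # 3: all glyphs stay separate when zoomed in
```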

    Processing mesh animations: from static to dynamic geometry and back

    Static triangle meshes are the representation of choice for artificial objects as well as for digital replicas of real objects, and they have proven to be a solid foundation for further processing. Although triangle meshes are convenient in general, their discrete approximation of reality may seem to be a downside. In fact, the opposite is true: the approximation of the real object's shape remains the same even if we willfully change the vertex positions in the mesh, which allows us to optimize the mesh in this way. Given modern acquisition methods, such a step is always beneficial, often even required, prior to further processing of the acquired triangle mesh. Therefore, we present a general framework for optimizing surface meshes with respect to various target criteria. Because of the simplicity and efficiency of the setup, it can be adapted to a variety of applications. Although this framework was initially designed for single static meshes, applying it to a set of meshes is straightforward. For example, we convert a set of meshes into compatible ones and use them as a basis for creating dynamic geometry. Consequently, we propose an interpolation method which is able to produce visually plausible interpolation results, even if the compatible input meshes differ by large rotations. The method can be applied to any number of input vertex configurations and, due to the utilization of a hierarchical scheme, the approach is fast and can be used for very large meshes. Furthermore, we consider the opposite direction. Given an animation sequence, we propose a pre-processing algorithm that considerably reduces the number of meshes required to describe the sequence, thus yielding a compact representation. Our method is based on a clustering and classification approach, which can be utilized to automatically find the most prominent meshes of the sequence. The original meshes can then be expressed as linear combinations of these few representative meshes with only small approximation errors. Finally, we investigate the shape space spanned by those few meshes and show how to apply different interpolation schemes to create other shape spaces which are not based on vertex coordinates. We conclude with a careful analysis of these shape spaces and their usability for a compact representation of an animation sequence.
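
    The compact-representation idea can be illustrated with a short least-squares sketch (not the clustering and classification method of the thesis): given per-frame vertex arrays and a chosen set of representative frames, every other frame is expressed as a linear combination of the representatives and the reconstruction error is reported. The synthetic two-pose animation below is an assumption for demonstration.

```python
import numpy as np

def fit_linear_combinations(frames, representative_ids):
    """frames: array (F, V, 3). Returns weights (F, R) and the RMS reconstruction error."""
    F, V, _ = frames.shape
    flat = frames.reshape(F, V * 3)      # each frame as one long coordinate vector
    basis = flat[representative_ids]     # (R, V*3) representative meshes
    # Least-squares weights so that weights @ basis approximates every frame.
    weights, *_ = np.linalg.lstsq(basis.T, flat.T, rcond=None)
    recon = weights.T @ basis
    rms = np.sqrt(np.mean((recon - flat) ** 2))
    return weights.T, rms

# Usage with synthetic data: 20 frames of 100 vertices blending between two poses.
rng = np.random.default_rng(0)
pose_a, pose_b = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
t = np.linspace(0.0, 1.0, 20)[:, None, None]
frames = (1.0 - t) * pose_a + t * pose_b
weights, rms = fit_linear_combinations(frames, representative_ids=[0, 19])
print(weights.shape, f"rms error = {rms:.2e}")  # (20, 2) and an error near zero
```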

    Simulating The Impact of Emissions Control on Economic Productivity Using Particle Systems and Puff Dispersion Model

    A simulation platform is developed for quantifying the change in productivity of an economy under passive and active emission control mechanisms. The program uses object-oriented programming to code a collection of objects resembling typical stakeholders in an economy. These objects include firms, markets, transportation hubs, and boids, which are distributed over a 2D surface. Firms are connected using a modified Prim's minimum spanning tree algorithm, followed by implementation of an all-pairs shortest-path Floyd-Warshall algorithm for navigation purposes. Firms use a non-linear production function for the transformation of land, labor, and capital inputs into finished products. A GA-based Vehicle Routing Problem (VRP) solver with multiple pickups and drop-offs is implemented for efficient delivery of commodities across multiple nodes in the economy. Boids are autonomous agents which perform several functions in the economy, including labor, consumption, renting, saving, and investing. Each boid is programmed with several microeconomic functions, including intertemporal choice models, Hicksian and Marshallian demand functions, and a labor-leisure model. The simulation uses a puff dispersion model to simulate the advection and diffusion of emissions from point and mobile sources in the economy. A dose-response function is implemented to quantify the depreciation of a boid's health upon contact with these emissions. The impact of emissions control on productivity and air quality is examined through a series of passive and active emission control scenarios. Passive control examines the impact of various shutdown times on economic productivity and the rate of emissions exposure experienced by boids. The active control strategy examines the effects of acceptable levels of emissions exposure on economic productivity. The key findings from 7 different passive and active emission control scenarios indicate that the rate of productivity and consumption in an economy declines with increased scrutiny of emissions from point sources. In terms of exposure rates, point sources may not be the primary contributor to average exposure rates; however, they significantly impact the maximum exposure rate experienced by a boid. Tightening of emissions control also negatively impacts the transportation sector by reducing the asset utilization rate as well as the total volume of goods transported across the economy.
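
    The dispersion component can be illustrated with a textbook Gaussian puff sketch (the thesis's puff model, spread parameters, and dose-response function may differ): puffs are released from a point source at fixed intervals, advected by the wind, spread as they age, and their concentrations are accumulated into an exposure dose at a receptor. All numerical values below are invented for the example.

```python
import math

def puff_concentration(receptor, puff_centre, mass, sigma_xy, sigma_z):
    """Concentration at a receptor from a single Gaussian puff."""
    dx, dy, dz = (r - c for r, c in zip(receptor, puff_centre))
    norm = mass / ((2.0 * math.pi) ** 1.5 * sigma_xy * sigma_xy * sigma_z)
    return norm * math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_xy ** 2)
                           - dz * dz / (2.0 * sigma_z ** 2))

def simulate_dose(receptor, source, wind, release_rate, dt=1.0, steps=600, spread=0.5):
    """Accumulated exposure at a receptor from puffs released every dt seconds."""
    puffs, dose = [], 0.0
    for _ in range(steps):
        puffs.append({"pos": list(source), "age": 0.0, "mass": release_rate * dt})
        for p in puffs:
            p["pos"][0] += wind[0] * dt      # advection by the wind
            p["pos"][1] += wind[1] * dt
            p["age"] += dt
            sigma = spread * p["age"]        # puff grows as it ages (simple linear rule)
            dose += dt * puff_concentration(receptor, tuple(p["pos"]), p["mass"], sigma, sigma)
    return dose

# Usage: a receptor 200 m downwind of a 20 m stack emitting 1 unit/s in a 2 m/s wind.
dose = simulate_dose(receptor=(200.0, 0.0, 1.5), source=(0.0, 0.0, 20.0),
                     wind=(2.0, 0.0), release_rate=1.0)
print(f"accumulated exposure at the receptor: {dose:.3e}")
```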