Computer-aided design for multilayer microfluidic chips
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (leaves 63-66).
Microfluidic chips fabricated by multilayer soft lithography are emerging as "lab-on-a-chip" systems that can automate biological experiments. As we become able to build more complex microfluidic chips with thousands of components, it becomes possible to build devices that can be programmatically changed to solve multiple problems. However, the current design methodology does not scale. In this thesis, we introduce design automation techniques for multilayer soft lithography microfluidics. Our work focuses on automating the design of the control layer. We present a method to define an Instruction Set Architecture as a hierarchical composition of flows. From this specification, we automatically infer and generate the logic and signals to control the chip. To complete the design automation of the control layer, we suggest a routing algorithm to connect control channels to peripheral I/O ports. To the microfluidic community, we offer a free computer-aided design tool, Micado, which implements our ideas for automation in a practical plug-in to AutoCAD. We have evaluated our work on real chips and our tool has been used successfully by microfluidic designers.
by Nada Amin. M.Eng.
Fluigi: an end-to-end software workflow for microfluidic design
One goal of synthetic biology is to design and build genetic circuits in living cells for a range of applications with implications in health, materials, and sensing. Computational design methodologies allow for increased performance and reliability of these circuits. Major challenges that remain include increasing the scalability and robustness of engineered biological systems and streamlining and automating the synthetic biology workflow of “specify-design-build-test.”
I summarize the advances in microfluidic technology, particularly microfluidic large scale integration, that can be used to address the challenges facing each step of the synthetic biology workflow for genetic circuits. Microfluidic technologies allow precise control over the flow of biological content within microscale devices, and thus may provide more reliable and scalable construction of synthetic biological systems. However, adoption of microfluidics for synthetic biology has been slow due to the expert knowledge and equipment needed to fabricate and control devices. I present an end-to-end workflow for a
computer-aided design (CAD) tool, Fluigi, for designing microfluidic devices and for integrating biological Boolean genetic circuits with microfluidics. The workflow starts with a "netlist" input describing the connectivity of the microfluidic device to be designed, and proceeds through placement, routing, and design rule checking in a process analogous to electronic CAD. The output is an image of the device for printing as a mask for photolithography or for computer numerical control (CNC) machining. I also introduce a second workflow to allocate biological circuits to microfluidic devices and to generate the valve control scheme that enables biological computation on the device.
I used the CAD workflow to generate 15 designs, including gradient generators, rotary pumps, and devices for housing biological circuits. I fabricated two designs, a gradient generator with CNC machining and a device for computing a biological XOR function with multilayer soft lithography, and verified their function with dye. My efforts here show a first end-to-end demonstration of an extensible and foundational microfluidic CAD tool, from design concept to fabricated device. This work provides a platform that, when completed, will automatically synthesize high-level functional and performance specifications into fully realized microfluidic hardware, control software, and synthetic biological wetware.
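The netlist-driven flow described above (placement, routing, and design rule checking) can be illustrated with a deliberately simplified sketch in Python. The grid model, the L-shaped routing, and all names here are illustrative assumptions, not Fluigi's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    x: int  # grid coordinates assigned by a (here: manual) placement step
    y: int

def manhattan_route(a, b):
    """Route an L-shaped channel: horizontal leg first, then vertical leg."""
    cells = {(x, a.y) for x in range(min(a.x, b.x), max(a.x, b.x) + 1)}
    cells |= {(b.x, y) for y in range(min(a.y, b.y), max(a.y, b.y) + 1)}
    return cells

def design_rule_check(routes, component_cells):
    """Report grid cells outside components that two channels both claim."""
    violations, claimed = [], {}
    for name, cells in routes.items():
        for cell in cells - component_cells:
            if cell in claimed:
                violations.append((cell, claimed[cell], name))
            claimed[cell] = name
    return violations

# Toy netlist: inlet -> mixer -> outlet.
inlet = Component("inlet", 0, 0)
mixer = Component("mixer", 3, 0)
outlet = Component("outlet", 3, 3)
component_cells = {(c.x, c.y) for c in (inlet, mixer, outlet)}
routes = {
    "c1": manhattan_route(inlet, mixer),
    "c2": manhattan_route(mixer, outlet),
}
print(design_rule_check(routes, component_cells))  # no overlap: []
```

A real tool would of course search over placements and detours when channels collide; the point here is only the pipeline shape: netlist in, routed geometry out, with a rule check in between.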
Design automation in synthetic biology : a dual evolutionary strategy
PhD Thesis. Synthetic biology offers a new horizon in designing complex systems. However, unprecedented complexity hinders the development of biological systems to their full potential. Mitigating complexity by adopting design principles from engineering and computer science has resulted in some success. For example, modularisation to foster reuse of design elements, and the use of computer-assisted design tools, have helped contain complexity to an extent. Nevertheless, these design practices are still limited, due to their heavy dependence on rational decision making by human designers. The issue with rational design approaches arises from the challenge of dealing with highly complex biological systems of which we currently do not have a complete understanding. Systematic processes that can algorithmically find design solutions would be better able to cope with the uncertainties posed by high levels of design complexity. A new framework for enabling design automation in synthetic biology was investigated. The framework works by projecting design problems into search problems, and by searching for design solutions using a dual-evolutionary approach that combines the respective power of the in vivo and in silico design domains. Proof-of-concept ideas, software, and hardware were developed to exemplify the key technologies necessary to realise the dual-evolutionary approach. Areas investigated as part of this research included single-cell-level microfluidics; programmatic data collection, processing, and analysis; molecular devices supporting solution search in vivo; and mathematical modelling. This somewhat eclectic collection of research themes was shown to work together to provide the necessary means with which to design and characterise biological systems in a systematic fashion.
Towards a Theory of Droplet-Mixing Graphs in Microfluidics
In this work, we study the problem of fluid mixing in microfluidic chips. The motivation for studying this problem comes from the process of sample preparation for chemical, biological, medical, and environmental experiments, which often requires preparing fluid mixtures with desired concentrations. We assume that fluids are manipulated in discrete units called droplets. The input set of droplets consists of two distinct fluids: the reactant, which is the fluid of interest, and the buffer fluid that is used to dilute it. The goal is to produce a target set of droplets with prespecified reactant concentrations. In the model we study, the mixing process in a microfluidic chip can be abstractly represented as a mixing graph. A mixing graph is a collection of micro-mixers (nodes) connected by micro-channels (edges) that converts an input set of droplets I into a set of output droplets T by applying a sequence of 1:1 mixing operations. This graph may also produce some waste, that is, superfluous droplets of fluid not used in the target set. The computational complexity of most natural questions regarding such mixing graphs remains open. For example, it is not even known whether it is decidable whether a given target set can be produced without waste. Current work in the literature contains only heuristic approaches that compute mixing graphs while attempting to optimize certain objectives, including minimizing waste, reactant usage, the depth of the graph, and more.
Our first contribution is an efficient algorithm for computing mixing graphs for single-droplet targets. Our algorithm produces significantly less waste than state-of-the-art algorithms in an experimental comparison. We also provide a bound on its worst-case performance that is significantly better than those for earlier algorithms. Our second result concerns the variant of the problem where the objective is to design a mixing graph that perfectly mixes a collection of input droplets with arbitrary concentrations. We provide a complete characterization of the input sets for which such graphs exist, and an efficient algorithm to construct these graphs. In addition, we provide several other results about the properties of mixing graphs and the computational complexity of computing mixing graphs of fixed depth.
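The 1:1 mixing primitive at the heart of this model is easy to state precisely: two input droplets merge and split into two droplets of the averaged concentration. A minimal sketch using exact fractions to avoid floating-point drift (the three-mix tree below is an illustrative example, not an algorithm from the paper):

```python
from fractions import Fraction

# One 1:1 mixing operation: consume two droplets, produce two droplets
# whose concentration is the average of the inputs.
def mix(a, b):
    c = (a + b) / 2
    return c, c

# Dilute a pure reactant droplet (concentration 1) with buffer (0)
# through a small mixing tree to reach concentration 3/8.
half, _ = mix(Fraction(1), Fraction(0))   # 1/2
quarter, _ = mix(half, Fraction(0))       # 1/4
target, waste = mix(half, quarter)        # (1/2 + 1/4) / 2 = 3/8
print(target)  # 3/8
```

Note that each mix conserves droplet count, so producing one target droplet here leaves behind droplets not in the target set; those leftover droplets are exactly the "waste" the abstract's algorithms try to minimize.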
Acceleration of image-processing algorithms for single-particle analysis with electron microscopy
Unpublished doctoral thesis co-supervised by Masaryk University (Czech Republic) and the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 24-10-2022.
Cryogenic Electron Microscopy (Cryo-EM) is a vital field in current structural biology. Unlike X-ray
crystallography and Nuclear Magnetic Resonance, it can be used to analyze membrane proteins and
other samples with overlapping spectral peaks. However, one of the significant limitations of Cryo-EM
is the computational complexity. Modern electron microscopes can produce terabytes of data per single
session, from which hundreds of thousands of particles must be extracted and processed to obtain a
near-atomic resolution of the original sample. Many existing software solutions use high-Performance
Computing (HPC) techniques to bring these computations to the realm of practical usability. The
common approach to acceleration is parallelizing the processing, but in practice, we face many
complications, such as problem decomposition, data distribution, load scheduling, balancing, and
synchronization. Utilization of various accelerators further complicates the situation, as heterogeneous
hardware brings additional caveats, for example, limited portability, under-utilization due to synchronization,
and sub-optimal code performance due to missing specialization.
This dissertation, structured as a compendium of articles, aims to improve the algorithms used in Cryo-EM, especially Single Particle Analysis (SPA). We focus on single-node performance optimizations, using techniques either available in or developed for the HPC field, such as heterogeneous computing and autotuning, which potentially requires the formulation of novel algorithms. The
secondary goal of the dissertation is to identify the limitations of state-of-the-art HPC techniques. Since
the Cryo-EM pipeline consists of multiple distinct steps targeting different types of data, there is no
single bottleneck to be solved. As such, the presented articles show a holistic approach to performance
optimization.
First, we give details on the GPU acceleration of specific programs. The achieved speedup is due to the higher performance of the GPU, adjustments of the original algorithms to it, and the application of novel algorithms. More specifically, we provide implementation details of programs for movie alignment, 2D classification, and 3D reconstruction that have been sped up by an order of magnitude compared to their original multi-CPU implementations, or accelerated sufficiently to be used on-the-fly. In addition to these three programs, multiple other programs from XMIPP, an actively used, open-source software package, have been accelerated and improved.
Second, we discuss our contribution to HPC in the form of autotuning. Autotuning is the ability of software to adapt to a changing environment, i.e., to changing input or executing hardware. Towards that goal, we
present cuFFTAdvisor, a tool that proposes and, through autotuning, finds the best configuration of the
cuFFT library for given constraints of input size and plan settings. We also introduce a benchmark set
of ten autotunable kernels for important computational problems implemented in OpenCL or CUDA,
together with the introduction of complex dynamic autotuning to the KTT tool.
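The empirical autotuning idea described here (measure candidate configurations, keep the fastest) can be sketched generically. The code below is a hypothetical toy, not the KTT or cuFFTAdvisor API; the tunable "kernel" is an ordinary Python function and the search is exhaustive:

```python
import time

def autotune(run, search_space, repeats=3):
    """Time each candidate configuration and return the fastest one."""
    best_cfg, best_time = None, float("inf")
    for cfg in search_space:
        # Take the minimum over several runs to reduce timing noise.
        elapsed = []
        for _ in range(repeats):
            start = time.perf_counter()
            run(cfg)
            elapsed.append(time.perf_counter() - start)
        if min(elapsed) < best_time:
            best_cfg, best_time = cfg, min(elapsed)
    return best_cfg, best_time

# Toy kernel: sum a list in chunks; the chunk size is the tunable parameter,
# standing in for tile sizes or FFT plan settings in a real GPU kernel.
data = list(range(100_000))
def chunked_sum(cfg):
    n = cfg["chunk"]
    return sum(sum(data[i:i + n]) for i in range(0, len(data), n))

space = [{"chunk": n} for n in (64, 512, 4096)]
cfg, best = autotune(chunked_sum, space)
print(cfg)
```

Real autotuners search far larger spaces and must prune them; the dynamic autotuning mentioned above additionally re-runs this search at runtime as the input or hardware changes.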
Third, we propose Umpalumpa, an image processing framework that combines a task-based runtime system, data-centric architecture, and dynamic autotuning. The proposed framework allows for writing complex workflows which automatically use available hardware resources and adjust to different hardware and data, while remaining easy to maintain.
The project that gave rise to these results received the support of a fellowship from the “la Caixa” Foundation (ID 100010434). The fellowship code is LCF/BQ/DI18/11660021.
This project has received funding from the European Union’s Horizon 2020 research and innovation
programme under the Marie Skłodowska-Curie grant agreement No. 71367