89 research outputs found

    Shaping bacterial population behavior through computer-interfaced control of individual cells

    This is the final version. Available from Springer Nature via the DOI in this record. Strains and data are available from the authors upon request. Custom scripts for the described setup are available as Supplementary Software. Bacteria in groups vary individually, and interact with other bacteria and the environment to produce population-level patterns of gene expression. Investigating such behavior in detail requires measuring and controlling populations at the single-cell level alongside precisely specified interactions and environmental characteristics. Here we present an automated, programmable platform that combines image-based gene expression and growth measurements with on-line optogenetic expression control for hundreds of individual Escherichia coli cells over days, in a dynamically adjustable environment. This integrated platform broadly enables experiments that bridge individual and population behaviors. We demonstrate: (i) population structuring by independent closed-loop control of gene expression in many individual cells, (ii) cell-cell variation control during antibiotic perturbation, (iii) hybrid bio-digital circuits in single cells, and (iv) freely specifiable digital communication between individual bacteria. These examples showcase the potential for real-time integration of theoretical models with measurement and control of many individual cells to investigate and engineer microbial population behavior. Funding: European Union's Seventh Framework Programme, Austrian Science Fund, Agence Nationale de la Recherche.
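
    Below is a minimal sketch, in Python, of the kind of per-cell closed loop the platform described above makes possible: at every imaging cycle, each tracked cell's measured expression is compared with that cell's own target, and the optogenetic light aimed at it is switched accordingly (simple bang-bang control with a deadband). This is an illustration only, not the paper's Supplementary Software; measure_fluorescence() and set_light() are hypothetical stand-ins for the microscope readout and the light-projection hardware.

```python
# Hypothetical per-cell bang-bang controller; not the authors' software.
from dataclasses import dataclass

@dataclass
class CellController:
    cell_id: int
    target: float            # desired expression level (a.u.)
    deadband: float = 0.05   # hysteresis band to avoid rapid toggling

    def decide(self, measured: float, light_is_on: bool) -> bool:
        """Return the new light state for this cell."""
        if measured < self.target - self.deadband:
            return True        # below target: induce expression
        if measured > self.target + self.deadband:
            return False       # above target: stop induction
        return light_is_on     # inside the deadband: keep current state

def control_step(controllers, measure_fluorescence, set_light, light_states):
    """One imaging/actuation cycle over all tracked cells.

    measure_fluorescence(cell_id) and set_light(cell_id, on) are assumed
    interfaces to the imaging and light-patterning hardware.
    """
    for ctl in controllers:
        level = measure_fluorescence(ctl.cell_id)            # image-based readout
        new_state = ctl.decide(level, light_states[ctl.cell_id])
        light_states[ctl.cell_id] = new_state
        set_light(ctl.cell_id, new_state)                    # actuate this cell only
    return light_states
```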

    Information transmission in genetic regulatory networks: a review

    Genetic regulatory networks enable cells to respond to changes in internal and external conditions by dynamically coordinating their gene expression profiles. Our ability to make quantitative measurements in these biochemical circuits has deepened our understanding of what kinds of computations genetic regulatory networks can perform and with what reliability. These advances have motivated researchers to look for connections between the architecture and function of genetic regulatory networks. Transmitting information between a network's inputs and its outputs has been proposed as one such possible measure of function, relevant in certain biological contexts. Here we summarize recent developments in the application of information theory to gene regulatory networks. We first review basic concepts in information theory necessary to understand recent work. We then discuss the functional complexity of gene regulation, which arises from the molecular nature of the regulatory interactions. We end by reviewing some experiments supporting the view that genetic networks responsible for early development of multicellular organisms might be maximizing transmitted 'positional' information. Comment: Submitted to J Phys: Condens Matter, 31 pages.
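
    For reference, the quantity this literature studies is the mutual information between an input (e.g., a transcription factor concentration c) and an output (e.g., the expression level g of a target gene), measured in bits:

        I(c; g) = \int \mathrm{d}c\, \mathrm{d}g\; P(c, g)\, \log_2 \frac{P(c, g)}{P(c)\, P(g)} ,

    and the channel capacity discussed in the next entry is the maximum of I(c; g) over input distributions P(c) for a fixed noisy input-output relation P(g | c).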

    Information capacity of genetic regulatory elements

    Changes in a cell's external or internal conditions are usually reflected in the concentrations of the relevant transcription factors. These proteins in turn modulate the expression levels of the genes under their control and sometimes need to perform non-trivial computations that integrate several inputs and affect multiple genes. At the same time, the activities of the regulated genes would fluctuate even if the inputs were held fixed, as a consequence of the intrinsic noise in the system, and such noise must fundamentally limit the reliability of any genetic computation. Here we use information theory to formalize the notion of information transmission in simple genetic regulatory elements in the presence of physically realistic noise sources. The dependence of this "channel capacity" on noise parameters, cooperativity and cost of making signaling molecules is explored systematically. We find that, at least in principle, capacities higher than one bit should be achievable and that consequently genetic regulation is not limited to the use of binary, or "on-off", components. Comment: 17 pages, 9 figures.
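
    As a concrete illustration of how such a channel capacity can be computed numerically, the sketch below discretizes a noisy input-output relation and runs the Blahut-Arimoto algorithm. The Hill-function mean response and the constant Gaussian output noise are illustrative assumptions, not the more detailed noise models analyzed in the paper.

```python
# Minimal sketch: capacity of a noisy Hill-type regulatory element via Blahut-Arimoto.
# Parameter values are hypothetical.
import numpy as np

def hill(c, K=1.0, n=2.0):
    """Mean expression as a Hill function of activator concentration c."""
    return c**n / (K**n + c**n)

def channel_matrix(c_grid, g_grid, sigma=0.1):
    """P(g | c): Gaussian output noise of width sigma around the Hill mean."""
    mean = hill(c_grid)[:, None]                          # shape (Nc, 1)
    p = np.exp(-(g_grid[None, :] - mean)**2 / (2 * sigma**2))
    return p / p.sum(axis=1, keepdims=True)               # normalize over outputs

def blahut_arimoto(W, tol=1e-9, max_iter=10000):
    """Return channel capacity in bits and the optimal input distribution."""
    Nc = W.shape[0]
    p = np.full(Nc, 1.0 / Nc)
    for _ in range(max_iter):
        q = p[:, None] * W                                 # proportional to p(c) P(g|c)
        q /= q.sum(axis=0, keepdims=True)                  # posterior q(c|g)
        logs = np.sum(W * np.log(q + 1e-300), axis=1)      # sum_g P(g|c) ln q(c|g)
        p_new = np.exp(logs - logs.max())
        p_new /= p_new.sum()
        converged = np.max(np.abs(p_new - p)) < tol
        p = p_new
        if converged:
            break
    q = p[:, None] * W
    q /= q.sum(axis=0, keepdims=True)
    I = np.sum(p[:, None] * W * (np.log2(q + 1e-300) - np.log2(p[:, None] + 1e-300)))
    return I, p

c_grid = np.linspace(0.0, 4.0, 200)    # input TF concentrations (arbitrary units)
g_grid = np.linspace(-0.3, 1.3, 400)   # output expression levels
W = channel_matrix(c_grid, g_grid, sigma=0.1)
capacity, p_opt = blahut_arimoto(W)
# For these illustrative parameters the capacity comes out somewhat above one bit.
print(f"capacity = {capacity:.2f} bits")
```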

    Optimizing information flow in small genetic networks. II: Feed forward interactions

    Central to the functioning of a living cell is its ability to control the readout or expression of information encoded in the genome. In many cases, a single transcription factor protein activates or represses the expression of many genes. As the concentration of the transcription factor varies, the target genes thus undergo correlated changes, and this redundancy limits the ability of the cell to transmit information about input signals. We explore how interactions among the target genes can reduce this redundancy and optimize information transmission. Our discussion builds on recent work [Tkacik et al, Phys Rev E 80, 031920 (2009)], and there are connections to much earlier work on the role of lateral inhibition in enhancing the efficiency of information transmission in neural circuits; for simplicity we consider here the case where the interactions have a feed forward structure, with no loops. Even with this limitation, the networks that optimize information transmission have a structure reminiscent of the networks found in real biological systems.
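
    Schematically, and in the notation of the preceding entries, the optimization studied here is over the parameters of a network in which a single input c regulates K target genes, with the feed-forward restriction allowing each g_i to depend on c and on upstream targets g_j (j < i) but not the reverse:

        \max_{\text{network parameters}} \; I\!\left(c;\, g_1, g_2, \ldots, g_K\right),

    subject to fixed noise levels and resource constraints (this is a schematic statement of the problem, not the paper's exact formulation).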

    Thermodynamics of natural images

    The scale invariance of natural images suggests an analogy to the statistical mechanics of physical systems at a critical point. Here we examine the distribution of pixels in small image patches and show how to construct the corresponding thermodynamics. We find evidence for criticality in a diverging specific heat, which corresponds to large fluctuations in how "surprising" we find individual images, and in the quantitative form of the entropy vs. energy. The energy landscape derived from our thermodynamic framework identifies special image configurations that have intrinsic error correcting properties, and neurons which could detect these features have a strong resemblance to the cells found in primary visual cortex.
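
    One standard way to make the thermodynamic construction concrete, consistent with the abstract's identification of specific heat with fluctuations in "surprise" (the paper's construction may differ in detail), is to assign each image patch x the energy E(x) = -ln P(x), introduce a temperature T through P_T(x) \propto e^{-E(x)/T} = P(x)^{1/T}, and define the specific heat as

        C(T) = \frac{\langle E^2 \rangle_T - \langle E \rangle_T^2}{T^2} ,

    so that at T = 1 the specific heat is simply the variance of the surprise -ln P(x); a peak in C(T) that grows with patch size is the signature of criticality referred to above.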

    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable, that is, the computational cost of spike fitting should scale well with the number of units observed. Our algorithm accomplishes this goal, and is fast, because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
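
    The sketch below illustrates the general idea of overlap-resolving spike identification by greedy template matching and subtraction. It is a deliberately simplified stand-in, not the paper's Bayesian algorithm, which additionally models per-unit waveform variability and exploits the spatial locality of each unit.

```python
# Simplified greedy template matching on a multichannel snippet; illustration only.
import numpy as np

def best_fit(residual, template):
    """Best time offset for one template.

    residual: (n_channels, n_samples); template: (n_channels, t_len).
    Returns (score, offset), where score is the reduction in squared error
    obtained by subtracting the template once at that offset.
    """
    n_ch, n_s = residual.shape
    t_len = template.shape[1]
    norm = np.sum(template ** 2)
    best = (-np.inf, 0)
    for t0 in range(n_s - t_len + 1):
        dot = np.sum(residual[:, t0:t0 + t_len] * template)
        score = 2 * dot - norm          # decrease in ||residual||^2
        if score > best[0]:
            best = (score, t0)
    return best

def greedy_sort(snippet, templates, threshold=0.0, max_spikes=100):
    """Return a list of (unit_index, time_offset) fits, subtracting each accepted fit."""
    residual = snippet.astype(float).copy()
    fits = []
    for _ in range(max_spikes):
        scored = [(*best_fit(residual, tpl), u) for u, tpl in enumerate(templates)]
        score, t0, unit = max(scored)
        if score <= threshold:          # no template reduces the error further
            break
        t_len = templates[unit].shape[1]
        residual[:, t0:t0 + t_len] -= templates[unit]   # explain away this spike
        fits.append((unit, t0))
    return fits
```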

    Natural images from the birthplace of the human eye

    Here we introduce a database of calibrated natural images publicly available through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we acquired about 5000 six-megapixel images of the Okavango Delta of Botswana, a tropical savanna habitat similar to where the human eye is thought to have evolved. Some sequences of images were captured unsystematically while following a baboon troop, while others were designed to vary a single parameter such as aperture, object distance, time of day or position on the horizon. Images are available in the raw RGB format and in grayscale. Images are also available in units relevant to the physiology of human cone photoreceptors, where pixel values represent the expected number of photoisomerizations per second for cones sensitive to long (L), medium (M) and short (S) wavelengths. This database is distributed under a Creative Commons Attribution-Noncommercial Unported license to facilitate research in computer vision, psychophysics of perception, and visual neuroscience. Comment: Submitted to PLoS ONE.

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. Comment: 11 pages, 7 figures.
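
    In the notation commonly used for such models (the paper's exact conventions may differ), the SDME distribution over binary spike/silence words \sigma = (\sigma_1, \ldots, \sigma_N) given a stimulus s combines stimulus-dependent single-cell fields, inherited from the linear-nonlinear model, with stimulus-independent pairwise couplings:

        P(\sigma \mid s) = \frac{1}{Z(s)} \exp\!\Big( \sum_i h_i(s)\, \sigma_i + \sum_{i<j} J_{ij}\, \sigma_i \sigma_j \Big).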

    Methods in Molecular Biology

    Developmental processes are inherently dynamic and understanding them requires quantitative measurements of gene and protein expression levels in space and time. While live imaging is a powerful approach for obtaining such data, it is still a challenge to apply it over long periods of time to large tissues, such as the embryonic spinal cord in mouse and chick. Nevertheless, dynamics of gene expression and signaling activity patterns in this organ can be studied by collecting tissue sections at different developmental stages. In combination with immunohistochemistry, this allows for measuring the levels of multiple developmental regulators in a quantitative manner with high spatiotemporal resolution. The mean protein expression levels over time, as well as embryo-to-embryo variability, can be analyzed. A key aspect of the approach is the ability to compare protein levels across different samples. This requires a number of considerations in sample preparation, imaging and data analysis. Here we present a protocol for obtaining time course data of dorsoventral expression patterns from mouse and chick neural tube in the first 3 days of neural tube development. The described workflow starts from embryo dissection and ends with a processed dataset. Software scripts for data analysis are included. The protocol is adaptable and instructions that allow the user to modify different steps are provided. Thus, the procedure can be altered for analysis of time-lapse images and applied to systems other than the neural tube.
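
    As a hypothetical, simplified illustration of the profile-averaging step (not the protocol's actual analysis scripts), the sketch below maps dorsoventral intensity profiles from individual sections onto a normalized DV axis, resamples them onto a common grid, and summarizes them by their mean and embryo-to-embryo standard deviation.

```python
# Hypothetical sketch of dorsoventral profile averaging across sections/embryos.
import numpy as np

def average_profiles(profiles, n_points=100):
    """profiles: list of 1D intensity arrays along the DV axis (variable lengths).

    Returns (grid, mean, std) on a common normalized DV grid (0 = ventral, 1 = dorsal).
    """
    grid = np.linspace(0.0, 1.0, n_points)
    resampled = []
    for p in profiles:
        x = np.linspace(0.0, 1.0, len(p))        # normalize each section's DV length
        resampled.append(np.interp(grid, x, p))  # resample onto the common grid
    resampled = np.array(resampled)
    return grid, resampled.mean(axis=0), resampled.std(axis=0, ddof=1)
```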

    Trade-Offs and Constraints in Allosteric Sensing

    Sensing extracellular changes initiates signal transduction and is the first stage of cellular decision-making. Yet relatively little is known about why one form of sensing biochemistry has been selected over another. To gain insight into this question, we studied the sensing characteristics of one of the biochemically simplest of sensors: the allosteric transcription factor. Such proteins, common in microbes, directly transduce the detection of a sensed molecule to changes in gene regulation. Using the Monod-Wyman-Changeux model, we determined six sensing characteristics – the dynamic range, the Hill number, the intrinsic noise, the information transfer capacity, the static gain, and the mean response time – as a function of the biochemical parameters of individual sensors and of the number of sensors. We found that specifying one characteristic strongly constrains others. For example, a high dynamic range implies a high Hill number and a high capacity, and vice versa. Perhaps surprisingly, these constraints are so strong that most of the space of characteristics is inaccessible given biophysically plausible ranges of parameter values. Within our approximations, we can calculate the probability distribution of the numbers of input molecules that maximizes information transfer and show that a population of one hundred allosteric transcription factors can in principle distinguish between more than four bands of input concentrations. Our results imply that allosteric sensors are unlikely to have been selected for high performance in one sensing characteristic but for a compromise in the performance of many.
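
    For reference, one common parameterization of the Monod-Wyman-Changeux input-output relation for a sensor with n ligand-binding sites (the paper may use a different but equivalent convention) is

        p_{\mathrm{active}}(c) = \frac{\left(1 + c/K_{\mathrm{A}}\right)^{n}}{\left(1 + c/K_{\mathrm{A}}\right)^{n} + e^{-\beta \Delta\varepsilon}\left(1 + c/K_{\mathrm{I}}\right)^{n}} ,

    where c is the input concentration, K_A and K_I are the dissociation constants in the active and inactive conformations, and \Delta\varepsilon is the conformational free-energy difference; the sensing characteristics listed above follow from this input-output relation together with a model of the sensor's noise.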