    Spatial gene drives and pushed genetic waves

    Gene drives have the potential to rapidly replace a harmful wild-type allele with a gene drive allele engineered to have desired functionalities. However, an accidental or premature release of a gene drive construct to the natural environment could damage an ecosystem irreversibly. Thus, it is important to understand the spatiotemporal consequences of the super-Mendelian population genetics prior to potential applications. Here, we employ a reaction-diffusion model for sexually reproducing diploid organisms to study how a locally introduced gene drive allele spreads to replace the wild-type allele, even though it possesses a selective disadvantage $s>0$. Using methods developed by N. Barton and collaborators, we show that socially responsible gene drives require $0.5<s<0.697$, a rather narrow range. In this "pushed wave" regime, the spatial spreading of gene drives will be initiated only when the initial frequency distribution is above a threshold profile called the "critical propagule", which acts as a safeguard against accidental release. We also study how the spatial spread of the pushed wave can be stopped by making gene drives uniquely vulnerable ("sensitizing drive") in a way that is harmless to the wild-type allele. Finally, we show that appropriately sensitized drives in two dimensions can be stopped even by imperfect barriers perforated by a series of gaps.
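
    As a rough illustration of the threshold behaviour described above, the sketch below (Python) integrates a one-dimensional reaction-diffusion equation with a generic bistable reaction term f(1-f)(f-a). This is an assumed stand-in for the paper's drive dynamics, not its actual fitness model; the diffusion constant, reaction rate, threshold a, and release widths are all illustrative values.

        import numpy as np

        # Illustrative parameters (assumptions, not taken from the paper).
        D = 1.0          # diffusion constant
        r = 1.0          # reaction rate
        a = 0.4          # unstable internal equilibrium: frequencies below a decay
        dx, dt = 0.5, 0.05
        x = np.arange(-100.0, 100.0, dx)

        def evolve(half_width, steps=4000):
            """Explicit Euler integration from a rectangular release of the drive allele."""
            f = np.where(np.abs(x) < half_width, 0.9, 0.0)
            for _ in range(steps):
                lap = (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2  # discrete Laplacian
                f = f + dt * (D * lap + r * f * (1.0 - f) * (f - a))      # bistable reaction
                f = np.clip(f, 0.0, 1.0)
            return f

        # For these illustrative parameters the narrow release diffuses below the
        # threshold and dies out, while the wide release exceeds the critical
        # propagule and launches a pushed wave that keeps advancing.
        print(f"narrow release, final mean drive frequency: {evolve(0.5).mean():.3f}")
        print(f"wide release,   final mean drive frequency: {evolve(20.0).mean():.3f}")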

    Covering properties in countable products, II

    In this paper, we discuss covering properties in countable products of Čech-scattered spaces and prove the following: (1) if $Y$ is a perfect subparacompact space and $\{X_n : n\in \omega\}$ is a countable collection of subparacompact Čech-scattered spaces, then the product $Y\times \prod_{n\in \omega} X_n$ is subparacompact, and (2) if $\{X_n : n\in \omega\}$ is a countable collection of metacompact Čech-scattered spaces, then the product $\prod_{n\in \omega} X_n$ is metacompact.

    Submetacompactness and Weak Submetacompactness in Countable Products, II

    CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics

    Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations in behaving animals. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). Performing this training in real-time could open doors for research techniques and medical applications to model and control interventions at single-cell resolution and drive desired forms of animal behavior. However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation. Comment: Accepted at NeurIPS 202
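
    As a rough illustration of why dRNN fitting can become a convex problem, the sketch below (Python) fits the recurrent weights of a simple rate model to "recorded" activity by inverting the pointwise nonlinearity and solving a single ridge regression. This is an assumed toy version of the general idea, not the authors' CORNN formulation; the teacher network, leak rate, and regularization strength are made-up values.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T, alpha, lam = 100, 2000, 0.1, 1e-3   # units, time steps, leak rate, ridge penalty

        # Teacher network that generates the "recorded" rates.
        W_true = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))
        R = np.zeros((T, N))
        R[0] = rng.uniform(-0.5, 0.5, N)
        for t in range(T - 1):
            R[t + 1] = (1 - alpha) * R[t] + alpha * np.tanh(R[t] @ W_true.T)

        # The update r_{t+1} = (1-alpha) r_t + alpha tanh(W r_t) is linear in W once the
        # tanh is inverted, so the weights follow from one regularized linear solve.
        X = R[:-1]                                                        # presynaptic rates
        Y = np.arctanh(np.clip((R[1:] - (1 - alpha) * X) / alpha, -0.999, 0.999))
        W_fit = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y).T

        err = np.linalg.norm(W_fit - W_true) / np.linalg.norm(W_true)
        print(f"relative weight-recovery error: {err:.3f}")

    Because the whole fit reduces to one linear solve rather than backpropagation through time, it finishes in well under a second at this toy scale, which is the flavour of speed-up the abstract reports for the full method.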