A Single-Molecule Hershey-Chase Experiment
Ever since Hershey and Chase used phages to establish DNA as the carrier of genetic information in 1952, the precise mechanisms of phage DNA translocation have been a mystery. While bulk measurements have set a time scale for in vivo DNA translocation during bacteriophage infection, measurements of DNA ejection by single bacteriophages have only been made in vitro. Here, we present direct visualization of single bacteriophages infecting individual Escherichia coli cells. For bacteriophage lambda, we establish a mean ejection time of roughly 5 minutes with significant cell-to-cell variability, including pausing events. In contrast, corresponding in vitro single-molecule ejections take only 10 seconds to reach completion and do not exhibit significant variability. Our data reveal that the velocity of ejection for two different genome lengths collapses onto a single curve. This suggests that in vivo ejections are controlled by the amount of DNA ejected, in contrast with in vitro DNA ejections, which are governed by the amount of DNA left inside the capsid. This analysis provides evidence against a mechanism based purely on intrastrand repulsion and suggests that cell-internal processes dominate. These results provide a picture of the early stages of phage infection and shed light on the general problem of polymer translocation.
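The data-collapse argument above can be sketched numerically. The velocity law and genome lengths below are illustrative assumptions, not the paper's measured values; the point is only that if ejection velocity depends solely on the amount of DNA already ejected, then curves for different genome lengths coincide when plotted against DNA ejected, but not when plotted against DNA remaining in the capsid.

```python
import numpy as np

def velocity(ejected_kb):
    """Toy monotonically decreasing velocity law that depends only on
    the amount of DNA already ejected (an illustrative assumption,
    not the paper's fitted form)."""
    return 100.0 / (1.0 + 0.1 * ejected_kb)

for length in (38.0, 49.0):  # two hypothetical genome lengths, in kb
    ejected = np.linspace(0.0, length, 50)
    v = velocity(ejected)
    remaining = length - ejected
    # Plotted against `ejected`, both genomes trace the same curve;
    # plotted against `remaining`, the curves are shifted and do not collapse.
    print(f"L = {length} kb: v(start) = {v[0]:.1f}, v(end) = {v[-1]:.2f}")
```

Because `velocity` ignores genome length entirely, the two loops sample the same underlying curve; an in-vitro-style law written as a function of `remaining` would not share this property.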
Voices in methods development
To mark the 15th anniversary of Nature Methods, we asked scientists from across diverse fields of basic biology research for their views on the most exciting and essential methodological challenges that their communities are poised to tackle in the near future.
DeepCell 2.0: Automated cloud deployment of deep learning models for large-scale cellular image analysis
Deep learning is transforming the ability of life scientists to extract information from images. While these techniques have superior accuracy in comparison to conventional approaches and enable previously impossible analyses, their unique hardware and software requirements have prevented widespread adoption by life scientists. To address this, we have developed DeepCell 2.0, an open-source library for training and delivering deep learning models with cloud computing. This library enables users to configure and manage a cloud deployment of DeepCell 2.0 on all commonly used operating systems. Using single-cell segmentation as a use case, we show that users with suitable training data can train models and analyze data with those models through a web interface. We demonstrate that by matching analysis tasks with their hardware requirements, we can efficiently use computational resources in the cloud and scale those resources to meet demand, significantly reducing the time necessary for large-scale image analysis. By reducing the barriers to entry, this work will empower life scientists to apply deep learning methods to their data. A persistent deployment is available at http://www.deepcell.org
Combining Multiplexed Ion Beam Imaging (MIBI) with Convolutional Neural Networks to accurately segment cells in human tissue
Background: Multiplexed imaging is a rapidly growing field that promises to substantially increase the number of proteins that can be imaged simultaneously. We have developed Multiplexed Ion Beam Imaging by Time of Flight (MIBI-TOF), which uses elemental reporters conjugated to primary antibodies that are then quantified with a time-of-flight mass spectrometer. This technique allows more than 40 distinct proteins to be visualized at once in the same clinical samples, and has already yielded significant insights into the interactions and relationships between the many different immune cell populations present in the tumor microenvironment. However, one of the remaining challenges in analyzing such data is accurately determining target protein expression values for each cell in the image. This requires the precise delineation of boundaries between cells that are often tightly packed next to one another. Current methods to address this challenge largely rely on DNA intensity to make these splits, and are thus mostly limited to nuclear segmentation.
Methods: We have developed a novel convolutional neural network to perform whole-cell segmentation from multiplexed imaging data. Rather than relying only on DNA signal, we use a panel of morphological markers. Our method integrates the information from these distinct proteins, allowing it to segment large cancer cells, small lymphocytes, and normal epithelium at the same time without requiring fine-tuning or manual adjustment.
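A common way to feed a multi-marker panel like the one described here to a segmentation CNN is to stack the per-marker images into a single multi-channel array with per-channel normalization. The sketch below is a minimal, hypothetical preprocessing step under that assumption; the marker names and the use of Poisson noise as stand-in image data are illustrative only and are not taken from the paper.

```python
import numpy as np

def stack_channels(channel_images):
    """Stack per-marker 2D images into one (H, W, C) array and
    normalize each channel to zero mean and unit variance, a common
    way to prepare multi-marker data for a segmentation CNN."""
    stacked = np.stack(
        [np.asarray(img, dtype=np.float32) for img in channel_images.values()],
        axis=-1,
    )
    mean = stacked.mean(axis=(0, 1), keepdims=True)
    std = stacked.std(axis=(0, 1), keepdims=True)
    return (stacked - mean) / (std + 1e-8)

# Hypothetical marker panel: a DNA channel plus two morphological markers.
rng = np.random.default_rng(0)
channels = {
    "DNA": rng.poisson(5.0, size=(64, 64)),
    "membrane": rng.poisson(2.0, size=(64, 64)),
    "cytoplasm": rng.poisson(3.0, size=(64, 64)),
}
x = stack_channels(channels)
print(x.shape)  # (64, 64, 3)
```

Normalizing each channel independently keeps a bright marker (for example DNA) from dominating dimmer morphological channels when the stacked array is passed to the network.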
Results: By combining our novel imaging platform with new computational tools, we are able to achieve extremely accurate segmentation of whole cells in tissue. Our approach compares favorably with many currently used segmentation tools. We show that our improvements in accuracy come both from our novel imaging approach and from algorithmic advances. We perform significantly better than traditional machine learning algorithms trained on the same dataset. Additionally, we show that our algorithm can be trained to identify cells across a range of cancer histologies and disease grades.
Conclusions: We have developed a robust and accurate approach to whole-cell segmentation in human tissues, and we show its superiority over current state-of-the-art algorithms. The accurate segmentation generated by our approach will enable the analysis of complex tissue architectures with highly overlapping cell types, and will help advance our understanding of the interactions between cell types in the diseased state.
DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes
Deep learning is transforming the analysis of biological images, but applying these models to large datasets remains challenging. Here we describe the DeepCell Kiosk, cloud-native software that dynamically scales deep learning workflows to accommodate large imaging datasets. To demonstrate the scalability and affordability of this software, we identified cell nuclei in 10⁶ 1-megapixel images in ~5.5 h for ~US$250, with a cost below US$100 achievable depending on cluster configuration. The DeepCell Kiosk can be downloaded at https://github.com/vanvalenlab/kiosk-console; a persistent deployment is available at https://deepcell.org/
Joint phenotypes, evolutionary conflict and the fundamental theorem of natural selection
Multiple organisms can sometimes affect a common phenotype. For example, the portion of a leaf eaten by an insect is a joint phenotype of the plant and insect and the amount of food obtained by an offspring can be a joint trait with its mother. Here, I describe the evolution of joint phenotypes in quantitative genetic terms. A joint phenotype for multiple species evolves as the sum of additive genetic variances in each species, weighted by the selection on each species. Selective conflict between the interactants occurs when selection takes opposite signs on the joint phenotype. The mean fitness of a population changes not just through its own genetic variance but also through the genetic variance for its fitness that resides in other species, an update of Fisher's fundamental theorem of natural selection. Some similar results, using inclusive fitness, apply to within-species interactions. The models provide a framework for understanding evolutionary conflicts at all levels.
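The central result stated above can be sketched in standard quantitative-genetic notation (the symbols here are shorthand for the abstract's verbal statement, not necessarily the paper's own notation): for a joint phenotype $z$ affected by $n$ interacting species,

```latex
\Delta \bar{z} \;=\; \sum_{i=1}^{n} \beta_i \, V_{A,i},
```

where $V_{A,i}$ is the additive genetic variance for $z$ residing in species $i$ and $\beta_i$ is the selection gradient on $z$ acting through the fitness of species $i$. Selective conflict in the abstract's sense corresponds to gradients $\beta_i$ of opposite sign, so different species' genetic variances push $\bar{z}$ in opposite directions.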