
    Motion denoising with application to time-lapse photography

    Motions can occur over both short and long time scales. We introduce motion denoising, which treats short-term changes as noise, long-term changes as signal, and re-renders a video to reveal the underlying long-term events. We demonstrate motion denoising for time-lapse videos. One of the characteristics of traditional time-lapse imagery is stylized jerkiness, where short-term changes in the scene appear as small and annoying jitters in the video, often obfuscating the underlying temporal events of interest. We apply motion denoising for resynthesizing time-lapse videos showing the long-term evolution of a scene with jerky short-term changes removed. We show that existing filtering approaches are often incapable of achieving this task, and present a novel computational approach to denoise motion without explicit motion analysis. We demonstrate promising experimental results on a set of challenging time-lapse sequences.
    Funding: United States. National Geospatial-Intelligence Agency (NEGI-1582-04-0004); Shell Research; United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-06-1-0734); National Science Foundation (U.S.) (0964004)
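    As a point of reference for the "existing filtering approaches" the abstract contrasts against, the sketch below applies a per-pixel temporal median over a sliding window. It is an illustrative baseline under simplified assumptions (a static camera, a fixed window radius), not the paper's optimization-based motion denoising.

```python
import numpy as np

def temporal_median_filter(video, radius=7):
    """Per-pixel temporal median over a sliding window.

    video: float array of shape (T, H, W) or (T, H, W, C).
    Returns an array of the same shape in which changes shorter than
    roughly `radius` frames are suppressed at each pixel.
    """
    T = video.shape[0]
    out = np.empty_like(video)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = np.median(video[lo:hi], axis=0)
    return out

# Example: a 100-frame synthetic clip with a slow brightening trend plus frame-level jitter.
rng = np.random.default_rng(0)
clip = np.linspace(0.0, 1.0, 100)[:, None, None] * np.ones((100, 32, 32))
clip += 0.2 * rng.standard_normal(clip.shape)       # short-term "jitter"
smoothed = temporal_median_filter(clip, radius=7)   # the long-term trend survives
```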

    Dynamic Analysis of Vascular Morphogenesis Using Transgenic Quail Embryos

    Background: One of the least understood and most central questions confronting biologists is how initially simple clusters or sheet-like cell collectives can assemble into highly complex three-dimensional functional tissues and organs. Due to the limits of oxygen diffusion, blood vessels are an essential and ubiquitous presence in all amniote tissues and organs. Vasculogenesis, the de novo self-assembly of endothelial cell (EC) precursors into endothelial tubes, is the first step in blood vessel formation [1]. Static imaging and in vitro models are wholly inadequate to capture many aspects of vascular pattern formation in vivo, because vasculogenesis involves dynamic changes of the endothelial cells and of the forming blood vessels, in an embryo that is changing size and shape. Methodology/Principal Findings: We have generated Tie1 transgenic quail lines Tg(tie1:H2B-eYFP) that express H2B-eYFP in all of their endothelial cells, permitting investigations into early embryonic vascular morphogenesis with unprecedented clarity and insight. By combining the power of molecular genetics with the elegance of dynamic imaging, we follow the precise patterning of endothelial cells in space and time. We show that during vasculogenesis within the vascular plexus, ECs move independently to form the rudiments of blood vessels, all while collectively moving with gastrulating tissues that flow toward the embryo midline. The aortae are a composite of somatic-derived ECs forming their dorsal regions and splanchnic-derived ECs forming their ventral regions. The ECs in the dorsal regions of the forming aortae exhibit variable mediolateral motions as they move rostrally; those in more ventral regions show significant lateral-to-medial movement as they course rostrally. Conclusions/Significance: The present results offer a powerful approach to the major challenge of studying the relative role(s) of the mechanical, molecular, and cellular mechanisms of vascular development. In past studies, the advantages of the molecular genetic tools available in mouse were counterbalanced by the limited experimental accessibility for imaging and perturbation studies. Avian embryos provide the needed accessibility, but few genetic resources. The creation of transgenic quail with labeled endothelia builds upon the important roles that avian embryos have played in previous studies of vascular development.
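    The dynamic-imaging analysis described here rests on following labeled endothelial nuclei from frame to frame. The sketch below illustrates one common linking step, matching nuclear centroids between consecutive frames by minimum total displacement; the function names, the distance gate, and the use of Hungarian assignment are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_nuclei(prev_pts, next_pts, max_dist=15.0):
    """Link nuclear centroids between consecutive frames by minimum total displacement.

    prev_pts, next_pts: (N, 2) and (M, 2) arrays of (x, y) centroids.
    Returns a list of (i, j) index pairs whose displacement is below max_dist.
    """
    if len(prev_pts) == 0 or len(next_pts) == 0:
        return []
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)          # optimal (Hungarian) assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

# Toy example: three nuclei drifting slightly between consecutive frames.
frame_a = np.array([[10.0, 5.0], [40.0, 7.0], [70.0, 6.0]])
frame_b = np.array([[12.0, 5.5], [41.5, 7.5], [71.0, 6.2]])
print(link_nuclei(frame_a, frame_b))   # [(0, 0), (1, 1), (2, 2)]
```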

    Computational Video Enhancement

    During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, I explore the viability of computational video processing and present applications where this processing method can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally-adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing, along with a prototype video editing and manipulation tool utilizing that framework. To demonstrate the integration of samples from multiple frames, I introduce methods for enhancing poorly exposed low-light videos. This integration is guided by a tone-mapping process to determine spatially-varying optimal exposures and an adaptive spatio-temporal filter to integrate the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples. This is facilitated by the use of a novel multispectral edge-preserving filter to enhance only the visible spectrum video. Finally, the temporal characteristics of videos are altered by a computational video resampling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. Each resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video's aliasing characteristics.
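    A minimal sketch of the sheared spatio-temporal volume idea, under the simplifying assumptions of integer, global per-frame shifts and wrap-around boundaries; the dissertation's framework supports more general spatial warps and filters, and the names here are illustrative.

```python
import numpy as np

def shear_volume(video, shifts):
    """Shear a spatio-temporal volume: translate each frame by an integer (dy, dx)
    so that a tracked scene element stays at the same pixel location over time.

    video: (T, H, W) array; shifts: list of T (dy, dx) integer offsets.
    """
    sheared = np.zeros_like(video)
    for t, (dy, dx) in enumerate(shifts):
        sheared[t] = np.roll(video[t], shift=(dy, dx), axis=(0, 1))
    return sheared

# After shearing, temporally-adjacent samples of the same scene point line up,
# so a plain temporal filter (here, a mean) can combine many observations per pixel.
rng = np.random.default_rng(1)
video = rng.random((8, 64, 64)).astype(np.float32)
shifts = [(0, -t) for t in range(8)]        # undo a 1 px/frame horizontal pan
aligned = shear_volume(video, shifts)
denoised_frame = aligned.mean(axis=0)
```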

    Learning Temporal Transformations From Time-Lapse Videos

    Based on life-long observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models.
    Comment: ECCV 2016
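    As a sketch of the first prediction task (a future state from a single depiction), the snippet below trains a tiny encoder-decoder to regress a later time-lapse frame. The architecture, loss, and tensor sizes are illustrative assumptions; the paper's generative models, including the two-frame and recurrent variants, differ.

```python
import torch
import torch.nn as nn

class FutureFramePredictor(nn.Module):
    """Toy encoder-decoder: one RGB depiction in, one predicted future depiction out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid() # 32 -> 64
        )

    def forward(self, frame_now):
        return self.decoder(self.encoder(frame_now))

# One training step: regress the frame N steps later in the time-lapse (dummy data here).
model = FutureFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frame_now = torch.rand(4, 3, 64, 64)      # batch of current depictions
frame_future = torch.rand(4, 3, 64, 64)   # corresponding later depictions
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(frame_now), frame_future)
loss.backward()
optimizer.step()
```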

    Reconstructing the Forest of Lineage Trees of Diverse Bacterial Communities Using Bio-inspired Image Analysis

    Cell segmentation and tracking allow us to extract a plethora of cell attributes from bacterial time-lapse cell movies, thus promoting computational modeling and simulation of biological processes down to the single-cell level. However, to successfully analyze complex cell movies, which image multiple interacting bacterial clones as they grow and merge into overcrowded bacterial communities with thousands of cells in the field of view, segmentation results must be near perfect to warrant good tracking results. We introduce here a fully automated closed-loop bio-inspired computational strategy that exploits prior knowledge about the expected structure of a colony's lineage tree to locate and correct segmentation errors in analyzed movie frames. We show that this correction strategy is effective, resulting in improved cell tracking and consequently trustworthy deep colony lineage trees. Our image analysis approach has the unique capability to keep tracking cells even after clonal subpopulations merge in the movie. This enables the reconstruction of the complete Forest of Lineage Trees (FLT) representation of evolving multi-clonal bacterial communities. Moreover, the percentage of valid cell trajectories extracted from the image analysis almost doubles after segmentation correction. This plethora of trustworthy data extracted from complex cell movie analysis enables single-cell analytics as a tool for addressing compelling questions for human health, such as understanding the role of single-cell stochasticity in antibiotics resistance without losing sight of the inter-cellular interactions and microenvironment effects that may shape it.
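    A minimal sketch of the kind of lineage-consistency check such a closed-loop correction strategy can exploit: in a binary-fission lineage tree a tracked cell may continue as one object or divide into two, and every object has exactly one parent. The rule, the data layout, and the names below are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict

def flag_tracking_violations(links):
    """Flag links that violate the expected structure of a bacterial lineage tree.

    links: iterable of (parent_id, child_id) pairs produced by frame-to-frame tracking,
    where ids are unique per (cell, frame). A valid binary-fission tree lets a parent
    map to one child (same cell, next frame) or two children (division), and gives
    every child exactly one parent. Anything else points to a likely segmentation error.
    """
    children = defaultdict(set)
    parents = defaultdict(set)
    for parent, child in links:
        children[parent].add(child)
        parents[child].add(parent)
    bad_splits = {p: kids for p, kids in children.items() if len(kids) > 2}
    bad_merges = {c: ps for c, ps in parents.items() if len(ps) > 1}
    return bad_splits, bad_merges

# Toy example: cell "a1" apparently splits into three objects, so it is flagged
# for re-segmentation, while "a2" -> "b4" is a normal continuation.
links = [("a1", "b1"), ("a1", "b2"), ("a1", "b3"), ("a2", "b4")]
bad_splits, bad_merges = flag_tracking_violations(links)
```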

    Computational illumination for high-speed in vitro Fourier ptychographic microscopy

    We demonstrate a new computational illumination technique that achieves a large space-bandwidth-time product for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both wide FOV and high resolution, i.e., a large space-bandwidth product (SBP). FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (on the order of minutes), limiting throughput. Faster capture times would not only improve imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g. pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4x FOV with sub-second capture times. We propose an improved algorithm and a new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.
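    For context, the sketch below simulates the standard single-LED FPM forward model: oblique illumination shifts the object spectrum, the pupil low-pass filters it, and the camera records intensity. All names and sizes are illustrative assumptions; the paper's multiplexed source coding and reconstruction algorithm are not shown.

```python
import numpy as np

def fpm_low_res_intensity(obj, pupil, kx, ky):
    """Simulate one FPM measurement: a tilted plane wave shifts the object spectrum,
    the objective pupil low-pass filters it, and the camera records intensity.

    obj: complex high-resolution object (N, N); pupil: mask (N, N) centered at DC
    (after fftshift); kx, ky: integer spectrum shift in pixels set by the LED angle.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    shifted = np.roll(spectrum, shift=(ky, kx), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2

# Toy example: a phase-only object imaged under on-axis and oblique illumination.
N = 128
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (xx**2 + yy**2 <= (N // 8) ** 2).astype(complex)        # small-NA objective
obj = np.exp(1j * 0.5 * np.exp(-(xx**2 + yy**2) / (2 * 15**2))) # smooth phase bump
I_onaxis = fpm_low_res_intensity(obj, pupil, 0, 0)
I_oblique = fpm_low_res_intensity(obj, pupil, 20, 0)
```

    For mutually incoherent LEDs, a source-coded (multi-LED) measurement is well approximated by the sum of such single-LED intensities, which is what allows several spectrum regions to be captured in one exposure and the acquisition to be shortened.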

    Collective motion of cells: from experiments to models

    Swarming, or collective motion of living entities, is one of the most common and spectacular manifestations of living systems and has been extensively studied in recent years. A number of general principles have been established. The interactions at the level of cells are quite different from those among individual animals; therefore, the study of collective motion of cells is likely to reveal specific important features, which we overview in this paper. In addition to presenting the most appealing results from the quickly growing related literature, we also deliver a critical discussion of the emerging picture and summarize our present understanding of collective motion at the cellular level. Collective motion of cells plays an essential role in a number of experimental and real-life situations. In most cases the coordinated motion is a helpful aspect of the given phenomenon and makes a related process more efficient (e.g., embryogenesis or wound healing), while in the case of tumor cell invasion it appears to speed up the progression of the disease. In these mechanisms cells have to both be motile and adhere to one another, adhesion being the feature most specific to this sort of collective behavior. One of the central aims of this review is to present the related experimental observations and to treat them in the light of a few basic computational models, so as to interpret the phenomena at a quantitative level as well.
    Comment: 24 pages, 25 figures, 13 reference video links
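    One of the basic computational models commonly used in this context is the Vicsek-type alignment model; the sketch below is a minimal version with constant speed, neighbour alignment, and angular noise. The parameter values are arbitrary, and cell-specific ingredients such as adhesion are deliberately omitted.

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, v0=0.05, r=1.0, eta=0.3, rng=None):
    """One update of a minimal Vicsek-style flocking model in a periodic box.

    Each particle moves at constant speed v0 and aligns its heading theta with the
    average heading of neighbours within radius r, plus angular noise of amplitude eta.
    """
    rng = rng or np.random.default_rng()
    # Pairwise displacement with periodic boundary conditions.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(-1) <= r ** 2
    # Mean neighbour heading via the average of unit vectors.
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * (rng.random(len(theta)) - 0.5)
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    return pos, theta

rng = np.random.default_rng(2)
pos = rng.random((200, 2)) * 10.0
theta = rng.random(200) * 2 * np.pi
for _ in range(100):
    pos, theta = vicsek_step(pos, theta, rng=rng)
order = np.abs(np.exp(1j * theta).mean())   # approaches 1 as motion becomes collective
```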

    High-speed in vitro intensity diffraction tomography

    We demonstrate a label-free, scan-free intensity diffraction tomography technique utilizing annular illumination (aIDT) to rapidly characterize large-volume three-dimensional (3-D) refractive index distributions in vitro. By optimally matching the illumination geometry to the microscope pupil, our technique reduces the data requirement by 60 times to achieve high-speed 10-Hz volume rates. Using eight intensity images, we recover volumes of ∼350 μm × 100 μm × 20 μm, with near diffraction-limited lateral resolution of ∼487 nm and axial resolution of ∼3.4 μm. The attained large volume rate and high resolution enable 3-D quantitative phase imaging of complex living biological samples across multiple length scales. We demonstrate aIDT's capabilities on unicellular diatom microalgae, epithelial buccal cell clusters with native bacteria, and live Caenorhabditis elegans specimens. Within these samples, we recover macroscale cellular structures, subcellular organelles, and dynamic micro-organism tissues with minimal motion artifacts. Quantifying such features has significant utility in oncology, immunology, and cellular pathophysiology, where these morphological features are evaluated for changes in the presence of disease, parasites, and new drug treatments. Finally, we simulate the aIDT system to highlight the accuracy and sensitivity of the proposed technique. aIDT shows promise as a powerful high-speed, label-free computational microscopy approach for applications where natural imaging is required to evaluate environmental effects on a sample in real time.
    https://arxiv.org/abs/1904.06004
    Accepted manuscript
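    Intensity diffraction tomography reconstructions are commonly posed as a linear inverse problem that relates the measured intensity spectra to the sample slices through precomputed transfer functions. The sketch below performs a Tikhonov-regularized per-Fourier-pixel inversion under strong simplifications (the transfer functions are taken as given and the absorption term is ignored); it is not the aIDT algorithm, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def tikhonov_idt_recon(intensity_spectra, H, alpha=1e-2):
    """Sketch of a regularized linear transfer-function inversion for IDT-style data.

    intensity_spectra: (K, N, N) FFTs of background-subtracted intensity images, one
    per illumination angle. H: (K, Z, N, N) precomputed (here: assumed given) transfer
    functions linking each of Z sample slices to each measurement. Solves a Tikhonov
    least-squares problem independently at every Fourier pixel and returns (Z, N, N).
    """
    K, Z, N, _ = H.shape
    A = np.transpose(H, (2, 3, 0, 1))                            # (N, N, K, Z) systems
    y = np.transpose(intensity_spectra, (1, 2, 0))[..., None]    # (N, N, K, 1) data
    Ah = np.conj(np.swapaxes(A, -1, -2))                         # (N, N, Z, K)
    x = np.linalg.solve(Ah @ A + alpha * np.eye(Z), Ah @ y)[..., 0]
    return np.fft.ifft2(np.moveaxis(x, -1, 0), axes=(-2, -1)).real

# Tiny synthetic example with random transfer functions, just to exercise the shapes.
rng = np.random.default_rng(3)
K, Z, N = 8, 4, 32
H = rng.standard_normal((K, Z, N, N)) + 1j * rng.standard_normal((K, Z, N, N))
measurements = rng.standard_normal((K, N, N)) + 0j
slices = tikhonov_idt_recon(np.fft.fft2(measurements), H)        # (Z, N, N) real slices
```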