
    Low-Cost Motility Tracking System (LOCOMOTIS) for time-lapse microscopy applications and cell visualisation

    This article has been made available through the Brunel Open Access Publishing Fund. Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high-quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system that employs inexpensive, commercially available digital USB microscopes to image various cell types by time-lapse and to perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive equipment. The microscopes used in our system were capable of a maximum magnification of 413.6x. Although resolution was lower than that of a standard inverted microscope, we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8x). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81±0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17±0.004 (MDA-MB-231 breast cancer cells), 1.24±0.006 (SC5 mouse Sertoli cells) and 2.21±0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates) were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher-throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers. © 2014 Lynch et al.
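    The tracking assays described above reduce to measuring path length over elapsed time. A minimal sketch of computing a cell's mean velocity from tracked positions follows; the function name and the synthetic track are illustrative, not taken from the LOCOMOTIS software:

```python
import math

def mean_velocity(track, interval_min):
    """Mean speed of one cell track.

    track: list of (x, y) positions in micrometres, one per frame.
    interval_min: minutes between consecutive frames.
    """
    if len(track) < 2:
        return 0.0
    total = 0.0
    # Sum the straight-line distance between consecutive frames.
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    # Divide path length by total elapsed time.
    return total / ((len(track) - 1) * interval_min)

# Hypothetical track: a cell drifting 1 um along x per 1-min frame.
track = [(float(i), 0.0) for i in range(11)]
print(mean_velocity(track, 1.0))  # 1.0 um/min
```

    Open-source tools such as ImageJ export tracks in essentially this form (frame-by-frame coordinates), so a small script like this is often all the downstream analysis requires.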

    Advanced optical imaging in living embryos

    Developmental biology investigations have evolved from static studies of embryo anatomy into dynamic studies of the genetic and cellular mechanisms responsible for shaping that anatomy. With the advancement of fluorescent protein fusions, the ability to visualize and comprehend how thousands to millions of cells interact with one another to form tissues and organs in three dimensions (xyz) over time (t) is just beginning to be realized and exploited. In this review, we explore recent advances utilizing confocal and multi-photon time-lapse microscopy to capture gene expression, cell behavior, and embryo development. From choosing the appropriate fluorophore and labeling strategy to experimental set-up and data-pipeline handling, this review covers the various aspects of acquiring and analyzing multi-dimensional data sets. These innovative techniques in multi-dimensional imaging and analysis can be applied across a range of fields, from protein dynamics to cell biology to morphogenesis.
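    As a small illustration of the multi-dimensional (xyzt) data handling the review discusses, here is a minimal NumPy sketch that reduces a hypothetical four-dimensional stack by per-timepoint maximum-intensity projection; the (t, z, y, x) layout, shapes, and analysis steps are assumptions for the example, not prescriptions from the review:

```python
import numpy as np

# Hypothetical 4D stack laid out as (t, z, y, x).
rng = np.random.default_rng(0)
stack = rng.random((5, 8, 64, 64))  # 5 timepoints, 8 z-slices, 64x64 px

# Maximum-intensity projection along z for every timepoint: a common
# first reduction when preparing confocal xyzt data for 2D display
# or tracking.
mip = stack.max(axis=1)             # shape (5, 64, 64)

# Per-timepoint mean intensity, e.g. to monitor photobleaching.
bleach_curve = mip.mean(axis=(1, 2))
print(mip.shape, bleach_curve.shape)  # (5, 64, 64) (5,)
```

    Real acquisitions add channels as a fifth axis, but the same axis-wise reductions apply.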

    Synthetic Aperture Radar (SAR) data processing

    The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and the technology necessary to implement the SAR data processors used in both general-purpose and dedicated imaging systems was addressed.
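    One of the core mathematical operations in SAR image formation is pulse (range) compression: matched-filtering the received echo against the transmitted linear-FM chirp. A minimal sketch with illustrative parameters, not drawn from the report:

```python
import numpy as np

# Illustrative radar parameters (not from the report).
fs = 100e6          # sample rate, Hz
T = 10e-6           # pulse length, s
B = 30e6            # chirp bandwidth, Hz
t = np.arange(int(round(T * fs))) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM reference

# Simulated echo: the chirp delayed by 200 samples in a noisy record.
echo = np.zeros(4096, dtype=complex)
echo[200:200 + chirp.size] = chirp
echo += 0.05 * np.random.default_rng(1).standard_normal(4096)

# Matched filter via FFT: multiply the echo spectrum by the conjugate
# reference spectrum, then transform back (circular correlation).
ref = np.zeros(echo.size, dtype=complex)
ref[:chirp.size] = chirp
compressed = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref)))
print(int(np.abs(compressed).argmax()))  # peak at the 200-sample delay
```

    The same correlate-against-a-reference structure, applied along the azimuth direction with a Doppler-history reference, is what turns range-compressed data into a focused SAR image.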

    Four Decades of Computing in Subnuclear Physics - from Bubble Chamber to LHC

    This manuscript addresses selected aspects of computing for the reconstruction and simulation of particle interactions in subnuclear physics. Based on personal experience with experiments at DESY and at CERN, I cover the evolution of computing hardware and software from the era of track chambers, where interactions were recorded on photographic film, up to the LHC experiments with their millions of electronic channels.

    A Method for Neuronal Source Identification

    Multi-sensor microelectrodes for extracellular action potential recording have significantly improved the quality of in vivo recorded neuronal signals. These microelectrodes have also been instrumental in the localization of neuronal signal sources. However, existing neuron localization methods have mostly been applied in vivo, where the true neuron location remains unknown, so they could not be experimentally validated. This article presents experimental validation of a method capable of estimating both the location and intensity of an electrical signal source. A four-sensor microelectrode (tetrode) immersed in a saline solution was used to record stimulus patterns at multiple intensity levels generated by a stimulating electrode. The location of the tetrode was varied with respect to the stimulator. The location and intensity of the stimulator were estimated using the Multiple Signal Classification (MUSIC) algorithm, and the results were quantified by comparison to the true values. The localization results, with an accuracy of ~10 microns and a precision of ~11 microns, imply that MUSIC can resolve individual neuronal sources. Similarly, source intensity estimations indicate that this approach can track changes in signal amplitude over time. Together, these results suggest that MUSIC can be used to characterize neuronal signal sources in vivo.
    Comment: 14 pages, 5 figures
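    A minimal sketch of the MUSIC idea in this setting: eigendecompose the sensor covariance, take the noise subspace, and scan candidate locations for the point whose steering vector is most orthogonal to it. The sensor geometry, the 1/r monopole gain model, and the noise levels below are illustrative assumptions, not the paper's exact forward model:

```python
import numpy as np

# Four sensors at illustrative tetrode-scale positions (micrometres).
sensors = np.array([[0, 0], [25, 0], [0, 25], [25, 25]], float)

def gain(src):
    # Assumed monopole model: amplitude falls off as 1/distance.
    r = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / r

true_src = np.array([10.0, 15.0])
rng = np.random.default_rng(2)
s = rng.standard_normal(500)                 # source waveform
X = np.outer(gain(true_src), s)              # 4 x 500 snapshot matrix
X += 0.01 * rng.standard_normal(X.shape)     # additive sensor noise

R = X @ X.T / X.shape[1]                     # sample covariance
w, V = np.linalg.eigh(R)                     # ascending eigenvalues
En = V[:, :-1]                               # noise subspace (1 source)

# MUSIC pseudospectrum peaks where the steering vector is (nearly)
# orthogonal to the noise subspace, i.e. at the source location.
grid = np.linspace(1, 24, 47)                # 0.5 um steps
best, best_p = None, -1.0
for x in grid:
    for y in grid:
        a = gain(np.array([x, y]))
        a /= np.linalg.norm(a)
        p = 1.0 / (np.linalg.norm(En.T @ a) ** 2 + 1e-12)
        if p > best_p:
            best, best_p = (x, y), p
print(best)  # close to the true source at (10, 15)
```

    With a good forward model and reasonable SNR, the grid maximum lands on the true source, which is the behaviour the paper validates experimentally.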

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant gains in response time and cost savings under dynamic workload scenarios.
    Comment: 20 pages, 4 figures, 3 tables, conference paper
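    The load-coordination problem can be illustrated with a toy placement policy: a federation broker choosing the data center that minimizes a weighted cost of current load and network latency to the user. This is a sketch of the general idea only; the class fields, weights, and names are assumptions for illustration, not InterCloud's actual brokering mechanism:

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    load: float          # fraction of capacity in use, 0..1
    latency_ms: float    # measured RTT to the requesting user

def place(candidates, w_load=0.7, w_latency=0.3):
    # Lower cost is better; latency is scaled to a comparable range.
    def cost(dc):
        return w_load * dc.load + w_latency * (dc.latency_ms / 100.0)
    return min(candidates, key=cost)

centers = [
    DataCenter("us-east", load=0.90, latency_ms=20),
    DataCenter("eu-west", load=0.40, latency_ms=80),
    DataCenter("ap-south", load=0.30, latency_ms=180),
]
print(place(centers).name)  # eu-west: moderate load, moderate latency
```

    A real broker would re-evaluate such a cost continuously as load and network conditions change, which is what drives the dynamic expansion and contraction the paper describes.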