
    Error Resilient Video Coding Using Bitstream Syntax And Iterative Microscopy Image Segmentation

    There has been a dramatic increase in the amount of video traffic over the Internet in the past several years. For applications such as real-time video streaming and video conferencing, retransmission of lost packets is often not permitted. Popular video coding standards such as H.26x and VPx exploit spatial-temporal correlations for compression, which typically makes the compressed bitstreams vulnerable to errors. We propose several adaptive spatial-temporal error concealment approaches for subsampling-based multiple description video coding. These adaptive methods are based on motion and mode information extracted from the H.26x video bitstreams. We also present an error resilience method using data duplication in VPx video bitstreams. A recent challenge in image processing is the analysis of biomedical images acquired using optical microscopy. Due to the size and complexity of the images, automated segmentation methods are required to obtain quantitative, objective and reproducible measurements of biological entities. In this thesis, we present two techniques for microscopy image analysis. Our first method, “Jelly Filling”, is intended to provide 3D segmentation of biological images that contain incompleteness in dye labeling. Intuitively, this method is based on filling disjoint regions of an image with jelly-like fluids to iteratively refine segments that represent separable biological entities. Our second method selectively uses a shape-based function optimization approach and a 2D marked point process simulation to quantify nuclei by their locations and sizes. Experimental results show that our proposed methods are effective in addressing the aforementioned challenges.
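    A minimal sketch of the subsampling idea behind multiple description coding, assuming a checkerboard (quincunx) split and a simple neighbour-averaging concealment step; the thesis methods additionally adapt concealment using motion and mode information from the bitstream, which is omitted here.

```python
# Sketch of subsampling-based multiple description coding with simple
# spatial error concealment. The checkerboard split and the 4-neighbour
# averaging filter are illustrative assumptions, not the thesis method.
import numpy as np

def split_descriptions(frame):
    """Checkerboard (quincunx) split into two descriptions;
    missing positions are marked NaN."""
    even = (np.indices(frame.shape).sum(axis=0) % 2) == 0
    return np.where(even, frame, np.nan), np.where(~even, frame, np.nan)

def conceal(received):
    """Fill each missing pixel with the mean of its available 4-neighbours."""
    out = received.copy()
    h, w = out.shape
    for y, x in zip(*np.where(np.isnan(out))):
        nbrs = [received[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w
                and not np.isnan(received[ny, nx])]
        out[y, x] = np.mean(nbrs) if nbrs else 0.0
    return out

frame = np.arange(64, dtype=float).reshape(8, 8)
d0, _ = split_descriptions(frame)
recon = conceal(d0)  # description 1 lost in transit; conceal from d0 alone
print("mean abs error:", np.abs(recon - frame).mean())
```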

    Stochastic Particle Barcoding for Single-Cell Tracking and Multiparametric Analysis

    This study presents stochastic particle barcoding (SPB), a method for tracking cell identity across bioanalytical platforms. In this approach, single cells or small collections of cells are co-encapsulated within an enzymatically-degradable hydrogel block along with a random collection of fluorescent beads, whose number, color, and position encode the identity of the cell, enabling samples to be transferred in bulk between single-cell assay platforms without losing the identity of individual cells. The application of SPB is demonstrated for transferring cells from a subnanoliter protein secretion/phenotyping array platform into a microtiter plate, with re-identification accuracies in the plate assay of 96±2%. Encapsulated cells are recovered by digesting the hydrogel, allowing subsequent genotyping and phenotyping of cell lysates. Finally, a scaling model is developed to illustrate how different parameters affect the accuracy of SPB and to motivate scaling of the method to thousands of unique blocks.

    Funding: Ragon Institute of MGH, MIT and Harvard; National Cancer Institute (U.S.) (Koch Institute Support (Core) Grant P30-CA14051); National Institutes of Health (U.S.), Ruth L. Kirschstein National Research Service Award (1F32CA180586).
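    A hedged sketch of the barcode-matching principle: blocks labelled with random bead-colour counts are re-identified by nearest-neighbour matching of their colour histograms. The Poisson loading model, palette size, and perturbation below are illustrative assumptions, not parameters from the study.

```python
# Toy model of SPB re-identification: random bead-colour count vectors act
# as barcodes, and blocks are matched after transfer by minimum L1 distance.
import numpy as np

rng = np.random.default_rng(0)
N_BLOCKS, N_COLORS, MEAN_BEADS = 500, 6, 12   # assumed, not from the paper

# Encode: Poisson-loaded bead counts per colour form each block's barcode.
barcodes = rng.poisson(MEAN_BEADS / N_COLORS, size=(N_BLOCKS, N_COLORS))

# Simulate transfer: each colour count may gain or lose one bead.
observed = np.clip(barcodes + rng.integers(-1, 2, barcodes.shape), 0, None)

# Decode: match each observed barcode to the nearest stored one.
dists = np.abs(observed[:, None, :] - barcodes[None, :, :]).sum(axis=2)
matches = dists.argmin(axis=1)
print("re-identification accuracy:",
      (matches == np.arange(N_BLOCKS)).mean())
```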

    Advances in Quantum Teleportation

    Quantum teleportation is one of the most important protocols in quantum information. By exploiting the physical resource of entanglement, quantum teleportation serves as a key primitive in a variety of quantum information tasks and represents an important building block for quantum technologies, with a pivotal role in the continuing progress of quantum communication, quantum computing and quantum networks. Here we review the basic theoretical ideas behind quantum teleportation and its variant protocols. We focus on the main experiments, together with the technical advantages and disadvantages associated with the use of the various technologies, from photonic qubits and optical modes to atomic ensembles, trapped atoms, and solid-state systems. Analysing the current state of the art, we finish by discussing open issues, challenges and potential future implementations.

    Comment: Nature Photonics Review. Comments are welcome. This is a slightly expanded arXiv version (14 pages, 5 figures, 1 table).
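    For reference, the standard qubit teleportation protocol that the review builds on can be simulated directly on a three-qubit state vector. The numpy sketch below (not from the review) prepares a Bell pair, performs Alice's Bell-basis measurement, and applies Bob's conditional Pauli correction.

```python
# Simulation of standard quantum teleportation on a 3-qubit state vector:
# qubit 0 holds the unknown state, qubits 1-2 share a Bell pair.
import numpy as np

rng = np.random.default_rng(1)

def apply(gate, qubits, state, n=3):
    """Apply `gate` to the listed qubits of an n-qubit state vector."""
    k = len(qubits)
    t = np.moveaxis(state.reshape([2] * n), qubits, range(k))
    t = (gate @ t.reshape(2 ** k, -1)).reshape([2] * n)
    return np.moveaxis(t, range(k), qubits).reshape(-1)

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]                  # control = first listed qubit

alpha, beta = 0.6, 0.8j                         # unknown state on qubit 0
psi = np.zeros(8, complex); psi[0], psi[4] = alpha, beta

psi = apply(CNOT, [1, 2], apply(H, [1], psi))   # Bell pair on qubits 1, 2
psi = apply(H, [0], apply(CNOT, [0, 1], psi))   # rotate into the Bell basis

amps = psi.reshape(2, 2, 2)
probs = (np.abs(amps) ** 2).sum(axis=2)         # joint outcome probabilities
a, b = divmod(rng.choice(4, p=probs.ravel()), 2)
bob = amps[a, b] / np.sqrt(probs[a, b])         # Bob's conditional state

if b: bob = X @ bob                             # classically communicated
if a: bob = Z @ bob                             # Pauli corrections
print(np.allclose(bob, [alpha, beta]))          # True: state teleported
```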

    Raster to vector conversion: creating a unique handprint each time

    When a person composes a document by hand, there is random variability in what is produced. That is, every letter is different from all others. If the person produces seven a's, none will be the same. This is not true when a computer prints something. When the computer produces seven a's, they are all exactly the same. However, even with the variability inherent in a person's handwriting, when two people write something and their output is compared side by side, the two hands often appear as different as fonts from two computer font families. In fact, if the two were intermixed to produce text that has characters from each hand, it would not look right! The goal of this application is to improve the ability to digitally create testing materials (i.e., data collection documents) that give the appearance of being filled out manually (that is, by a person). We developed a set of capabilities that allow us to generate digital test decks using a raster database of handprinted characters, organized into hands (a single person's handprint). We wish to expand these capabilities using vector characters. The raster database has much utility to produce digital test deck materials. Vector characters, it is hoped, will allow greater control to morph the digital test data within certain constraints. The long-term goal is to have a valid set of computer-generated hands that is virtually indistinguishable from characters created by a person.
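    One way vector characters could permit the controlled morphing described above is to jitter a glyph's control points within bounds, so every rendered instance differs while remaining legible. The glyph coordinates, jitter limits, and rotation range in this sketch are invented for illustration.

```python
# Toy vector-morphing sketch: each glyph is a set of 2D control points, and
# every instance applies bounded point jitter plus a slight rotation.
import numpy as np

rng = np.random.default_rng(7)

# Made-up control points of a handprinted-style "a" (illustrative only).
GLYPH_A = np.array([[0.80, 0.60], [0.40, 0.80], [0.10, 0.50], [0.30, 0.10],
                    [0.70, 0.20], [0.80, 0.60], [0.85, 0.05]])

def instance(glyph, max_jitter=0.03, max_rot_deg=3.0):
    """One unique instance: small jitter and rotation, bounded so the
    character stays legible (the 'certain constraints' above)."""
    pts = glyph + rng.uniform(-max_jitter, max_jitter, glyph.shape)
    theta = np.radians(rng.uniform(-max_rot_deg, max_rot_deg))
    c, s = np.cos(theta), np.sin(theta)
    centre = pts.mean(axis=0)
    return (pts - centre) @ np.array([[c, -s], [s, c]]) + centre

seven = [instance(GLYPH_A) for _ in range(7)]
print(all(not np.allclose(seven[i], seven[j])      # True: no two instances
          for i in range(7) for j in range(i + 1, 7)))  # are identical
```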

    Reliability Monitoring Based on Higher-Order Statistics: A Scalable Proposal for the Smart Grid

    The increasing development of the smart grid demands reliable monitoring of power quality at different levels, introducing more and more measurement points. In this framework, the advanced metering infrastructure must deal with this large amount of data, providing storage capabilities, improving visualization, and introducing customer-oriented interfaces. This work proposes a method that condenses smart grid measurement data by monitoring the actual voltage supplied using higher-order statistics. The method monitors the network from a scalable point of view and offers a two-fold perspective based on the utility-prosumer duality as a function of the measurement time. A global power quality (PQ) index and 2D graphs are introduced in order to compress the time-domain information and quantify deviations of the waveform shape by means of three parameters. Time-scalability allows two extra features: long-term supply reliability and short-term power quality assessment. As a case study, the work illustrates real-life monitoring at a building connection point, offering 2D diagrams that demonstrate the time- and space-compression capabilities.
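    A hedged illustration of waveform monitoring with higher-order statistics: for an ideal sine wave of peak amplitude A, the per-cycle variance is A²/2, the skewness is 0, and the (Fisher) kurtosis is -1.5, so per-cycle deviations from these targets flag waveform-shape anomalies. The 50 Hz test signal, the injected sag, and the thresholds below are illustrative assumptions, not the paper's index.

```python
# Cycle-by-cycle higher-order-statistics check on a synthetic mains voltage.
import numpy as np
from scipy.stats import skew, kurtosis

FS, F0, A = 10_000, 50, 230 * np.sqrt(2)     # sample rate, mains freq, peak
t = np.arange(0, 1, 1 / FS)
v = A * np.sin(2 * np.pi * F0 * t)
v[4000:6000] *= 0.7                          # inject a 30% sag for 0.2 s

spc = FS // F0                               # samples per cycle
for k, cyc in enumerate(v.reshape(-1, spc)):
    var, sk, ku = cyc.var(), skew(cyc), kurtosis(cyc)  # Fisher kurtosis
    ok = (abs(var - A**2 / 2) < 0.1 * A**2 / 2         # ideal: A^2/2
          and abs(sk) < 0.1                            # ideal: 0
          and abs(ku + 1.5) < 0.1)                     # ideal: -1.5
    if not ok:
        print(f"cycle {k}: var={var:.0f} skew={sk:.2f} kurt={ku:.2f}")
```

    Running this flags exactly the ten sagged cycles via the variance term, while skewness and kurtosis remain at their sinusoidal values there; shape distortions such as harmonics or clipping would move the other two parameters instead.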

    Low bit-rate image sequence coding

