97 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    Get PDF
    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, restricted to compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term technology for implementing video data compression in a high-speed imaging system, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Refined Reliability Combining for Binary Message Passing Decoding of Product Codes

    Get PDF
    We propose a novel soft-aided iterative decoding algorithm for product codes (PCs). The proposed algorithm, named iterative bounded distance decoding with combined reliability (iBDD-CR), enhances the conventional iterative bounded distance decoding (iBDD) of PCs by exploiting some level of soft information. In particular, iBDD-CR can be seen as a modification of iBDD where the hard decisions of the row and column decoders are made based on a reliability estimate of the BDD outputs. The reliability estimates are derived using extrinsic message passing for generalized low-density parity-check (GLDPC) ensembles, which encompass PCs. We perform a density evolution analysis of iBDD-CR for the GLDPC ensemble for transmission over the additive white Gaussian noise channel, considering both binary transmission and bit-interleaved coded modulation with quadrature amplitude modulation. We show that iBDD-CR achieves performance gains of up to 0.51 dB compared to iBDD with the same internal decoder data flow. This makes the algorithm an attractive solution for very high-throughput applications such as fiber-optic communications.
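
    As an illustration only, the reliability-combining step described above can be sketched in a few lines of Python: the hard decision on each bit is the sign of the channel LLR plus a weighted contribution from the component BDD output, with failed BDD decodings contributing nothing. The weight and the interface below are assumptions made for the sketch; in the paper the combining rule and its scaling are derived via density evolution.

        import numpy as np

        def combined_reliability_decision(channel_llr, bdd_output, bdd_success, weight=1.0):
            """Toy sketch of reliability combining for one component decoder.

            channel_llr : channel log-likelihood ratios, one per bit
            bdd_output  : BDD hard decisions in {0, 1}
            bdd_success : True where BDD returned a codeword, False on failure
            weight      : scaling of the BDD contribution (a free knob here)
            """
            # Map BDD hard decisions {0, 1} to bipolar {+1, -1}.
            bdd_bipolar = 1.0 - 2.0 * bdd_output.astype(float)
            # Failed BDD positions contribute nothing to the combined reliability.
            combined = channel_llr + weight * bdd_bipolar * bdd_success
            # Negative combined reliability means the bit is decided as 1.
            return (combined < 0).astype(int)

        # Example: six bits, BDD succeeded on the first four.
        llr = np.array([1.2, -0.3, 0.1, -2.0, 0.4, -0.1])
        bdd = np.array([0, 0, 1, 1, 0, 1])
        ok = np.array([True, True, True, True, False, False])
        print(combined_reliability_decision(llr, bdd, ok, weight=1.5))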

    Roadmap of optical communications

    Get PDF
    © 2016 IOP Publishing Ltd. Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfield, to contribute a section to this invited review article, summarizing their views on the state of the art and future developments in optical communications.

    Bio-inspired log-polar based color image pattern analysis in multiple frequency channels

    Get PDF
    The main topic addressed in this thesis is to implement color image pattern recognition based on the lateral inhibition subtraction phenomenon combined with a complex log-polar mapping in multiple spatial frequency channels. It is shown that the individual red, green, and blue channels have different recognition performances when put in the context of the former work done by Dragan Vidacic. It is observed that the green channel performs better than the other two channels, with the blue channel having the poorest performance. Following the application of a contrast stretching function, the object recognition performance is improved in all channels. Multiple spatial frequency filters were designed to simulate the filtering channels that occur in the human visual system. Following these preprocessing steps, Dragan Vidacic's methodology is applied in order to determine the benefits obtained from the preprocessing steps being investigated. It is shown that performance gains are realized by using such preprocessing steps.
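
    As a minimal numpy sketch (not the thesis implementation; grid sizes, percentiles and names are illustrative assumptions), the two preprocessing ideas mentioned above, complex log-polar resampling and contrast stretching, can be applied to a single colour channel as follows.

        import numpy as np

        def log_polar_map(channel, n_rho=64, n_theta=128):
            """Nearest-neighbour log-polar resampling of a single channel:
            output rows index log-radius, columns index angle."""
            h, w = channel.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            r_max = min(cx, cy)
            rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))   # log-spaced radii
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            rr, tt = np.meshgrid(rho, theta, indexing="ij")
            ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
            xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
            return channel[ys, xs]

        def contrast_stretch(channel, lo_pct=2, hi_pct=98):
            """Percentile-based contrast stretching to the range [0, 1]."""
            lo, hi = np.percentile(channel, [lo_pct, hi_pct])
            return np.clip((channel - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

        # Usage on the green channel of a synthetic RGB image.
        rgb = np.random.rand(128, 128, 3)
        green_lp = log_polar_map(contrast_stretch(rgb[:, :, 1]))
        print(green_lp.shape)   # (64, 128)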

    Television Bandwidth Compression

    Get PDF
    Electrical Engineering

    Manned spacecraft advanced digital television compression study. Volume 1 - Text Final report

    Get PDF
    Manned spacecraft advanced digital television compression study

    Video coding based on fractals and sparse representations

    Get PDF
    Orientador: Hélio Pedrini. Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: A video is a sequence of still images representing scenes in motion: consecutive frames are extremely similar, separated only occasionally by abrupt changes in content. Transmitting and storing these images without any kind of preprocessing would require a massive amount of storage space and communication channels with very high bandwidths. Lossy compression methods were created to reduce the number of bits used to represent this kind of data. These methods generally consist of an encoder and a decoder, where the encoder generates a sequence of bits representing an acceptable approximation of the video in a predefined format and the decoder reads this sequence, converting it back into a series of images. Transmitting video under extremely limited bandwidth has important applications such as video conferencing and closed-circuit television. Two approaches aimed at this application are explored in this work: decomposition based on sparse representations and fractal coding. Most video coders are based on invertible transforms capable of representing spatially smooth images with few non-zero coefficients. Sparse representations are a generalization of this idea in which the transform uses an overcomplete dictionary, a set with more elements than the dimension of the vector space in which the transform operates. The data can be projected onto this dictionary using a fast heuristic called Matching Pursuit. A video encoder combining this heuristic with a machine learning algorithm that constructs the overcomplete dictionary is proposed. Fractal encoders represent an approximation of the image through an iterated function system. To do so, a sequence of instructions, called a collage, is created and transmitted; the collage reconstructs an approximation of the original image from a smaller-scale version of it. The collage is built in such a way that, when applied repeatedly to any initial image, contracting it before each iteration, the result converges to an approximation of the encoded image. Simpler and faster methods for creating the collage, and a generalization of these methods to video compression, are presented. Instead of constructing the collage by matching any block from the smaller scale to the original scale, only a small subset of candidate blocks is considered. The proposed video encoding method groups consecutive frames into a volumetric fractal; the collage maps three-dimensional blocks between scales, using a smaller scale in both space and time. An adaptation of this method for communication channels with unstable bandwidth is also proposed. Mestrado. Ciência da Computação. Mestre em Ciência da Computação.
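
    As a rough illustration of the Matching Pursuit heuristic and the overcomplete dictionary mentioned above, the Python sketch below greedily projects a signal onto unit-norm atoms; the dictionary here is random and the sizes are arbitrary, whereas the thesis pairs this step with a learned dictionary and with fractal coding, neither of which is shown.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms=10):
            """Greedy Matching Pursuit: repeatedly pick the atom most correlated
            with the residual, record its coefficient, subtract its contribution.
            `dictionary` holds unit-norm atoms as columns (more columns than rows)."""
            residual = signal.astype(float).copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                correlations = dictionary.T @ residual
                k = np.argmax(np.abs(correlations))
                coeffs[k] += correlations[k]
                residual -= correlations[k] * dictionary[:, k]
            return coeffs, residual

        # Example: 64-dimensional signal, 256-atom overcomplete dictionary.
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
        x = rng.standard_normal(64)
        coeffs, res = matching_pursuit(x, D, n_atoms=20)
        print(np.count_nonzero(coeffs), np.linalg.norm(res))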

    Probabilistic modeling for single-photon lidar

    Full text link
    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are twofold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems. We first address the problem of high levels of ambient light, which causes traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively-dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution and simple order-statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that can form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored. 
We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speed-up of data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
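
    The subtractive dithering idea can be sketched in a few lines of Python (the parameters and the plain-mean estimator below are illustrative assumptions; the thesis develops a generalized Gaussian noise model and order-statistics-based estimators instead): a known uniform offset shifts the quantizer bin edges before quantization and is subtracted afterwards, so the quantization error becomes signal-independent and averaging recovers sub-bin precision.

        import numpy as np

        def subtractively_dithered_quantize(detection_times, bin_width, rng=None):
            """Quantize with a known uniform dither added before rounding and
            subtracted after, mimicking shifted time-quantization bin edges."""
            rng = rng or np.random.default_rng()
            dither = rng.uniform(-bin_width / 2, bin_width / 2, size=len(detection_times))
            quantized = np.round((detection_times + dither) / bin_width) * bin_width
            return quantized - dither

        # Example: averaging many dithered measurements beats the raw bin width.
        true_delay = 1.234e-9                      # seconds
        times = np.full(1000, true_delay)
        bin_w = 1e-9                               # coarse 1 ns timing bins
        estimate = subtractively_dithered_quantize(times, bin_w).mean()
        print(f"estimate = {estimate:.3e} s (bin width {bin_w:.0e} s)")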

    Scalable Front End Designs for Communication and Learning

    Get PDF
    In this work we provide three examples of estimation/detection problems for which customizing the front end to the specific application makes the system more efficient and scalable. The three problems we consider are all classical, but face new scalability challenges. This introduces additional constraints, accounting for which results in front end designs that are very distinct from the conventional approaches. The first two case studies pertain to the canonical problems of synchronization and equalization for communication links. As system bandwidths scale, challenges arise due to the limited resolution of analog-to-digital converters (ADCs). We discuss system designs that react to this bottleneck by drastically relaxing the precision requirements of the front end and correspondingly modifying the back end algorithms using Bayesian principles. The third problem we discuss belongs to the field of computer vision. Inspired by research in neuroscience on the mammalian visual system, we redesign the front end of a machine vision system to be neuro-mimetic, followed by layers of unsupervised learning using simple k-means clustering. This results in a framework that is intuitive, more computationally efficient than the approach of supervised deep networks, and amenable to the increasing availability of large amounts of unlabeled data. We first consider the problem of blind carrier phase and frequency synchronization in order to obtain insight into the performance limitations imposed by severe quantization constraints. We adopt a mixed-signal analog front end that coarsely quantizes the phase and employs a digitally controlled feedback that applies a phase shift prior to the ADC; this acts as a controllable dither signal and aids the estimation process. We propose a control policy for the feedback and show that, combined with blind Bayesian algorithms, it results in excellent performance, close to that of an unquantized system. Next, we take up the problem of channel equalization with severe limits on the number of slicers available for the ADC. We find that the standard flash ADC architecture can be highly sub-optimal in the presence of such constraints. Hence we explore a "space-time" generalization of the flash architecture by allowing a fixed number of slicers to be dispersed in time (sampling phase) as well as space (i.e., amplitude). We show that optimizing the slicer locations, conditioned on the channel, results in significant gains in bit error rate (BER) performance. Finally, we explore alternative ways of learning convolutional nets for machine vision, making them easier to interpret and simpler to implement than currently used purely supervised nets. In particular, we investigate a framework that combines a neuro-mimetic front end (designed in collaboration with neuroscientists from the psychology department at UCSB) with unsupervised feature extraction based on clustering. Supervised classification, using a generic support vector machine (SVM), is applied at the end. We obtain competitive classification results on standard image databases, beating the state of the art for NORB (uniform-normalized) and approaching it for MNIST.
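
    A minimal sketch of the clustering-based unsupervised feature learning stage described above (assuming scikit-learn; the patch size, cluster count and all other parameters are placeholders): small normalized patches are clustered with k-means, and the centroids act as a learned filter bank that a full pipeline would follow with convolution, pooling and a linear SVM.

        import numpy as np
        from sklearn.cluster import KMeans

        def learn_patch_dictionary(images, patch_size=6, n_clusters=64, n_patches=5000, rng=None):
            """Sample small patches, normalize them, and cluster with k-means;
            the returned centroids serve as unsupervised convolutional filters."""
            rng = rng or np.random.default_rng(0)
            h, w = images.shape[1:3]
            patches = []
            for _ in range(n_patches):
                img = images[rng.integers(len(images))]
                y = rng.integers(h - patch_size)
                x = rng.integers(w - patch_size)
                p = img[y:y + patch_size, x:x + patch_size].ravel().astype(float)
                p = (p - p.mean()) / (p.std() + 1e-8)      # per-patch normalization
                patches.append(p)
            km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0)
            km.fit(np.array(patches))
            return km.cluster_centers_

        # Usage with random stand-in images.
        imgs = np.random.rand(100, 28, 28)
        filters = learn_patch_dictionary(imgs)
        print(filters.shape)   # (64, 36)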

    Integrated photonic transmitters for secure space quantum communication

    Get PDF
    An important issue in today's information society is the security of data transmission against potential intruders, who always put confidentiality at risk. Current methods to increase security require that the two parties wishing to transmit information exchange or share one or more security keys. Once the key has been established, the information can be transferred in a provably secure way using a one-time pad, i.e., a key as long as the plaintext. The security of the information transmission is therefore based exclusively on the security of the key exchange. Quantum cryptography, or more precisely quantum key distribution (QKD), guarantees absolutely secure key distribution based on the principles of quantum physics, according to which it is not possible to measure or reproduce a state (e.g. the polarization or phase of a photon) without being detected. The key is generated from the measurement of information encoded into specific quantum states of a photon, named qubits. For example, a qubit can be created using properties such as the polarization or the phase of a photon. The goals achieved in this thesis are the development of a new class of high-speed integrated photonic sources for applications in quantum key distribution systems, capable of producing unprecedented qubit rates (100 Mbps - 1 Gbps) and transmitting them over longer distances than achieved so far (>200 km). More specifically, the work has focused on developing faint pulse sources that can be used in very demanding environmental conditions, such as those in space. For the development of these sources, the opto-mechanical engineering and the integration with the electronics are as essential as the optical design. One of the objectives was to achieve a very high level of integration and power efficiency, e.g. volumes and power consumption between 10 and 100 times smaller than those typical of a laboratory experiment. Moreover, work on related parts of a complete QKD transmission system has been carried out. In particular, a new scheme for a compact, fast and simple random number generator has been successfully demonstrated, achieving a random number generation rate of 1.1 Gbps. Also, during the course of this thesis, the development and engineering of a free-space QKD optical link has been initiated. This thesis makes use of novel ideas to demonstrate proof-of-concept experiments that could later develop into commercial products. To this end, close collaborations with world-leading companies in the field have been established. The Optoelectronics Group at ICFO has been involved in current European Space Agency (ESA) projects to develop a small-footprint, low-power-consumption quantum transceiver and a high-flux entangled photon source.
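
    To make the QKD principle referred to above concrete, here is a toy Python sketch of BB84 basis sifting over an ideal, noiseless channel; it illustrates only the textbook protocol, not the faint-pulse transmitter, the entangled photon source, or the random number generator developed in the thesis.

        import numpy as np

        def bb84_sift(n_qubits=20, rng=None):
            """Alice encodes random bits in random bases, Bob measures in random
            bases, and only positions where the bases agree enter the sifted key."""
            rng = rng or np.random.default_rng()
            alice_bits = rng.integers(0, 2, n_qubits)
            alice_bases = rng.integers(0, 2, n_qubits)    # 0 = rectilinear, 1 = diagonal
            bob_bases = rng.integers(0, 2, n_qubits)
            # Ideal channel: Bob's result equals Alice's bit when bases match,
            # and is random otherwise.
            bob_bits = np.where(bob_bases == alice_bases,
                                alice_bits,
                                rng.integers(0, 2, n_qubits))
            keep = alice_bases == bob_bases               # bases compared publicly
            return alice_bits[keep], bob_bits[keep]

        key_a, key_b = bb84_sift()
        print(key_a, key_b, np.array_equal(key_a, key_b))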