15 research outputs found

    Short random circuits define good quantum error correcting codes

    We study the encoding complexity for quantum error correcting codes with large rate and distance. We prove that random Clifford circuits with $O(n \log^2 n)$ gates can be used to encode $k$ qubits in $n$ qubits with a distance $d$ provided $\frac{k}{n} < 1 - \frac{d}{n} \log_2 3 - h(\frac{d}{n})$. In addition, we prove that such circuits typically have a depth of $O(\log^3 n)$. Comment: 5 pages
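    For intuition, the minimal sketch below (not from the paper) evaluates the quoted rate bound numerically, taking $h$ to be the binary entropy function as is standard in such bounds; the function and variable names are illustrative assumptions.

        import math

        def binary_entropy(p):
            """Binary entropy h(p) in bits; h(0) = h(1) = 0."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def max_rate(relative_distance):
            """Largest k/n allowed by the bound k/n < 1 - (d/n)*log2(3) - h(d/n)."""
            delta = relative_distance
            return 1 - delta * math.log2(3) - binary_entropy(delta)

        # Example: a relative distance of 5% still permits a rate of about 0.63.
        print(max_rate(0.05))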

    Automatic extraction and tracking of face sequences in MPEG video

    Ph.D. (Doctor of Philosophy)

    Real-time scalable video coding for surveillance applications on embedded architectures


    Novel cardiovascular magnetic resonance phenotyping of the myocardium

    INTRODUCTION: Left ventricular (LV) microstructure is unique, composed of a winding helical pattern of myocytes and rotating aggregations of myocytes called sheetlets. Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease characterised by left ventricular hypertrophy (LVH); however, the link between LVH and the underlying microstructural aberration is poorly understood. In vivo cardiovascular diffusion tensor imaging (cDTI) is a novel cardiovascular MRI (CMR) technique capable of characterising LV microstructural dynamics non-invasively. In vivo cDTI may therefore improve our understanding of microstructural-functional relationships in health and disease.
    METHODS AND RESULTS: The monopolar diffusion-weighted stimulated echo acquisition mode (DW-STEAM) sequence was evaluated for in vivo cDTI acquisitions at 3 Tesla in healthy volunteers (HV), patients with hypertensive LVH, and HCM patients. Results were contextualised in relation to extensively explored technical limitations. cDTI parameters demonstrated good intra-centre reproducibility in HCM and good inter-centre reproducibility in HV. In all subjects, cDTI was able to depict the winding helical pattern of myocyte orientation known from histology, and the transmural rate of change in myocyte orientation was dependent on LV size and thickness. In HV, comparison of cDTI parameters between systole and diastole revealed an increase in transmural gradient combined with a significant re-orientation of sheetlet angle. In contrast, in HCM, the myocyte gradient increased between phases, but sheetlet angulation retained a systolic-like orientation in both phases. Combined analysis with hypertensive patients revealed a proportional decrease in sheetlet mobility with increasing LVH.
    CONCLUSION: In vivo DW-STEAM cDTI can characterise LV microstructural dynamics non-invasively. The transmural rate of change in myocyte angulation is dependent on LV size and wall thickness, whereas inter-phase changes in myocyte orientation are unaffected by LVH. In contrast, sheetlet dynamics demonstrate increasing dysfunction in proportion to the degree of LVH. Resolving technical limitations is key to advancing this technique and improving the understanding of the role of microstructural abnormalities in cardiovascular disease expression.

    Unreliable and resource-constrained decoding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).
    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.
    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.
    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.
    The effect of noise on iterative message-passing decoders for low-density parity check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.
    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.
    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.
    by Lav R. Varshney. Ph.D.
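    To make the noisy-decoder density-evolution idea concrete, here is a minimal sketch (an illustration, not the thesis's exact derivation): it iterates the standard Gallager A density-evolution recursion for a (3,6)-regular LDPC ensemble on a binary symmetric channel, and models decoder noise by passing every exchanged message through an additional BSC with crossover probability alpha. Where the thesis places the noise and which decoders it analyses should be taken from the thesis itself; the noise placement here is an assumption.

        def noisy_gallager_a_de(eps, alpha, dv=3, dc=6, iters=200):
            """Density evolution for Gallager A on BSC(eps), with each passed
            message independently flipped with probability alpha (decoder noise).
            Returns the variable-to-check message error probability after `iters`
            iterations."""
            x = eps  # initial message error probability equals the channel error rate
            for _ in range(iters):
                x_noisy = x * (1 - alpha) + (1 - x) * alpha          # noisy wires
                q = (1 - (1 - 2 * x_noisy) ** (dc - 1)) / 2          # check-node update
                q_noisy = q * (1 - alpha) + (1 - q) * alpha          # noisy wires
                # Variable-node update (Gallager A): flip the channel value only if
                # all dv-1 incoming check messages agree and contradict it.
                x = eps * (1 - (1 - q_noisy) ** (dv - 1)) + (1 - eps) * q_noisy ** (dv - 1)
            return x

        # A noiseless decoder (alpha=0) drives the error probability toward zero
        # below threshold; a small alpha > 0 leaves a residual error floor.
        print(noisy_gallager_a_de(0.03, 0.0), noisy_gallager_a_de(0.03, 0.005))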

    Biologically-inspired Neural Networks for Shape and Color Representation

    The goal of human-level performance in artificial vision systems is yet to be achieved. With this goal, a reasonable choice is to simulate this biological system with computational models that mimic its visual processing. A complication with this approach is that the human brain, and similarly its visual system, are not fully understood. On the bright side, with remarkable findings in the field of visual neuroscience, many questions about visual processing in the primate brain have been answered in the past few decades. Nonetheless, a lag in incorporating these new discoveries into biologically-inspired systems is evident. The present work introduces novel biologically-inspired models that employ new findings of shape and color processing into analytically-defined neural networks. In contrast to most current methods that attempt to learn all aspects of behavior from data, here we propose to bootstrap such learning by building upon existing knowledge rather than learning from scratch. Put simply, the processing networks are defined analytically using current neural understanding and learned where such knowledge is not available. This is thus a hybrid strategy that hopefully combines the best of both worlds. Experiments on the artificial neurons in the proposed networks demonstrate that these neurons mimic the studied behavior of biological cells, suggesting a path forward for incorporating analytically-defined artificial neural networks into computer vision systems.
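    As a toy illustration of what an analytically-defined (rather than learned) neuron can look like, the sketch below hard-codes a single-opponent red-green unit whose weights come from a fixed opponent-color definition instead of training. The specific weights, rectification, and names are assumptions for illustration and are not the networks proposed in this work.

        import numpy as np

        def red_green_opponent_response(image_rgb):
            """Response of a hand-defined R+/G- single-opponent unit.

            image_rgb: H x W x 3 array with values in [0, 1].
            The weights (+1 for R, -1 for G, 0 for B) are fixed analytically,
            not learned; rectification models the neuron's firing threshold."""
            weights = np.array([1.0, -1.0, 0.0])          # opponent-color weights
            drive = np.tensordot(image_rgb, weights, axes=([2], [0]))
            return np.maximum(drive.mean(), 0.0)          # rectified mean response

        # A reddish patch excites the unit; a greenish patch silences it.
        red_patch = np.zeros((8, 8, 3)); red_patch[..., 0] = 1.0
        green_patch = np.zeros((8, 8, 3)); green_patch[..., 1] = 1.0
        print(red_green_opponent_response(red_patch), red_green_opponent_response(green_patch))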

    Ride the Tide: Observing CRISPR/Cas9 genome editing by the numbers

    Targeted genome editing has become a powerful genetic tool for modifying DNA sequences in their natural chromosomal context. CRISPR RNA-guided nucleases have recently emerged as an efficient targeted editing tool for multiple organisms. Hereby, a double-strand break is introduced at a targeted DNA site. During DNA repair, genomic alterations are introduced which can change the function of the DNA code. However, our understanding of how CRISPR works is incomplete, and it is still hard to predict CRISPR activity at precise target sites. The highly ordered structure of the eukaryotic genome may play a role in this. The organization of the genome is controlled by dynamic changes in DNA methylation, histone modification, histone variant incorporation, and nucleosome remodelling. The influence of nuclear organization and chromatin structure on transcription is reasonably well known, but we are just beginning to understand its effect on genome editing by CRISPR.

    Contribution of Quality of Experience to the optimization of multimedia services: application to video streaming and VoIP

    The emergence and fast growth of multimedia services have created new challenges for network service providers, who must guarantee the best user Quality of Experience (QoE) in diverse networks with distinct access technologies. Usually, various methods and techniques are used to predict the user satisfaction level by studying the combined impact of numerous factors. In this thesis, we consider two important multimedia services for evaluating user perception: the video streaming service and VoIP. This study investigates the user's QoE along three directions: (1) methodologies for subjective QoE assessment of video services, (2) regulating the user's QoE using a rate-adaptive video algorithm, and (3) QoE-based power-efficient resource allocation methods for Long Term Evolution-Advanced (LTE-A) for VoIP. Initially, we describe two subjective methods to collect the dataset for assessing the user's QoE. The subjectively collected dataset is used to investigate the influence of different parameters (e.g. QoS, video type, user profile) on user satisfaction while using video services. Later, we propose a client-based HTTP rate-adaptive video streaming algorithm over the TCP protocol to regulate the user's QoE. The proposed method considers three Quality of Service (QoS) parameters that govern user perception: Bandwidth, Buffer, and dropped Frame rate (BBF). The BBF method dynamically selects the suitable video quality according to network conditions and the user's device properties. Lastly, we propose a QoE-driven downlink scheduling method, the QoE Power Efficient Method (QEPEM), for LTE-A. It efficiently allocates the radio resources and optimizes the use of User Equipment (UE) power by utilizing the Discontinuous Reception (DRX) method in LTE-A.
    The emergence and rapid growth of multimedia services in IP networks have created new challenges for network service providers, who, beyond the Quality of Service (QoS) derived from the technical parameters of their network, must also guarantee the best user-perceived quality (Quality of Experience, QoE) in varied networks with different access technologies. Usually, different methods and techniques are used to predict the user's satisfaction level by analysing the combined effect of multiple factors. In this thesis, we are interested in network control that integrates both qualitative aspects (perception of the user's satisfaction level) and quantitative aspects (measurement of network parameters), with the objective of developing mechanisms able both to adapt to the variability of the collected measurements and to improve the perceived quality. To this end, we studied the case of two popular multimedia services: video streaming and Voice over IP (VoIP). We investigate the user QoE of these services along three directions: (1) methodologies for subjective QoE assessment of a video service, (2) video stream adaptation techniques to guarantee a given QoE level, and (3) resource allocation methods that take QoE into account while saving energy, for a VoIP service over LTE-A. We first present two methods for collecting QoE datasets. We then use these datasets (obtained from the subjective assessment campaigns we conducted) to understand the influence of different parameters (network, terminal, user profile) on a user's perception of a video service. We then propose an adaptive video streaming algorithm, implemented in an HTTP client, whose goal is to ensure a given QoE level, and compare it to the state of the art. Our algorithm takes three QoS parameters into account (bandwidth, receive buffer size, and packet loss rate) and dynamically selects the appropriate video quality according to network conditions and the properties of the user's terminal. Finally, we propose QEPEM (QoE Power Efficient Method), a QoE-based scheduling algorithm for an LTE wireless network, focusing on dynamic allocation of radio resources while taking energy consumption into account.
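    As a rough illustration of the kind of decision rule the BBF description implies, the sketch below selects a video representation from measured bandwidth, buffer level, and dropped-frame rate; the thresholds, weighting, and function names are assumptions for illustration and are not the thesis's actual algorithm.

        def select_bitrate(bitrates_kbps, bandwidth_kbps, buffer_s, dropped_frame_rate,
                           min_buffer_s=10.0, max_drop_rate=0.05):
            """Pick the highest sustainable bitrate from the three BBF signals.

            bitrates_kbps: available representations, sorted ascending.
            A low buffer or a high dropped-frame rate forces a more conservative
            choice by shrinking the usable fraction of the measured bandwidth."""
            safety = 1.0
            if buffer_s < min_buffer_s:
                safety *= buffer_s / min_buffer_s          # protect against stalls
            if dropped_frame_rate > max_drop_rate:
                safety *= 0.5                              # device cannot keep up
            budget = bandwidth_kbps * safety
            feasible = [b for b in bitrates_kbps if b <= budget]
            return feasible[-1] if feasible else bitrates_kbps[0]

        # Example: plenty of bandwidth but a nearly empty buffer forces a low rung.
        print(select_bitrate([400, 800, 1600, 3200], bandwidth_kbps=3000,
                             buffer_s=2.0, dropped_frame_rate=0.01))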