
    On coding labeled trees

    Trees are probably the most studied class of graphs in Computer Science. In this thesis we study bijective codes that represent labeled trees by means of strings of node labels. We contribute to the understanding of their algorithmic tractability, their properties, and their applications. The thesis is divided into two parts. In the first part we focus on two types of tree codes, namely Prüfer-like codes and Transformation codes. We study optimal encoding and decoding algorithms, both in a sequential and in a parallel setting. We propose a unified approach that works for all Prüfer-like codes, and a more generic scheme, based on the transformation of a tree into a functional digraph, suitable for all bijective codes. Our results in this area close a variety of open problems. We also consider possible applications of tree encodings, discussing how to exploit these codes in Genetic Algorithms and in the generation of random trees. Moreover, we introduce a modified version of a known code that, in Genetic Algorithms, outperforms all the other known codes. In the second part of the thesis we focus on two possible generalizations of our work. We first consider the classes of k-trees and k-arch graphs (both superclasses of trees): we study bijective codes for these classes of graphs and their algorithmic feasibility. Then we shift our attention to Informative Labeling Schemes. In this context labels are no longer treated as simple unique node identifiers; rather, they convey information useful for efficient computations on the tree. We exploit this idea to design a concurrent data structure for the lowest common ancestor problem on dynamic trees. We also present an experimental comparison between our labeling scheme and the one proposed by Peleg for static trees.
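
    As a concrete illustration of the kind of bijective code studied here, below is a minimal Python sketch of the classical Prüfer encoding and decoding (the textbook algorithm, not the thesis's optimal or parallel variants), using a min-heap for an O(n log n) sequential implementation:

        import heapq

        def prufer_encode(n, edges):
            """Prüfer code of a labeled tree on {0, ..., n-1} given its n-1 edges."""
            adj = [set() for _ in range(n)]
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            leaves = [v for v in range(n) if len(adj[v]) == 1]
            heapq.heapify(leaves)
            code = []
            for _ in range(n - 2):
                leaf = heapq.heappop(leaves)    # smallest remaining leaf
                neighbor = adj[leaf].pop()      # its unique neighbor
                code.append(neighbor)
                adj[neighbor].discard(leaf)
                if len(adj[neighbor]) == 1:     # the neighbor became a leaf
                    heapq.heappush(leaves, neighbor)
            return code

        def prufer_decode(code, n):
            """Rebuild the tree edges from a Prüfer sequence of length n-2."""
            degree = [1] * n
            for v in code:
                degree[v] += 1
            leaves = [v for v in range(n) if degree[v] == 1]
            heapq.heapify(leaves)
            edges = []
            for v in code:
                leaf = heapq.heappop(leaves)
                edges.append((leaf, v))
                degree[v] -= 1
                if degree[v] == 1:
                    heapq.heappush(leaves, v)
            # the two remaining leaves form the last edge
            edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
            return edges

    The bijection is what makes such codes attractive in Genetic Algorithms: every string of n-2 labels decodes to a valid tree, so crossover and mutation on codes never produce invalid individuals.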

    Book announcements

    The Spanish version is available at: http://hdl.handle.net/11703/10236

    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to the global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule where the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
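
    A toy caricature of this idea (not the authors' spiking model; the binary-voting setup, names, and the exact modulation rule below are illustrative assumptions) is a population of stochastic units whose REINFORCE-style eligibility traces are modulated not only by the global reward but also by each unit's agreement with the population response:

        import numpy as np

        rng = np.random.default_rng(0)
        N, D, eta = 50, 20, 0.1                  # population size, input dim, learning rate
        W = rng.normal(0, 0.1, size=(N, D))      # one weight vector per unit

        def trial(x, target, population_feedback=True):
            p = 1.0 / (1.0 + np.exp(-W @ x))         # per-unit firing probability
            a = (rng.random(N) < p).astype(float)    # sampled binary responses
            decision = float(a.mean() > 0.5)         # population response
            R = 1.0 if decision == target else -1.0  # global reinforcement
            elig = np.outer(a - p, x)                # REINFORCE eligibility d log p / dW
            if population_feedback:
                # second feedback signal: each unit also learns from how its own
                # response relates to the population response, so its update no
                # longer depends only on the weak global reward
                modulation = R * np.where(a == decision, 1.0, -1.0)
            else:
                modulation = np.full(N, R)           # global reinforcement only
            return eta * modulation[:, None] * elig, R

        # online learning loop on an arbitrary learnable rule
        for t in range(2000):
            x = rng.normal(size=D)
            dW, _ = trial(x, float(x[0] > 0))
            W += dW

    With the population-feedback term every unit receives an informative error signal on every trial, which is the intuition behind the speed-up reported in the abstract.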

    What broke where for distributed and parallel applications — a whodunit story

    Detection, diagnosis and mitigation of performance problems in today's large-scale distributed and parallel systems is a difficult task. These systems are composed of various complex software and hardware components, and when a performance or correctness problem occurs, developers struggle to understand its root cause and fix it in a timely manner. In my thesis, I address these three components of performance problems in computer systems. First, we focus on diagnosing performance problems in large-scale parallel applications running on supercomputers, developing techniques to localize the performance problem for root-cause analysis. Parallel applications, most of which are complex scientific simulations running on supercomputers, can create up to millions of parallel tasks that run on different machines and communicate using the message-passing paradigm. We developed a highly scalable and accurate automated debugging tool called PRODOMETER, which first creates a logical progress-dependency graph of the tasks, highlighting how the problem spread through the system and manifested as a system-wide performance issue; then uses this graph to identify the task where the problem originated; and finally pinpoints the code region corresponding to the origin of the bug. Second, we developed a tool-chain that detects performance anomalies using machine-learning techniques with a very low false-positive rate. Our input-aware performance anomaly detection system consists of a scalable data-collection framework that gathers performance-related metrics from code regions at different granularities, an offline model-creation and prediction-error characterization technique, and a threshold-based anomaly-detection engine for production runs. Our system requires only a few training runs and can handle unknown inputs and parameter combinations by dynamically calibrating the anomaly-detection threshold according to the characteristics of the input data and of the models' prediction error. Third, we developed a performance-problem mitigation scheme for erasure-coded distributed storage systems. Repairing failed blocks in an erasure-coded distributed storage system takes a long time in network-constrained data centers, because during the repair operation a large amount of data from multiple nodes is gathered at a single node, where a mathematical operation reconstructs the missing part; this severely congests the links toward the destination where the newly recreated data is to be hosted. We proposed a novel distributed repair technique, called Partial-Parallel-Repair (PPR), that performs this reconstruction in parallel on multiple nodes, eliminating the network bottleneck and greatly speeding up the repair process. Fourth, we study how, for a class of applications, performance can be improved (or performance problems mitigated) by selectively approximating some of the computations. For many applications, the main computation happens inside a loop that can be logically divided into a few temporal segments, which we call phases. We found that while approximating the initial phases might severely degrade the quality of the results, approximating the computation in the later phases has very little impact on the final quality of the result. Based on this observation, we developed an optimization framework that, for a given quality-loss budget, finds the best approximation settings for each phase of the execution.
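
    A schematic of the PPR traffic pattern (a sketch of the idea only, not the thesis's implementation; plain XOR parity stands in for the Galois-field linear combinations used by real erasure codes) is shown below. The baseline ships all k surviving blocks to one destination, while PPR combines partial results pairwise, finishing in about log2(k) rounds with no single congested in-link:

        import numpy as np
        from functools import reduce

        def centralized_repair(blocks):
            """Baseline repair: every surviving block travels to one destination,
            so that node's in-link must carry all k blocks."""
            return reduce(np.bitwise_xor, blocks)

        def ppr_repair(blocks):
            """Partial-parallel-repair pattern: nodes combine blocks pairwise in
            ceil(log2 k) rounds, spreading the traffic over many links."""
            layer = list(blocks)
            while len(layer) > 1:
                nxt = [layer[i] ^ layer[i + 1] for i in range(0, len(layer) - 1, 2)]
                if len(layer) % 2:          # odd block carries over to the next round
                    nxt.append(layer[-1])
                layer = nxt
            return layer[0]

        # both strategies reconstruct the same lost block
        rng = np.random.default_rng(0)
        blocks = [rng.integers(0, 256, 1024, dtype=np.uint8) for _ in range(6)]
        assert np.array_equal(centralized_repair(blocks), ppr_repair(blocks))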

    Decoding quantum errors with the Gottesman-Kitaev-Preskill code

    Implementing quantum error correction has been difficult in practice. Techniques from engineering, computer science, coding theory, and experimental and theoretical physics have been blended together to tackle this problem. Traditionally, quantum error-correcting codes have mostly focused on phenomenological Pauli errors, primarily because of their theoretical convenience. But this approach neglects physical types of noise, which are more realistically captured by physically motivated noise models. This work focuses on a specific encoding in quantum computing called the Gottesman-Kitaev-Preskill (GKP) code. First, we study the basic properties and quantum estimation capabilities of a closely related state that is symmetric in phase space, called the grid sensor state, motivating the GKP code as a good candidate for physical-qubit-level error correction in quantum optics. Grid codes aim to correct errors before they build up to become Pauli errors. Then, we propose a quantum error correction protocol for continuous-variable, finite-energy, approximate GKP states undergoing small Gaussian random displacement errors, based on the scheme of Glancy and Knill [Phys. Rev. A 73, 012325 (2006)]. We show that combining multiple rounds of error-syndrome extraction with Bayesian estimation offers enhanced protection of GKP-encoded qubits over comparable single-round approaches. Furthermore, we show that the expected total displacement error incurred in multiple rounds of error followed by syndrome extraction is bounded by 2√π. Finally, we show that by recompiling the syndrome-extraction circuits, all the squeezing operations can be subsumed into auxiliary state preparation, reducing them to beamsplitter transformations and quadrature measurements.
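
    The core inference step behind such protocols can be illustrated with a minimal sketch (an assumption-laden toy, not the thesis's protocol): a GKP syndrome reveals a quadrature displacement only modulo √π, so under a Gaussian prior the decoder picks the most probable candidate among the shifts consistent with the measured syndrome:

        import numpy as np

        SQRT_PI = np.sqrt(np.pi)

        def map_displacement(syndrome, sigma, k_range=5):
            """MAP estimate of a displacement u ~ N(0, sigma^2) given only
            its value modulo sqrt(pi) (the GKP error syndrome)."""
            ks = np.arange(-k_range, k_range + 1)
            candidates = syndrome + ks * SQRT_PI        # shifts consistent with the syndrome
            log_post = -candidates**2 / (2 * sigma**2)  # Gaussian log-prior; likelihood is flat
            return candidates[np.argmax(log_post)]

        # round trip: draw a displacement, observe it mod sqrt(pi), estimate it
        sigma = 0.3
        u = np.random.default_rng(1).normal(0.0, sigma)
        syndrome = (u + SQRT_PI / 2) % SQRT_PI - SQRT_PI / 2   # centered modulo
        print(u, map_displacement(syndrome, sigma))

    Multiple syndrome rounds, as in the thesis, would combine such posteriors across rounds instead of committing to a single-round estimate.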

    OFDM techniques for multimedia data transmission

    Orthogonal Frequency Division Multiplexing (OFDM) is an efficient parallel data transmission scheme that has relatively recently become popular in both wired and wireless communication systems for the transmission of multimedia data. OFDM can be found at the core of well-known systems such as digital television and radio broadcasting, ADSL internet and wireless LANs. Research in the OFDM field continually examines different techniques for making this type of transmission more efficient. More recent works in this area have considered the benefits of using wavelet transforms in place of the Fourier transforms traditionally used in OFDM systems, and other works have looked at data compression as a method of increasing throughput in these transmission systems. The work presented in this thesis considers the transmission of image and video data in traditional OFDM transmission and discusses the strengths and weaknesses of this method. This thesis also proposes a new type of OFDM system that combines transmission and data compression into one block. By merging these two processes into one, the complexity of the system is reduced, promising to increase system efficiency. The results presented in this thesis show that the novel compressive OFDM method performs well in channels with a low signal-to-noise ratio. Comparisons with traditional OFDM plus lossy compression show a large improvement in the quality of the data received with the new system in these noisy channel environments. The results are particularly strong when transmitting image and video data with the new method; the highly correlated nature of images is ideal for effective transmission using the new technique. The new transmission technique proposed in this thesis also gives good results in terms of computation time. When compared to MATLAB simulations of a traditional DFT-based OFDM system with a separate compression block, the proposed transmission method reduced the computation time by between one-half and three-quarters. This decrease in computational complexity further contributes to the efficiency of the new method.
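
    For reference, the traditional DFT-based OFDM baseline that the thesis compares against can be sketched in a few lines (a minimal illustrative model; the subcarrier count, cyclic-prefix length and QPSK mapping below are arbitrary assumed choices, and the proposed compressive system is not reproduced here):

        import numpy as np

        N_SUB, CP = 64, 16          # subcarriers and cyclic-prefix length (assumed values)

        def ofdm_modulate(symbols):
            """Map one block of N_SUB complex symbols onto orthogonal subcarriers
            via an inverse FFT and prepend a cyclic prefix."""
            time = np.fft.ifft(symbols, N_SUB)
            return np.concatenate([time[-CP:], time])

        def ofdm_demodulate(rx):
            """Drop the cyclic prefix and recover the subcarrier symbols."""
            return np.fft.fft(rx[CP:], N_SUB)

        # round trip of QPSK symbols over an ideal channel
        rng = np.random.default_rng(0)
        bits = rng.integers(0, 2, size=(N_SUB, 2))
        qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
        assert np.allclose(ofdm_demodulate(ofdm_modulate(qpsk)), qpsk)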