On the topology of network fine structures
Multi-relational dynamics are ubiquitous in complex systems such as transportation, social, and biological networks. This thesis studies two mathematical objects that encapsulate these relationships: multiplexes and interval graphs. The former is the modern approach in Network Science to generalizing the edges of graphs, while the latter was popularized in Graph Theory during the 1960s.
Although multiplexes and interval graphs are nearly 50 years apart, their motivations are similar, and it is worthwhile to investigate their structural connections and properties. This thesis examines these mathematical objects and presents their connections.
For example, we examine community structures in multiplexes and show how unstable the detection algorithms are, which can lead researchers to wrong conclusions. It is therefore important to make the formalism precise, and this thesis shows that the complexity of interval graphs is an indicator of this precision. Because computing this measure of complexity is a computationally hard problem in Graph Theory, we use a heuristic strategy from Network Science to tackle it.
One of the main contributions of this thesis is the compilation of the disparate literature on these mathematical objects. The novelty of this contribution lies in using statistical tools from population biology to estimate the completeness of the thesis's bibliography. The approach can also serve as a framework for researchers to quantify the comprehensiveness of their own preliminary investigations.
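The abstract does not name the specific population-biology tool used; one standard candidate is capture-recapture estimation. As a hedged sketch under that assumption, the Chapman estimator below treats two independent literature searches as two "capture" events and estimates the total number of relevant works, and hence the coverage of a bibliography. All the search counts are invented for illustration.

```python
# Capture-recapture (Chapman) estimate of bibliography completeness.
# Two independent literature searches play the role of two capture
# events; works found by both searches are the "recaptures".

def chapman_estimate(n1, n2, m):
    """Bias-corrected estimate of the total population size.

    n1 -- number of works found by search 1
    n2 -- number of works found by search 2
    m  -- number of works found by both searches
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical searches: 120 and 150 works, 90 found by both.
n1, n2, m = 120, 150, 90
total_estimate = chapman_estimate(n1, n2, m)   # estimated relevant works
coverage = (n1 + n2 - m) / total_estimate      # fraction actually found
```

A large overlap between the two searches suggests the combined bibliography is close to complete; a small overlap suggests many relevant works remain unfound.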
From the large body of multiplex research, this thesis focuses on the statistical properties of the projection of multiplexes (the reduction of a multi-relational system to a single-relationship network). Projection matters because it is commonly used as the baseline for many relevant algorithms, and its topology offers insight into the dynamics of the system.
Unitary transformations for quantum computing
The last two decades have seen an enormous increase in the computational power of digital computers, owing to rapid technical development in manufacturing processes and in controlling semiconducting structures at the submicron scale. Concurrently, electric circuits have encountered the first signs of the realm of quantum mechanics. These effects may induce noise and are thus typically considered harmful. However, the manipulation of coherent quantum states might turn out to be the basis of powerful computers: quantum computers. There, the computation is encoded into the unitary time evolution of a quantum mechanical state vector. Eventually, quantum mechanics could enable one, for example, to read secret electronic messages encrypted with the widely employed RSA cryptosystem, a task which is extremely laborious for current digital computers.
This thesis presents a theoretical study of the coherent manipulation of pure quantum states in a quantum register, that is, quantum algorithms. Implementing a quantum algorithm involves initializing the input state, manipulating it with quantum gates, and finally measuring the result. The physical implementation of each gate requires that it be decomposed into low-level gates whose physical realizations are explicitly known. Here, the problem is examined from two directions. First, a numerical optimization scheme for controlling the time evolution of a closed quantum system is discussed. This yields a method for implementing quantum gates acting on up to three quantum bits, or qubits. The approach is independent of the physical realization of the quantum computer, but it is considered explicitly for a proposed inductively coupled Josephson charge qubit register. Second, techniques of numerical matrix computation are utilized to find a general method for decomposing an arbitrary n-qubit gate into a sequence of elementary gates acting on one or two qubits.
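As a toy illustration of gate decomposition (not the decomposition method developed in the thesis), the sketch below verifies numerically that the two-qubit CNOT gate factors into a product of elementary gates: a controlled-Z conjugated by Hadamards on the target qubit.

```python
import numpy as np

# Decomposition identity:  CNOT = (I (x) H) . CZ . (I (x) H)
# where H Z H = X turns the controlled-Z into a controlled-X (CNOT).

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CZ = np.diag([1.0, 1.0, 1.0, -1.0])            # controlled-Z gate

# Target: CNOT with the first qubit as control (flips |10> <-> |11>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

decomposed = np.kron(I, H) @ CZ @ np.kron(I, H)
```

The general n-qubit case studied in the thesis is of course far harder; this only shows what "a sequence of elementary one- and two-qubit gates" means in the smallest non-trivial instance.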
The results of this thesis help to improve the implementation of quantum algorithms. The quantum circuit construction developed in the thesis is the first to achieve the asymptotically minimal complexity in the number of elementary gates. In the context of accelerating quantum algorithms, we present a gate-level study of Shor's algorithm and show how to accelerate it by merging several elementary gates into multiqubit gates. Finally, the requirements set by the resulting gate array are compared to the properties of superconducting qubits. This allows us to discuss the feasibility of the Josephson charge qubit register, for instance, as hardware for breaking the RSA cryptosystem.
Graph-based techniques for compression and reconstruction of sparse sources
The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the compression schemes considered share a common feature: the encoder can be represented by a graph, so they can be studied with tools from modern coding theory.
In particular, this thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two may seem unrelated, the thesis shows that they are closely connected. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes based on the OR operator. This thesis exploits the similarities between these problems.
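The correspondence with OR-based compression can be made concrete with a minimal sketch (the pool design and source vector below are invented for illustration): each non-adaptive test outcome is the OR of the source bits it pools, i.e. a non-linear binary compression of the defect-indicator vector.

```python
# Group testing as OR-based non-linear binary compression: each test
# output is the OR of the pooled entries of the binary source vector.
x = [0, 1, 0, 0, 1, 0]                            # defect indicators, 6 subjects
pools = [[0, 1, 2], [2, 3], [3, 4, 5], [0, 5]]    # subjects pooled in each test
y = [int(any(x[i] for i in pool)) for pool in pools]
# Four test outcomes now stand in for six source bits.
```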
The group testing problem aims to identify the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two classes: adaptive and non-adaptive. Adaptive schemes generate tests sequentially and exploit partial decoding results to try to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible.
Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme designed to perform the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large. These tools make it possible to characterize adaptive and non-adaptive group testing schemes without simulating them.
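The thesis's own adaptive scheme is not reproduced here; as a point of reference, the classic adaptive binary-splitting strategy sketched below illustrates how sequential tests exploit earlier outcomes to drive the total test count well below one test per subject.

```python
# Adaptive group testing by binary splitting: test a pool, and if it
# is positive, recurse on its halves until defectives are isolated.

def pooled_test(group, defectives):
    """A pooled test: positive iff the group contains a defective."""
    return any(s in defectives for s in group)

def binary_split(group, defectives, found, counter):
    counter[0] += 1                      # one pooled test performed
    if not pooled_test(group, defectives):
        return                           # whole pool is clean
    if len(group) == 1:
        found.add(group[0])              # defective isolated
        return
    mid = len(group) // 2
    binary_split(group[:mid], defectives, found, counter)
    binary_split(group[mid:], defectives, found, counter)

population = list(range(64))
defectives = {5, 42}                     # unknown to the tester
found, counter = set(), [0]
binary_split(population, defectives, found, counter)
# Both defectives are isolated in 23 pooled tests instead of 64
# individual ones.
```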
The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection onto a lower-dimensional space. This is possible only when the number of zero components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that manage to reconstruct the original signal vector from as few samples as possible.
In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation. Recent results in the literature show that this approach is more efficient than the classical one.
Our contributions to noiseless compressed sensing are both theoretical and practical. We derive a necessary and sufficient condition on the measurement matrix design to guarantee lossless reconstruction. Regarding the design of practical schemes, we propose two novel reconstruction algorithms based on message passing over the sparse graph representation of the matrix, one of them with very low computational complexity.
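The message-passing decoders proposed in the thesis are not reproduced here; as a hedged stand-in, the sketch below recovers a sparse vector from noiseless random projections using orthogonal matching pursuit, a simpler greedy reconstruction algorithm. The dimensions and the Gaussian measurement matrix are invented for illustration.

```python
import numpy as np

# Noiseless compressed sensing: recover a k-sparse x from y = A @ x
# with m < n measurements, here via orthogonal matching pursuit (OMP).
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                      # dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [3.0, -2.0, 1.5]
y = A @ x                                # 32 samples stand in for 64 entries

def omp(A, y, k):
    """Greedily add the column best correlated with the residual."""
    support = []
    residual = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

With a random Gaussian matrix and this comfortable measurement budget, the greedy decoder recovers the signal exactly; the message-passing algorithms of the thesis target the same problem at lower complexity and on sparse matrices.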
Continuous-time quantum computing
Quantum computation using continuous-time evolution under a natural hardware Hamiltonian is a promising near- and mid-term direction toward powerful quantum computing hardware.
Continuous-time quantum computing (CTQC) encompasses continuous-time quantum walk computing (QW), adiabatic quantum computing (AQC), and quantum annealing (QA), as well as other strategies which contain elements of these three.
While much of current quantum computing research focuses on the discrete-time gate model, which has an appealing similarity to the discrete logic of classical computation, the continuous nature of quantum information suggests that continuous-time quantum information processing is worth exploring.
A versatile context for CTQC is the transverse Ising model, and this thesis will explore the application of Ising model CTQC to classical optimization problems.
Classical optimization problems have industrial and scientific significance, including in logistics, scheduling, medicine, cryptography, hydrology and many other areas.
This practical relevance, along with the fact that such problems often have straightforward, natural mappings onto the interactions of readily available Ising model hardware, makes classical optimization a fruitful target for CTQC algorithms.
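To make such a mapping concrete, here is a minimal sketch (the graph is invented for illustration) encoding Max-Cut into an Ising Hamiltonian: each vertex carries a spin s_i in {-1, +1}, and minimizing the energy H(s) = sum over edges (i, j) of s_i * s_j maximizes the number of cut edges.

```python
import itertools

# Max-Cut as an Ising ground-state problem on a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def ising_energy(spins, edges):
    """Ising Hamiltonian with unit antiferromagnetic couplings."""
    return sum(spins[i] * spins[j] for i, j in edges)

def cut_size(spins, edges):
    """Edges whose endpoints lie on opposite sides of the partition."""
    return sum(1 for i, j in edges if spins[i] != spins[j])

# Brute-force the ground state over all 2^4 spin configurations.
best = min(itertools.product([-1, 1], repeat=4),
           key=lambda s: ising_energy(s, edges))
```

The brute force stands in for the continuous-time quantum evolution: CTQC hardware would instead reach (or approximate) the same ground state by evolving under the corresponding Ising Hamiltonian plus a transverse field.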
After introducing and explaining the CTQC framework in detail, in this thesis I will, through a combination of numerical, analytical, and experimental work, examine the performance of various forms of CTQC on a number of different optimization problems, and investigate the underlying physical mechanisms by which they operate.