4 research outputs found

    Parallel Real-Time Computation: Sometimes Quantity Means Quality

    The primary purpose of parallel computation is the fast execution of computational tasks that require an inordinate amount of time to perform sequentially. As a consequence, interest in parallel computation to date has naturally focused on the speedup provided by parallel algorithms over their sequential counterparts. The thesis of this paper is that a second, equally important motivation for using parallel computers exists. Specifically, the following question is posed: Can parallel computers, thanks to their multiple processors, do more than simply speed up the solution to a problem? We show that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, `better' means `closer to optimal' for optimization problems, `more secure' for cryptographic problems, and `more accurate' for numerical problems. Examples from these classes are presented. In each case, the solution obtained in parallel is significantly, provably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not to demonstrate merely that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further. It is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup --- a phenomenon itself originally thought to be impossible.
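    As a loose illustration of the real-time argument (a toy sketch, not a construction from the paper), suppose input samples keep arriving until a hard deadline and each processor can examine only a fixed number of them before that deadline; an answer computed from more samples is then more accurate, so quality grows with the number of processors. All names and parameters below are hypothetical:

        import random

        def realtime_estimate(stream, processors, budget_per_processor):
            """Estimate the mean of `stream` when, before the deadline, the
            machine can examine only processors * budget_per_processor items."""
            k = min(len(stream), processors * budget_per_processor)
            sample = random.sample(stream, k)
            return sum(sample) / len(sample)

        random.seed(0)
        stream = [random.gauss(100.0, 15.0) for _ in range(100_000)]
        true_mean = sum(stream) / len(stream)

        # More processors -> more of the input inspected before the deadline -> smaller error.
        for p in (1, 4, 16, 64):
            estimate = realtime_estimate(stream, processors=p, budget_per_processor=200)
            print(f"{p:3d} processors: error {abs(estimate - true_mean):.4f}")

    The paper's actual examples are stronger than this statistical toy: there the parallel solution is provably and consistently better, not merely better on average.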

    The role of supersymmetry in the black hole/qubit correspondence

    This thesis explores the numerous relationships between the entropy of black hole solutions in supergravity and the entanglement of multipartite systems in quantum information theory: the so-called black hole/qubit correspondence. We examine how, through the correspondence, the dyonic charges in the entropy of supersymmetric black hole solutions are directly matched to the state vector coefficients in the entanglement measures of their quantum information analogues. Moreover, the U-duality invariance of the black hole entropy translates to the stochastic local operations and classical communication (SLOCC) invariance of the entanglement measures. Several examples are discussed, with the correspondence broadening when the supersymmetric classification of black holes is shown to match the entanglement classification of the qubit/qutrit analogues. On the microscopic front, we study the interpretation of D-brane wrapping configurations as real qubits/qutrits, including the matching of generating solutions on the black hole and qubit sides. Tentative generalisations to other dimensions and qubit systems are considered. This is almost eclipsed by more recent developments linking the nilpotent U-duality orbit classification of black holes to the nilpotent classification of complex qubits. We provide preliminary results on the corresponding covariant classification. We explore the interesting parallel development of supersymmetric generalisations of qubits and entanglement, complete with two- and three-superqubit entanglement measures. Lastly, we briefly mention the supergravity technology of cubic Jordan algebras and Freudenthal triple systems (FTS), which are used to: 1) relate FTS ranks to three-qubit entanglement and compute SLOCC orbits; 2) define new black hole dualities distinct from U-duality and related by a 4D/5D lift; 3) clarify the state of knowledge of integral U-duality orbits in maximally extended supergravity in four, five, and six dimensions.
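    A concrete entry in this dictionary is the three-qubit entanglement measure built from Cayley's hyperdeterminant of the 2x2x2 array of state-vector coefficients: the 3-tangle is 4|Det a|, and in the prototypical STU example of the correspondence the black hole entropy is matched to pi*sqrt(|Det a|), with the GHZ and W SLOCC classes corresponding to large and small black holes. The sketch below (illustrative only, not code from the thesis) evaluates the hyperdeterminant for the GHZ and W states:

        import numpy as np

        def cayley_hyperdeterminant(a):
            """Cayley's hyperdeterminant of a 2x2x2 array a[i,j,k] of
            three-qubit amplitudes; the 3-tangle is 4*|Det(a)|."""
            d = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
                 + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
            d -= 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                      + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                      + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1]
                      + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                      + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0]
                      + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
            d += 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                      + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1])
            return d

        # GHZ state (|000> + |111>)/sqrt(2): nonzero hyperdeterminant (GHZ class).
        ghz = np.zeros((2, 2, 2))
        ghz[0,0,0] = ghz[1,1,1] = 1 / np.sqrt(2)

        # W state (|001> + |010> + |100>)/sqrt(3): vanishing hyperdeterminant (W class).
        w = np.zeros((2, 2, 2))
        w[0,0,1] = w[0,1,0] = w[1,0,0] = 1 / np.sqrt(3)

        print("GHZ:", cayley_hyperdeterminant(ghz))  # |Det| = 0.25 -> 3-tangle 1
        print("W:  ", cayley_hyperdeterminant(w))    # 0.0          -> 3-tangle 0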

    Procesamiento paralelo: Balance de carga dinámico en algoritmo de sorting (Parallel processing: dynamic load balancing in a sorting algorithm)

    Some sorting techniques attempt to balance the load through an initial sampling of the data to be sorted and a distribution of the data according to pivots. Others redistribute partially sorted lists so that each processor stores an approximately equal number of keys and all processors take part in the merge process during execution. This thesis presents a new method that balances the load dynamically, based on a different approach: it seeks to distribute the work using an estimator that predicts the pending workload. The proposed method is a variant of parallel Sorting by Merging, that is, a comparison-based technique. Within each block, the data are ordered with Bubble Sort with a sentinel. In this case, the work to be performed, in terms of comparisons and exchanges, is affected by the degree of disorder of the data. The evolution of the amount of work in each iteration of the algorithm was studied for different types of input sequences (n items with values from 1 to n without repetition, and random data with a normal distribution), and it was observed that the work decreases in each iteration. This was used to obtain an estimate of the expected remaining work from a given iteration onward, and to rely on that estimate to correct the load distribution.
    Reviewed by: http://sedici.unlp.edu.ar/handle/10915/9500 (Facultad de Ciencias Exactas)
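    To make the workload observation concrete, here is a minimal sketch (not the thesis's implementation; "sentinel" is read as the usual early-exit flag) that sorts one block with Bubble Sort and records the comparisons and swaps of each pass. On random input the swap count shrinks from pass to pass, which is the behaviour a remaining-work estimator can rely on to correct the load distribution:

        import random

        def bubble_sort_with_sentinel(block):
            """Bubble Sort with a sentinel flag: stop as soon as a full pass
            performs no swaps. Returns per-pass (comparisons, swaps) counts,
            the kind of measurement a remaining-work estimator is built on."""
            history = []
            n = len(block)
            for p in range(n - 1):
                comparisons = swaps = 0
                swapped = False                # the "sentinel"
                for i in range(n - 1 - p):
                    comparisons += 1
                    if block[i] > block[i + 1]:
                        block[i], block[i + 1] = block[i + 1], block[i]
                        swaps += 1
                        swapped = True
                history.append((comparisons, swaps))
                if not swapped:                # block already ordered: stop early
                    break
            return history

        random.seed(1)
        data = random.sample(range(1, 17), 16)   # values 1..n without repetition
        for it, (c, s) in enumerate(bubble_sort_with_sentinel(data), 1):
            print(f"pass {it:2d}: {c:2d} comparisons, {s:2d} swaps")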