
    Acta Cybernetica : Tomus 6. Fasciculus 4.


    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom


    A New Method for Efficient Parallel Solution of Large Linear Systems on a SIMD Processor.

    This dissertation proposes a new technique for efficient parallel solution of very large linear systems of equations on a SIMD processor. The model problem used to investigate both the efficiency and applicability of the technique had a regular structure with semi-bandwidth β and resulted from the approximation of a second-order, two-dimensional elliptic equation on a regular domain under Dirichlet and periodic boundary conditions. With only slight modifications, chiefly to account properly for the mathematical effects of varying bandwidths, the technique can be extended to the solution of any regular, banded system. The computational model used was the MasPar MP-X (model 1208B), a massively parallel processor hostnamed hurricane and housed in the Concurrent Computing Laboratory of the Physics/Astronomy department, Louisiana State University. The maximum bandwidth that made the problem's size fit the nyproc × nxproc machine array exactly was determined; this bandwidth, as well as smaller ones, was used in four experiments to evaluate the efficiency of the new technique. Four benchmark algorithms were used to test the efficiency of the new approach: two direct methods (Gauss elimination (GE) and orthogonal factorization) and two iterative methods (symmetric over-relaxation (SOR) with ω = 2 and the conjugate gradient method (CG)). Three evaluation metrics were used: deviations of the computed results from the exact solution, measured as average absolute errors; CPU times; and megaflop rates. All the benchmarks except GE were implemented in parallel. In all evaluation categories the new approach outperformed the benchmarks, and markedly so when N ≫ p, where p is the number of processors and N the problem size. At the maximum system size, the new method was about 2.19 times more accurate and about 1.7 times faster than the benchmarks. But when the system size was much smaller than the machine size, the new approach's performance deteriorated precipitously; in that circumstance it performed worse than GE, the serial code. Hence, this technique is recommended for the solution of linear systems with regular structures on array processors when the problem size is large relative to the processor size.
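
    The abstract does not spell out the new method itself, so the following is offered as orientation only: a minimal serial sketch of one of the iterative benchmarks it names, the conjugate gradient method, applied to the kind of banded system the model problem produces (a 5-point discretization of a 2-D elliptic equation with Dirichlet boundaries). The function names laplacian_2d and conjugate_gradient and the grid size are illustrative assumptions, not the dissertation's SIMD implementation.

```python
# Illustrative only: a serial NumPy sketch of one benchmark named in the
# abstract (conjugate gradient) on the banded system arising from a 5-point
# discretization of a 2-D elliptic equation with Dirichlet boundaries.
# All names are hypothetical; the dissertation's SIMD technique is not shown.
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """5-point Laplacian on an n x n grid (semi-bandwidth beta = n)."""
    main = 4.0 * np.ones(n * n)
    off1 = -1.0 * np.ones(n * n - 1)
    off1[np.arange(1, n * n) % n == 0] = 0.0   # no coupling across grid rows
    offn = -1.0 * np.ones(n * n - n)
    return sp.diags([offn, off1, main, off1, offn],
                    [-n, -1, 0, 1, n], format="csr")

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 64                       # n x n grid, so N = n**2 unknowns
A = laplacian_2d(n)
b = np.ones(n * n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm as a sanity check
```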

    Acta Cybernetica : Tomus 5. Fasciculus 1.


    Specification of Software Architecture Reconfiguration

    In the past few years, Software Architecture has attracted increased attention from academia and industry as the unifying concept to structure the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach is based on the work of other researchers and aims to change the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be executed efficiently. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
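
    As a purely illustrative aid to the notion of reconfiguration described above (adding and removing components and connections on a running architecture), the following toy Python sketch models an architecture as a set of components and connections with one reconfiguration rule. The names Architecture and replace_component are hypothetical and do not correspond to the CHAM, UNITY-like, or category-theoretic formalisms the thesis actually develops.

```python
# Illustrative only: a toy model of reconfiguration as "adding and removing
# components and connections". Names are hypothetical and do not reproduce
# any of the thesis's three formalisms.
from dataclasses import dataclass, field

@dataclass
class Architecture:
    components: set[str] = field(default_factory=set)
    connections: set[tuple[str, str]] = field(default_factory=set)

    def add_component(self, name: str) -> None:
        self.components.add(name)

    def remove_component(self, name: str) -> None:
        # Removing a component also removes every connection attached to it.
        self.components.discard(name)
        self.connections = {c for c in self.connections if name not in c}

    def connect(self, src: str, dst: str) -> None:
        if src in self.components and dst in self.components:
            self.connections.add((src, dst))

def replace_component(arch: Architecture, old: str, new: str) -> None:
    """A reconfiguration rule: swap 'old' for 'new', rewiring its connections."""
    wiring = [(s, d) for (s, d) in arch.connections if old in (s, d)]
    arch.add_component(new)
    arch.remove_component(old)
    for s, d in wiring:
        arch.connect(new if s == old else s, new if d == old else d)

arch = Architecture({"client", "server_v1"}, {("client", "server_v1")})
replace_component(arch, "server_v1", "server_v2")
print(arch.connections)   # {('client', 'server_v2')}
```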

    Acta Cybernetica : Tomus 4. Fasciculus 1.


    Quantum stabilizer codes and beyond

    The importance of quantum error correction in paving the way to build a practical quantum computer is no longer in doubt. This dissertation makes a threefold contribution to the mathematical theory of quantum error-correcting codes. Firstly, it extends the framework of an important class of quantum codes, nonbinary stabilizer codes. It clarifies the connections of stabilizer codes to classical codes over quadratic extension fields, provides many new constructions of quantum codes, and further develops the theory of optimal quantum codes and punctured quantum codes. Secondly, it contributes to the theory of operator quantum error-correcting codes, also known as subsystem codes. These codes are expected to have more efficient error recovery schemes than stabilizer codes. This dissertation develops a framework for the study and analysis of subsystem codes using character-theoretic methods. In particular, this work establishes a close link between subsystem codes and classical codes, showing that subsystem codes can be constructed from arbitrary classical codes. Thirdly, it seeks to exploit knowledge of the noise to design efficient quantum codes and considers more realistic channels than the commonly studied depolarizing channel. It gives systematic constructions of asymmetric quantum stabilizer codes that exploit the asymmetry of errors in certain quantum channels.
    Comment: Ph.D. Dissertation, Texas A&M University, 200
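
    For orientation only, the standard Hermitian construction behind the "connections of stabilizer codes to classical codes over quadratic extension fields" mentioned above can be stated as follows; the dissertation's own constructions generalize well beyond this statement.

```latex
% Standard Hermitian construction (background only; not the dissertation's
% more general results).
\[
  C \subseteq \mathbb{F}_{q^2}^{\,n} \text{ an } [n,k,d]_{q^2} \text{ code with }
  C^{\perp_h} \subseteq C
  \;\Longrightarrow\;
  \exists\ [[\,n,\; 2k-n,\; \ge d\,]]_q \text{ stabilizer code,}
\]
\[
  \text{where } C^{\perp_h} = \Bigl\{\, y \in \mathbb{F}_{q^2}^{\,n} :
  \textstyle\sum_i x_i\, y_i^{\,q} = 0 \ \text{for all } x \in C \,\Bigr\}.
\]
```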

    Simulation and control of stationary crossflow vortices

    Turbulent flow and transition are among the most important phenomena in fluid mechanics and aerodynamics and represent a challenging engineering problem for aircraft manufacturers looking to improve aerodynamic efficiency. Laminar flow technology has the potential to provide a significant reduction in aircraft drag by manipulating the instabilities within the laminar boundary layer to achieve a delay in transition to turbulence. Currently, prediction and simulation of laminar-turbulent transition are conducted using either a low-fidelity approach involving the stability equations or a full Direct Numerical Simulation (DNS). The work in this thesis uses an alternative high-fidelity simulation method that aims to bridge the gap between the two simulation streams. The methodology uses an LES approach with a low-computational-cost sub-grid scale model (WALE) that has the inherent ability to reduce its turbulent viscosity contribution to zero in laminar regions. With careful grid spacing, the laminar regions can be explicitly modelled as an unsteady Navier-Stokes simulation while the turbulent and transitional regions are simulated using LES. The methodology has been labelled an unsteady Navier-Stokes/Large Eddy Simulation (UNS/LES) approach. Two test cases were developed to test the applicability of the method to simulate and control the crossflow instability. The first test case replicated the setup of an experiment run at a chord-based Reynolds number of 390,000. Two methods were used to generate the initial disturbance for the crossflow vortices: first a continuous suction hole and second an isolated roughness element. The results for this test case showed that the approach was capable of modelling the full transition process, from explicitly capturing the growth of the initial disturbance amplitudes to the final breakdown to turbulence. Results matched well with the available experimental data. The second test case replicated an experimental setup using a custom-designed aerofoil run at a chord-based Reynolds number of 2.4 million. The test case used Distributed Roughness Elements (DRE) to induce crossflow vortices at both a critical and a control wavelength. By forcing the crossflow vortices at a stable (control) wavelength, a delay in laminar-turbulent transition can be achieved. The results showed that the UNS/LES approach was capable of capturing the initial disturbance amplitudes due to the roughness elements, and their growth rates matched well with experimental data. Finally, a downstream transitional region was assessed, with low freestream turbulence provided using a modified Synthetic Eddy Method (SEM). The full laminar-turbulent transition process was simulated and the results showed significant promise. In conclusion, the method employed in this thesis showed promising results and demonstrated a possible route to high-fidelity transition simulation at more realistic flow conditions and geometries than DNS. Further work and validation are required to test the secondary instability region and the final breakdown to turbulence.
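
    For reference, the standard WALE formulation (Nicoud and Ducros, 1999) alluded to above defines the eddy viscosity as shown below; constants and notation follow the original model and are not necessarily the exact form implemented in the thesis.

```latex
% Standard WALE eddy viscosity (reference form, not the thesis code).
\[
  \nu_t \;=\; (C_w \Delta)^2\,
  \frac{\bigl(S^d_{ij} S^d_{ij}\bigr)^{3/2}}
       {\bigl(\bar S_{ij}\bar S_{ij}\bigr)^{5/2}
        + \bigl(S^d_{ij} S^d_{ij}\bigr)^{5/4}},
  \qquad
  S^d_{ij} \;=\; \tfrac12\bigl(\bar g_{ij}^{\,2} + \bar g_{ji}^{\,2}\bigr)
  \;-\; \tfrac13\,\delta_{ij}\,\bar g_{kk}^{\,2},
\]
\[
  \text{with } \bar g_{ij} = \partial \bar u_i / \partial x_j,\qquad
  \bar g_{ij}^{\,2} = \bar g_{ik}\,\bar g_{kj}.
\]
% In laminar shear regions S^d_{ij} vanishes, so nu_t tends to zero; this is
% the property the abstract cites for treating laminar regions as unsteady
% Navier-Stokes while LES acts only in transitional and turbulent regions.
```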

    Quantum error control codes

    It is conjectured that quantum computers can solve certain problems more quickly than any deterministic or probabilistic computer. For instance, Shor's algorithm can factor large integers in polynomial time on a quantum computer. A quantum computer exploits the rules of quantum mechanics to speed up computations. However, building a quantum computer is a formidable task, since the quantum mechanical systems storing the information unavoidably interact with their environment. Therefore, one has to mitigate the resulting noise and decoherence effects to avoid computational errors. In this dissertation, I study various aspects of quantum error control codes, the key component of fault-tolerant quantum information processing. I present the fundamental theory and necessary background of quantum codes and construct many families of quantum block and convolutional codes over finite fields, in addition to families of subsystem codes. This dissertation is organized into three parts. Quantum Block Codes: After introducing the theory of quantum block codes, I establish conditions under which BCH codes are self-orthogonal (or dual-containing) with respect to the Euclidean and Hermitian inner products. In particular, I derive two families of nonbinary quantum BCH codes using the stabilizer formalism. I study duadic codes and establish the existence of families of degenerate quantum codes, as well as families of quantum codes derived from projective geometries. Subsystem Codes: Subsystem codes form a new class of quantum codes in which the underlying classical codes do not need to be self-orthogonal. I give an introduction to subsystem codes and present several methods for subsystem code constructions. I derive families of subsystem codes from classical BCH and RS codes and establish a family of optimal MDS subsystem codes. I establish propagation rules for subsystem codes and construct tables of upper and lower bounds on subsystem code parameters. Quantum Convolutional Codes: Quantum convolutional codes are particularly well suited for communication applications. I develop the theory of quantum convolutional codes and give families of quantum convolutional codes based on RS codes. Furthermore, I establish a bound on the code parameters of quantum convolutional codes, the generalized Singleton bound. I develop a general framework for deriving convolutional codes from block codes and use it to derive families of non-catastrophic quantum convolutional codes from BCH codes. The dissertation concludes with a discussion of some open problems.
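
    As background for the generalized Singleton bound mentioned above, the block-code quantum Singleton bound is recalled below; the convolutional generalization established in the dissertation is not restated here.

```latex
% Background only: the block-code quantum Singleton bound.
\[
  \text{Any } [[n,k,d]]_q \text{ quantum block code with } k \ge 1
  \text{ satisfies}\qquad k \;\le\; n - 2(d-1),
\]
\[
  \text{with equality characterizing the quantum MDS codes referenced above.}
\]
```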

    Advanced constellation and demapper schemes for next generation digital terrestrial television broadcasting systems

    206 p. This thesis presents a new type of constellations, called non-uniform constellations. These schemes offer gains of up to 1.8 dB over the constellations used in the latest digital terrestrial television systems and can be carried over to any other communication system (satellite, mobile, cable, etc.). In addition, this work contributes a new constellation design methodology that reduces optimization time from days/hours (with current methodologies) to hours/minutes with the same efficiency. All the designed constellations are tested on a platform created in this thesis that simulates the most advanced terrestrial broadcasting standard to date (ATSC 3.0) under realistic operating conditions. Furthermore, to reduce the decoding latency of these constellations, this thesis proposes two detection/demapping techniques. The first, for two-dimensional non-uniform constellations, reduces demapping complexity by up to 99.7% without degrading system performance. The second detection technique focuses on one-dimensional non-uniform constellations and achieves up to an 87.5% reduction in receiver complexity with no loss in performance. Finally, this work presents a comprehensive state of the art on constellation types, system models, and constellation design/demapping. This study is the first carried out in this field.
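
    As a purely illustrative sketch of the demapping operation whose complexity the thesis reduces, the following Python function computes brute-force max-log LLRs over an arbitrary (possibly non-uniform) constellation. The constellation, labels, and function name are toy assumptions; the thesis's reduced-complexity 1D/2D demapping techniques are not reproduced here.

```python
# Illustrative only: brute-force max-log LLR demapping for a 2-D
# constellation. Its cost grows with the number of constellation points,
# which is the complexity that reduced-complexity demappers attack.
import numpy as np

def maxlog_llr(y, points, labels, noise_var):
    """LLRs for one received symbol y given complex 'points' with bit 'labels'."""
    num_bits = labels.shape[1]
    metric = -np.abs(y - points) ** 2 / noise_var     # one metric per point
    llrs = np.empty(num_bits)
    for b in range(num_bits):
        llrs[b] = metric[labels[:, b] == 0].max() - metric[labels[:, b] == 1].max()
    return llrs

# Toy example: Gray-labelled QPSK (a uniform constellation, used only to
# exercise the function; an ATSC 3.0 NUC would supply different points).
points = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(maxlog_llr(0.9 + 0.8j, points, labels, noise_var=0.1))
```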