
    29th International Symposium on Algorithms and Computation: ISAAC 2018, December 16-19, 2018, Jiaoxi, Yilan, Taiwan


    Quantum computation: automata, games and complexity

    Advisor: Arnaldo Vieira Moura. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Since its inception, Theoretical Computer Science has dealt with models of computation in a primarily abstract and mathematical way. The notion of efficient computation was investigated using these models largely without seeking to understand the inherent capabilities and limitations of the actual physical world. In this regard, Quantum Computing represents a rupture with this paradigm. Rooted in the postulates of Quantum Mechanics, it attributes a precise physical meaning to computation, as far as our understanding of nature goes. These postulates give rise to fundamentally different properties, one of which, namely entanglement, is of central importance to computation and information-processing tasks. Entanglement captures a notion of correlation unique to quantum models. These quantum correlations can be stronger than any classical correlation and are thus at the heart of some quantum capabilities that exceed the classical. In this thesis, we investigate entanglement from the perspective of quantum computational complexity. More precisely, we study a well-known complexity class, defined in terms of proof verification, in which a verifier has access to multiple unentangled quantum proofs (QMA(k)). Assuming that the proofs exhibit no quantum correlations is a seemingly non-trivial hypothesis, potentially making this class larger than the one in which only a single proof is given. Nevertheless, finding tight complexity bounds for QMA(k) has remained a central open question in quantum complexity for over a decade. In this context, our contributions are threefold. First, we study closely related classes, showing how certain computational resources affect their power, in order to shed light on QMA(k) itself. Second, we establish a relationship between classical Probabilistically Checkable Proofs (PCPs) and QMA(k), which allows us to recover known results in a unified and simplified way, besides exposing the interplay between the two. Third, we show that some paths to settling this open question are obstructed by computational hardness. We then turn our attention to restricted models of quantum computation, more specifically quantum finite automata. A model known as the Two-way Quantum Classical Finite Automaton (2QCFA) is the main object of our inquiry; its study is intended to reveal the computational power provided by finite-dimensional quantum memory. We extend this automaton with the capability of placing a finite number of markers on the input tape, and show that, for any number of markers, this extension is more powerful than its classical deterministic and probabilistic analogues. Besides bringing advances to these two complementary lines of inquiry, this thesis also provides a broad exposition of both subjects: computational complexity and automata theory.
    Master's degree in Computer Science.
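    For reference, the class studied here admits a standard textbook-style definition (paraphrased, not quoted from the thesis): a promise problem $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ is in $\mathrm{QMA}(k)$ if there is a polynomial-time quantum verifier $V$ receiving $k$ polynomial-size quantum proofs promised to be in tensor product (i.e., unentangled), with

    $$x \in L_{\mathrm{yes}} \;\Rightarrow\; \exists\,|\psi_1\rangle,\dots,|\psi_k\rangle:\ \Pr\big[V \text{ accepts } (x,\, |\psi_1\rangle \otimes \cdots \otimes |\psi_k\rangle)\big] \ge \tfrac{2}{3},$$
    $$x \in L_{\mathrm{no}} \;\Rightarrow\; \forall\,|\psi_1\rangle,\dots,|\psi_k\rangle:\ \Pr\big[V \text{ accepts } (x,\, |\psi_1\rangle \otimes \cdots \otimes |\psi_k\rangle)\big] \le \tfrac{1}{3}.$$

    The tensor-product promise is exactly the hypothesis the abstract calls non-trivial: if the proofs were allowed to be entangled, the $k$ proofs could be merged into a single proof and the class would collapse to $\mathrm{QMA} = \mathrm{QMA}(1)$.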

    Parallel Multiscale Contact Dynamics for Rigid Non-spherical Bodies

    The simulation of large numbers of colliding rigid bodies with non-analytical shapes or vastly varying sizes is computationally challenging. The fundamental problem is the identification of all contact points between all particles at every time step. In the Discrete Element Method (DEM), this is particularly difficult for particles of arbitrary geometry that exhibit sharp features (e.g. rock granulates). While most codes avoid non-spherical or non-analytical shapes due to the computational complexity, we introduce an iterative contact detection method for triangulated geometries. The new method improves on a naive brute-force approach, which checks all possible geometric constellations of contact and thus exhibits heavy execution branching. Our iterative approach has limited branching and a high ratio of floating-point operations per processed byte, and is thus well suited to modern Single Instruction Multiple Data (SIMD) CPU hardware. As only the naive brute-force approach is robust and always yields a correct solution, we propose a hybrid that combines the best of both worlds to produce fast and robust contacts. Within the DEM workflow, we furthermore propose a multilevel tree-based data-structure strategy that holds all particles in the domain in grids on multiple scales. Grids reduce the total computational complexity of the simulation. The data structure is combined with the DEM phases to form a single-touch tree-based traversal that identifies contact points between particle pairs and introduces concurrency during particle comparisons in one multiscale grid sweep. Finally, a reluctant adaptivity variant is introduced, which enables an improved time-stepping scheme with larger time steps than standard adaptivity while still minimising the grid-administration overhead. Four different parallelisation strategies that exploit multicore architectures are discussed for this triad of methodological ingredients. Each parallelisation scheme exhibits unique behaviour depending on the grid and particle geometry at hand; fusing them into a task-based parallelisation workflow yields promising speedups. Our work shows that new computer architectures can push the boundary of DEM computability, but only if the right data structures and algorithms are chosen.
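    The grid idea at the heart of the abstract, binning particles into cells so that contact candidates are sought only among geometric neighbours instead of all O(N^2) pairs, can be sketched compactly. Below is a minimal single-scale Python illustration, not the authors' multiscale C++ code: spheres stand in as bounding volumes for the triangulated particles, and the function name and parameters are invented for illustration.

```python
# Minimal single-scale sketch of grid-based broad-phase contact detection.
# Assumptions: spheres as bounding volumes, cell_size >= one sphere diameter
# so the 3x3x3 neighbourhood suffices to find every overlapping pair.
import numpy as np
from collections import defaultdict

def broad_phase_pairs(centers, radius, cell_size):
    """Return candidate contact pairs via a uniform grid (one scale only)."""
    cells = np.floor(centers / cell_size).astype(int)
    grid = defaultdict(list)
    for i, c in enumerate(cells):
        grid[tuple(c)].append(i)
    pairs = set()
    # Compare each particle only against particles in its own cell and the
    # 26 surrounding cells, instead of against all N-1 others.
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    for i, c in enumerate(cells):
        for off in offsets:
            for j in grid.get(tuple(c + np.array(off)), []):
                if j > i and np.linalg.norm(centers[i] - centers[j]) < 2.0 * radius:
                    pairs.add((i, j))
    return pairs

# Example: 1000 unit spheres in a 50^3 box; cell size of one diameter.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 50.0, size=(1000, 3))
print(len(broad_phase_pairs(centers, radius=1.0, cell_size=2.0)))
```

    The multiscale variant described in the abstract additionally holds particles at several grid resolutions and fuses this sweep with the DEM force phases; the sketch only shows why grids cut the all-pairs cost.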

    Self Assembly Problems of Anisotropic Particles in Soft Matter.

    Anisotropic building blocks assembled from colloidal particles are attractive for self-assembled materials because their complex interactions can be exploited to drive self-assembly. In this dissertation we address the self-assembly of anisotropic particles from multiple novel computational and mathematical angles. First, we accelerate algorithms for modeling systems of anisotropic particles via massively parallel GPUs. We provide a scheme for generating statistically robust pseudo-random numbers that enables GPU acceleration of Brownian and dissipative particle dynamics, and we show how rigid-body integration can be accelerated on a GPU. Integrating these two algorithms into a GPU-accelerated molecular dynamics code (HOOMD-blue) makes a single GPU an ideal computing environment for modeling the self-assembly of anisotropic nanoparticles. Second, we introduce a new mathematical optimization problem, filling, a hybrid of the familiar shape packing and covering problems, which can be used to model shaped particles. We study the rich mathematical structure of the solution space and provide computational methods for finding optimal solutions for polygons and convex polyhedra. We present a sequence of isosymmetric optimal filling solutions for the Platonic solids, then consider the filling of a hyper-cone in dimensions two to eight and show that the solution remains scale-invariant but depends on dimension. Third, we study the impact of size variation (polydispersity) on the self-assembly of an anisotropic particle, the polymer-tethered nanosphere, into ordered phases. We show that the local nanoparticle packing motif, icosahedral or crystalline, determines the impact of polydispersity on the energy of the system and on phase transitions. We show how extensions of the Voronoi tessellation can be calculated and applied to characterize such micro-segregated phases; by applying a Voronoi tessellation, properties of the individual domains can be studied as a function of system properties such as temperature and concentration. Last, we consider the thermodynamically driven self-assembly of terminal clusters of particles. We predict that clusters related to spherical codes, a mathematical sequence of points, can be synthesized via self-assembly, and that these anisotropic clusters can be tuned to different anisotropies via the ratio of sphere diameters and temperature. The method suggests a rich new way of assembling anisotropic building blocks.
    Ph.D., Applied Physics and Scientific Computing. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91576/1/phillicl_1.pd
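    The first contribution, statistically robust pseudo-random numbers for massively parallel Brownian dynamics, hinges on producing independent noise without shared generator state. Here is a minimal sketch of that idea, under stated assumptions: NumPy's counter-based Philox generator stands in for the dissertation's GPU scheme, the dynamics are overdamped with unit friction, and all names and parameters are illustrative.

```python
# Minimal Brownian-dynamics step (Euler-Maruyama, unit friction) driven by a
# counter-based PRNG. Philox here is a stand-in for the GPU-oriented scheme
# the abstract alludes to; it is not the dissertation's generator.
import numpy as np

def brownian_step(x, forces, dt, kT, seed, step):
    """Advance particle positions x by one timestep of overdamped dynamics."""
    # Counter-based generator: the noise is a pure function of (seed, step),
    # so no generator state is carried between steps or shared across threads.
    rng = np.random.Generator(np.random.Philox(key=seed, counter=step))
    noise = rng.standard_normal(size=x.shape)
    return x + forces * dt + np.sqrt(2.0 * kT * dt) * noise

# Example: 512 free particles (zero force) diffusing in 3D.
x = np.zeros((512, 3))
for step in range(100):
    x = brownian_step(x, forces=np.zeros_like(x), dt=1e-3, kT=1.0,
                      seed=42, step=step)
print(x.std())  # per-coordinate spread grows like sqrt(2*kT*t)
```

    Because each step's noise is fully determined by (seed, step), every particle's kick can be computed independently and in any order, which is the property that makes such schemes suitable for massively parallel GPU execution.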

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    The Murray Ledger and Times, February 20, 1976


    Play Among Books

    How does coding change the way we think about architecture? Miro Roman and his AI Alice_ch3n81 develop a playful scenario in which they propose coding as the new literacy of information. They convey knowledge in the form of a project model that links the fields of architecture and information through two interwoven narrative strands in an “infinite flow” of real books.