
    Building Blocks of Topological Quantum Chemistry: Elementary Band Representations

    The link between chemical orbitals described by local degrees of freedom and band theory, which is defined in momentum space, was proposed by Zak several decades ago for spinless systems with and without time-reversal in his theory of "elementary" band representations. In Nature 547, 298-305 (2017), we introduced the generalization of this theory to the experimentally relevant situation of spin-orbit coupled systems with time-reversal symmetry and proved that all bands that do not transform as band representations are topological. Here, we give the full details of this construction. We prove that elementary band representations are either connected as bands in the Brillouin zone and are described by localized Wannier orbitals respecting the symmetries of the lattice (including time-reversal when applicable), or, if disconnected, describe topological insulators. We then show how to generate a band representation from a particular Wyckoff position and determine which Wyckoff positions generate elementary band representations for all space groups. This theory applies to spinful and spinless systems, in all dimensions, with and without time-reversal. We introduce a homotopic notion of equivalence and show that it results in a finer classification of topological phases than approaches based only on the symmetry of wavefunctions at special points in the Brillouin zone. Utilizing a mapping of the band connectivity into a graph theory problem, which we introduced in Nature 547, 298-305 (2017), we show in companion papers which Wyckoff positions can generate disconnected elementary band representations, furnishing a natural avenue for a systematic materials search. Comment: 15+9 pages, 4 figures; v2: minor corrections; v3: updated references (published version).
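
    The mapping of band connectivity to a graph theory problem mentioned in this abstract can be pictured with a minimal sketch (not the authors' actual algorithm): treat (k-point, irrep) pairs as graph nodes, compatibility relations along the lines joining high-symmetry points as edges, and test whether the resulting graph is connected. All labels below are hypothetical placeholders.

```python
# Illustrative sketch only: viewing band connectivity as graph connectivity.
# Nodes stand for (k-point, irrep) pairs and edges for compatibility relations
# along the lines joining high-symmetry points; all labels are hypothetical
# placeholders, not data from the paper.
from collections import defaultdict, deque

def is_connected(nodes, edges):
    """Return True if the compatibility graph forms a single connected component."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen = {nodes[0]}
    queue = deque(seen)
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

# Hypothetical (k-point, irrep) labels for a two-band example; the two bands
# never touch, so the graph splits into two components and the check fails,
# mimicking a disconnected elementary band representation.
nodes = [("Gamma", "A1"), ("Gamma", "A2"), ("X", "B1"), ("X", "B2")]
edges = [(("Gamma", "A1"), ("X", "B1")), (("Gamma", "A2"), ("X", "B2"))]

print("connected:", is_connected(nodes, edges))  # prints "connected: False"
```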

    Evolutionary Computation in High Energy Physics

    Evolutionary Computation is a branch of computer science with which High Energy Physics has traditionally had few connections. Its methods have nevertheless been investigated in this field, mainly for data analysis tasks. Because these methods and studies are less well known in the high energy physics community, we were motivated to prepare this lecture. The lecture presents a general overview of the main types of algorithms based on Evolutionary Computation, as well as a review of their applications in High Energy Physics. Comment: Lecture presented at the 2006 Inverted CERN School of Computing; to be published in the school proceedings (CERN Yellow Report).
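
    As a rough illustration of the kind of algorithm the lecture surveys (not an example taken from the lecture itself), the sketch below evolves a one-dimensional selection cut with a toy genetic algorithm; the data, fitness measure, and parameters are invented for the illustration.

```python
# Toy genetic algorithm: evolve a selection cut that maximizes a simple
# signal-significance-like figure of merit. Data and fitness are invented.
import random

random.seed(0)
signal = [random.gauss(2.0, 1.0) for _ in range(1000)]      # hypothetical signal variable
background = [random.gauss(0.0, 1.0) for _ in range(1000)]  # hypothetical background

def fitness(cut):
    s = sum(1 for x in signal if x > cut)
    b = sum(1 for x in background if x > cut)
    return s / (s + b) ** 0.5 if s + b > 0 else 0.0

population = [random.uniform(-3, 5) for _ in range(20)]
for generation in range(50):
    # Keep the fitter half as parents, then refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [p + random.gauss(0, 0.2) for p in parents]
    population = parents + children

best = max(population, key=fitness)
print(f"best cut: {best:.2f}, fitness: {fitness(best):.2f}")
```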

    New Supersymmetric Localizations from Topological Gravity

    Supersymmetric field theories can be studied exactly on suitable off-shell supergravity backgrounds. We show that in two dimensions such backgrounds are identifiable with BRST invariant backgrounds of topological gravity coupled to an abelian topological gauge multiplet. This latter background is required for the consistent coupling of the topological "matter" YM theory to topological gravity. We make use of this topological point of view to obtain, in a simple and straightforward way, a complete classification of localizing supersymmetric backgrounds in two dimensions. The BRST invariant topological backgrounds are parametrized by both Killing vectors and S^1-equivariant cohomology of the 2-dimensional world-sheet. We reconstruct completely the supergravity backgrounds from the topological data: some of the supergravity fields are twisted versions of the topological backgrounds, but others are "composite", i.e. they are non-linear functionals of them. We recover all the known localizing 2-dimensional backgrounds and (infinitely) many more that have not been explored so far. We show that the supersymmetric Ω-deformation is nothing but the background value of the ghost-for-ghost of topological gravity, a result which holds for other dimensions too. The new localizing backgrounds are characterized by non-trivial fluxes for both the graviphotons of the supergravity multiplet. Comment: 45 pages, 2 figures, revised introduction, version published in JHEP.

    Generic singularities of symplectic and quasi-symplectic immersions

    For any k<2n we construct a complete system of invariants in the problem of classifying singularities of immersed k-dimensional submanifolds of a symplectic 2n-manifold at a generic double point. Comment: 12 pages.

    ERBlox: Combining Matching Dependencies with Machine Learning for Entity Resolution

    Entity resolution (ER), an important and common data cleaning problem, is about detecting duplicate representations of the same external entities in data and merging them into single representations. Relatively recently, declarative rules called "matching dependencies" (MDs) have been proposed for specifying similarity conditions under which attribute values in database records are merged. In this work we show the process and the benefits of integrating four components of ER: (a) building a classifier for duplicate/non-duplicate record pairs using machine learning (ML) techniques; (b) using MDs to support the blocking phase of ML; (c) merging records on the basis of the classifier results; and (d) using the declarative language "LogiQL" (an extended form of Datalog supported by the "LogicBlox" platform) for all activities related to data processing and for the specification and enforcement of MDs. Comment: Final journal version, with some minor technical corrections. Extended version of arXiv:1508.0601.
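
    A minimal, generic sketch of the pipeline described in this abstract is shown below: blocking, pairwise similarity features, and a classifier deciding duplicate versus non-duplicate pairs. It is written in plain Python rather than LogiQL, the records are hypothetical, and a simple similarity threshold stands in for the trained ML classifier.

```python
# Minimal sketch of an ER pipeline: blocking, pairwise features, and a
# classifier for duplicate/non-duplicate record pairs. This is a generic
# illustration, not the LogiQL/LogicBlox implementation described above.
from difflib import SequenceMatcher
from itertools import combinations

records = [  # hypothetical records
    {"id": 1, "name": "J. Smith",   "city": "Ottawa"},
    {"id": 2, "name": "John Smith", "city": "Ottawa"},
    {"id": 3, "name": "Jane Doe",   "city": "Toronto"},
]

def block_key(r):
    # Blocking: only compare records sharing a coarse key (here, the city),
    # playing the role that matching dependencies support in the paper.
    return r["city"]

def features(a, b):
    return [SequenceMatcher(None, a["name"], b["name"]).ratio()]

def classify(feats, threshold=0.6):
    # Stand-in for a trained ML classifier: a plain similarity threshold.
    return feats[0] >= threshold

duplicates = []
for a, b in combinations(records, 2):
    if block_key(a) == block_key(b) and classify(features(a, b)):
        duplicates.append((a["id"], b["id"]))

print("duplicate pairs:", duplicates)  # expected: [(1, 2)]
```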

    An Architecture-Altering and Training Methodology for Neural Logic Networks: Application in the Banking Sector

    Artificial neural networks have been widely acknowledged for their ability to construct forecasting and classification systems. Among their desirable features has always been the interpretability of their structure, which aims to provide further knowledge to domain experts, and a number of methodologies have been developed for this purpose. One such paradigm is the neural logic network concept. Neural logic networks are designed specifically so that their structure can be interpreted as a set of simple logical rules, and they can be seen as a network representation of a logical rule base. Although powerful by definition in this context, neural logic networks have performed poorly when used in approaches that require training from data. Standard training methods, such as back-propagation, require altering the network's synapse weights, which destroys the network's interpretability. The methodology in this paper overcomes these problems by proposing an architecture-altering technique, which produces highly competitive solutions while preserving any weight-related information. The implementation involves genetic programming with a grammar-guided training approach, in order to produce arbitrarily large and connected neural logic networks. The methodology is tested on a problem from the banking sector with encouraging results.
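
    The sketch below gives a rough flavour of grammar-guided rule construction in the spirit of this abstract: candidate logical rules are grown from a tiny grammar and scored on toy banking-style data. The grammar, data, and scoring are hypothetical stand-ins, and random generation plus selection replaces the full genetic-programming loop described in the paper.

```python
# Minimal sketch: grow boolean rules from a tiny grammar and keep the best one
# on toy banking-style data. All names and data are hypothetical stand-ins.
import random

random.seed(1)
FEATURES = ["has_loan", "high_income", "late_payments"]  # hypothetical attributes

def random_rule(depth=2):
    # Grammar: RULE -> FEATURE | (not RULE) | (RULE and RULE) | (RULE or RULE)
    if depth == 0 or random.random() < 0.4:
        return random.choice(FEATURES)
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_rule(depth - 1))
    return (op, random_rule(depth - 1), random_rule(depth - 1))

def evaluate(rule, row):
    if isinstance(rule, str):
        return row[rule]
    if rule[0] == "not":
        return not evaluate(rule[1], row)
    left, right = evaluate(rule[1], row), evaluate(rule[2], row)
    return (left and right) if rule[0] == "and" else (left or right)

# Toy labelled data: predict whether a customer defaults.
data = [({"has_loan": True,  "high_income": False, "late_payments": True},  True),
        ({"has_loan": True,  "high_income": True,  "late_payments": False}, False),
        ({"has_loan": False, "high_income": True,  "late_payments": False}, False)]

def fitness(rule):
    return sum(evaluate(rule, row) == label for row, label in data)

best = max((random_rule() for _ in range(200)), key=fitness)
print("best rule:", best, "accuracy:", fitness(best), "/", len(data))
```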