    Tensor Network Methods for Quantum Phases

    The physics that emerges when large numbers of particles interact can be complex and exotic. The collective behaviour may not reflect that of the underlying constituents; for example, fermionic quasiparticles can emerge from models of interacting bosons. Due to this emergent complexity, many-body phenomena can be very challenging to study, but also very useful. A theoretical understanding of such systems is important for robust quantum information storage and processing. The emergent, macroscopic physics can be classified using the idea of a quantum phase. All models within a given phase exhibit similar low-energy emergent physics, which is distinct from that displayed by models in different phases. In this thesis, we utilise tensor networks to study many-body systems in a range of quantum phases. These include topologically ordered phases, gapless symmetry-protected phases, and symmetry-enriched topological phases.
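    As a minimal illustration of the tensor-network idea, the sketch below builds a matrix product state (MPS), one of the simplest tensor-network ansätze (Python/NumPy; the random tensors and eight-site chain are illustrative assumptions, not details from the thesis). Contracting the chain site by site evaluates a single amplitude of a many-body state without ever forming the exponentially large state vector.

```python
import numpy as np

# Minimal matrix product state (MPS): a chain of rank-3 tensors
# A[i] of shape (left_bond, physical, right_bond). Contracting the
# chain yields amplitudes of a many-body state without ever storing
# all 2**n_sites components explicitly.

def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
    rng = np.random.default_rng(seed)
    tensors = []
    for i in range(n_sites):
        left = 1 if i == 0 else bond_dim
        right = 1 if i == n_sites - 1 else bond_dim
        tensors.append(rng.standard_normal((left, phys_dim, right)))
    return tensors

def amplitude(mps, basis_state):
    """Contract the MPS against the product basis state |s1 s2 ... sn>."""
    env = np.ones((1, 1))
    for tensor, s in zip(mps, basis_state):
        env = env @ tensor[:, s, :]  # absorb one site at a time
    return env[0, 0]

mps = random_mps(n_sites=8)
print(amplitude(mps, [0, 1, 0, 0, 1, 1, 0, 1]))
```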

    Coding theory, information theory and cryptology : proceedings of the EIDMA winter meeting, Veldhoven, December 19-21, 1994


    Reduced order modeling of distillation systems

    The concept of distillation separation feasibility is investigated using reduced-order models. Three different models of non-equilibrium, rate-based packed distillation columns are developed, each at a progressively higher level of complexity. The final model is the most complex and is based on the Maxwell-Stefan theory of mass transfer. The first and second models are used as building blocks in the approach to the final model, as various simplifying assumptions are systematically relaxed. All models are developed using orthogonal collocation, whose order-reduction properties are well documented. A low-order model is desirable because the subsequent generation of the data required for assessing separation feasibility is fast.

    The first model is the simplest, as constant molar overflow is assumed; this assumption is relaxed in the subsequent models. The second and third models differ in their respective mass- and energy-transfer descriptions: the second uses a constant bulk-phase approximation for an overall gas-phase transfer coefficient, while the third uses rigorous Maxwell-Stefan mass transfer coefficients, which vary throughout the column. In all models, the bootstrap equation for the energy balance across the two-phase film is used after appropriate modifications based on the system assumptions.

    Starting-point solutions and minimum height and flows analyses are presented for all models. The first model is used to develop a methodology for identifying and characterizing azeotropic pinches. Different numerical techniques are also compared, and the accuracy of orthogonal collocation is verified. Ternary and pseudo McCabe-Thiele diagrams are used to represent the results for the multicomponent models 2 and 3. The results for models 2 and 3 are similar, as expected, since they differ only in their mass- and heat-transfer definitions. An argument is made for a specific definition of an objective function for models 2 and 3, which is subsequently used to generate separation surfaces. This function is defined such that a solution always exists, and for this reason it is deemed superior to the alternatives. Feasible regions are identified using a grid projection of the relevant sections of the separation surfaces. The data set contained within the feasible region will be used in an optimizer in future work.

    In general, this work involves understanding and applying collocation mathematics to distillation systems. A thorough understanding of distillation systems, the associated mathematics, and the degrees of freedom is essential. A large section of this work is devoted to explaining and manipulating the available degrees of freedom so that the desired end result, a feasible region for a specific separation, can be obtained. Other complicating factors include the use of the collocation boundary conditions and their relationship to the overall degrees of freedom of the system. In the literature, collocation is largely applied to staged columns, and the resulting feed-stage discontinuities are smoothed out using various interpolation routines. Both of these approaches are shown to be incorrect: the use of collocation in staged columns is fundamentally flawed, given the underlying theory of staged distillation and the implications of the collocation assumptions, and the feed discontinuities present in all the results are intrinsic features of the system and should be preserved.

    It is further concluded that models 2 and 3 are consistent with each other. Finally, it is shown that separation feasibility can be successfully determined using the optimal objective function, a success based on the accuracy and order reduction achieved through the use of collocation. Further work will involve optimizing the data found in the feasible region using non-linear programming.
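    As a rough illustration of the order reduction that orthogonal collocation provides, the sketch below (Python/NumPy; a toy linear boundary-value problem, not one of the distillation models) builds the standard Chebyshev collocation differentiation matrix and solves u'' = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0. Thirteen well-chosen nodes recover the exact solution sin(pi x) to high accuracy, where a finite-difference scheme would need far more cells.

```python
import numpy as np

# Orthogonal collocation in a nutshell: approximate the unknown profile
# by a polynomial through a few well-chosen nodes and enforce the
# governing equation only at those nodes.

def cheb(n):
    """Chebyshev collocation nodes on [-1, 1] and differentiation matrix."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal from the "negative sum" trick
    return D, x

n = 12
D, x = cheb(n)
t = (x + 1.0) / 2.0          # map nodes from [-1, 1] to [0, 1]
D2 = (2.0 * D) @ (2.0 * D)   # second-derivative matrix on [0, 1]

# Enforce the ODE at the interior nodes, boundary conditions at the ends.
A = D2.copy()
b = -np.pi ** 2 * np.sin(np.pi * t)
A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = 0.0    # u(1) = 0 (t[0] = 1)
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0   # u(0) = 0 (t[-1] = 0)

u = np.linalg.solve(A, b)
print("max error:", np.abs(u - np.sin(np.pi * t)).max())
```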

    Research on a non-destructive fluidic storage control device

    Fluidic memory device with associated fluidic alphanumeric display.

    Workshop Notes of the Seventh International Workshop "What can FCA do for Artificial Intelligence?"

    These are the proceedings of the seventh edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/), co-located with the IJCAI 2019 conference in Macao (China). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation, etc.), and how FCA can be extended to help AI researchers solve new and complex problems in their domains.
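    As a small, self-contained illustration of what FCA computes, the sketch below (Python; the animals-and-attributes context is a made-up toy example) enumerates all formal concepts of a binary object-attribute context by closing every attribute subset under the two derivation operators.

```python
from itertools import combinations

# A formal concept is a pair (A, B) with A' = B and B' = A, where
# ' denotes the derivation operators of the context. Toy data below.

objects = ["frog", "dog", "carp", "bream"]
attributes = ["aquatic", "terrestrial", "vertebrate"]
incidence = {
    "frog":  {"aquatic", "terrestrial", "vertebrate"},
    "dog":   {"terrestrial", "vertebrate"},
    "carp":  {"aquatic", "vertebrate"},
    "bream": {"aquatic", "vertebrate"},
}

def common_attributes(objs):
    """A' : attributes shared by every object in objs."""
    sets = [incidence[o] for o in objs] or [set(attributes)]
    return set.intersection(*sets)

def common_objects(attrs):
    """B' : objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= incidence[o]}

# Naive enumeration: close every attribute subset; duplicates collapse.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        extent = common_objects(set(attrs))
        intent = common_attributes(extent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```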

    Neonatal pain detection in videos using the iCOPEvid dataset and an ensemble of descriptors extracted from Gaussian of Local Descriptors

    Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that establishes a challenging baseline performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by the sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
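    The fusion scheme itself, one SVM per descriptor with scores combined by the sum rule, is easy to sketch. The code below (Python with scikit-learn) uses random stand-in feature matrices in place of the GOLD and Bag-of-Features descriptors, so it shows only the combination logic, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Sum-rule fusion: train one SVM per feature descriptor, then add the
# per-model probability scores. Features are random stand-ins, weakly
# correlated with the labels so the demo is not pure chance.

rng = np.random.default_rng(0)
n_train, n_test = 200, 40
y_train = rng.integers(0, 2, n_train)   # 0 = no-pain, 1 = pain
y_test = rng.integers(0, 2, n_test)

descriptor_dims = [64, 128, 32]         # one entry per descriptor

def fake_features(n, dim, labels):
    return rng.standard_normal((n, dim)) + 0.5 * labels[:, None]

score = np.zeros((n_test, 2))
for dim in descriptor_dims:
    X_tr = fake_features(n_train, dim, y_train)
    X_te = fake_features(n_test, dim, y_test)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_train)
    score += clf.predict_proba(X_te)    # sum rule: add per-model scores

y_pred = score.argmax(axis=1)
print("fused accuracy:", (y_pred == y_test).mean())
```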

    Compositional Distributional Semantics with Compact Closed Categories and Frobenius Algebras

    This thesis contributes to ongoing research related to the categorical compositional model for natural language of Coecke, Sadrzadeh and Clark in three ways. Firstly, I propose a concrete instantiation of the abstract framework based on Frobenius algebras (joint work with Sadrzadeh). The theory remedies shortcomings of previous proposals, extends the coverage of the language, and is supported by experimental work that improves existing results. The proposed framework describes a new class of compositional models that find intuitive interpretations for a number of linguistic phenomena. Secondly, I propose and evaluate in practice a new compositional methodology which explicitly deals with the different levels of lexical ambiguity (joint work with Pulman). A concrete algorithm is presented, based on the separation of vector disambiguation from composition in an explicit prior step. Extensive experimental work shows that the proposed methodology indeed results in more accurate composite representations, for the framework of Coecke et al. in particular and every other class of compositional models in general. As a last contribution, I formalize the explicit treatment of lexical ambiguity in the context of the categorical framework by resorting to categorical quantum mechanics (joint work with Coecke). In the proposed extension, the concept of a distributional vector is replaced with that of a density matrix, which compactly represents a probability distribution over the potential different meanings of the specific word. Composition takes the form of quantum measurements, leading to interesting analogies between quantum physics and linguistics. (Ph.D. dissertation, University of Oxford)
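    The density-matrix treatment of ambiguity lends itself to a small numerical sketch. The code below (Python/NumPy; the three-dimensional sense vectors for "bank" are invented for illustration) represents an ambiguous word as a mixture of pure sense projectors and updates it with a context, measurement-style, after which the weight of the contextually appropriate sense dominates.

```python
import numpy as np

# A word's meaning as a density matrix: a probability-weighted
# mixture of rank-1 projectors onto pure sense vectors.

def pure(v):
    v = np.asarray(v, dtype=float)
    v /= np.linalg.norm(v)
    return np.outer(v, v)                 # rank-1 density matrix |v><v|

finance_sense = pure([1.0, 0.2, 0.0])
river_sense = pure([0.0, 0.3, 1.0])

# "bank" in isolation: an even mixture of its two senses.
rho_bank = 0.5 * finance_sense + 0.5 * river_sense

def update(rho, context_vec):
    """Born-rule-style update: project onto the context, renormalise."""
    P = pure(context_vec)
    new = P @ rho @ P
    return new / np.trace(new)

rho_in_money_context = update(rho_bank, [1.0, 0.1, 0.0])

# Overlap with each sense shows the mixture collapsing to "finance".
print("finance weight:", np.trace(rho_in_money_context @ finance_sense))
print("river weight:", np.trace(rho_in_money_context @ river_sense))
```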