
    Polynomial growth of concept lattices, canonical bases and generators: extremal set theory in Formal Concept Analysis

    We prove that there exist three distinct, comprehensive classes of (formal) contexts with polynomially many concepts, namely: contexts which are nowhere dense, of bounded breadth, or highly convex. The notion of breadth of a lattice is already present in G. Birkhoff's classic monograph; it equals the number of atoms of a largest Boolean suborder. Even though it is natural to define the breadth of a context as that of its concept lattice, this idea had not been exploited before. We do so and establish many equivalences. Amongst them, it is shown that the breadth of a context equals the size of its largest minimal generator, the size of its largest contranominal-scale subcontext, and the Vapnik-Chervonenkis dimension of both its system of extents and its system of intents. The polynomiality of the aforementioned classes is proven via upper bounds (also known as majorants) for the number of maximal bipartite cliques in bipartite graphs, results obtained by various authors over the last decades. The fact that they yield statements about formal contexts is a reward for investigating how two established fields interact, specifically Formal Concept Analysis (FCA) and graph theory. We considerably improve the breadth bound. This improvement is twofold: besides giving a much tighter expression, we prove that it limits the number of minimal generators. This is strictly more general than upper-bounding the number of concepts: it automatically implies a bound on these, as well as on the number of proper premises. A corollary is that this improved result also bounds the number of implications in the canonical basis. With respect to the number of concepts, this sharper majorant is shown to be best possible. This fact is established by constructing contexts whose concept lattices exhibit exactly that many elements. These structures are termed, respectively, extremal contexts and extremal lattices.
The usual procedure of taking the standard context allows one to work interchangeably with either of these two extremal structures. Extremal lattices are equivalently defined as finite lattices with as many elements as possible, subject to two upper limits: one on their number of join-irreducibles, the other on their breadth. Subsequently, these structures are characterized in two ways. Our first characterization takes the lattice perspective. Initially, we construct extremal lattices by the iterated operation of finding smaller extremal subsemilattices and duplicating their elements. Then, it is shown that every extremal lattice must be obtained through a recursive application of this construction principle. A byproduct of this contribution is that extremal lattices are always meet-distributive. Although this approach is revealing, relevant combinatorial questions in its vicinity remain unanswered. Most notably, the number of meet-irreducibles of extremal lattices escapes control when this construction is conducted. Aiming to get a grip on the number of meet-irreducibles, we prove an alternative characterization of these structures. This second approach is based on implication logic and exposes an interesting link between the numbers of proper premises, pseudo-extents and concepts. A guiding idea in this scenario is to use implications to construct lattices. It turns out that constructing extremal structures with this method is simpler, in the sense that a recursive application of the construction principle is not needed. Moreover, we obtain with ease a general, explicit formula for the Whitney numbers of extremal lattices, which reveals that they are unimodal as well. Like the first, this second construction method is shown to be characteristic.
A particular case of the construction is able to force, with precision, a large number of meet-irreducibles (in the sense of "exponentially many"). Such an occasional explosion of meet-irreducibles motivates a generalization of the notion of extremal lattices, obtained by considering a more refined partition of the class of all finite lattices. In this finer-grained setting, each extremal class consists of lattices with bounded breadth, number of join-irreducibles, and number of meet-irreducibles. The generalized problem of finding the maximum number of concepts reveals itself to be challenging. Instead of attempting to classify these structures completely, we pose questions inspired by Turán's seminal result in extremal combinatorics. Most prominently: do extremal lattices (in this more general sense) have the maximum permitted breadth? We show a general statement in this setting: for every choice of limits (breadth, number of join-irreducibles and meet-irreducibles), we produce some extremal lattice with the maximum permitted breadth. The tools which underpin all the intuitions in this scenario are hypergraphs and exact set covers. In a rather unexpected but interesting turn of events, we obtain for free a simple and interesting theorem about the general existence of "rich" subcontexts. Precisely: every context contains an object/attribute pair which, when removed, results in a context with at least half the original number of concepts.
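To make the counting objects concrete, here is a minimal, hypothetical Python sketch (toy data, not from the paper) that enumerates the formal concepts of a small context via the two derivation operators. The objects o1, o2, o3 form a contranominal-scale subcontext (each misses exactly one attribute), which forces a Boolean concept lattice of breadth 3:

```python
from itertools import combinations

# Hypothetical toy context; names are illustrative only.
objects = ["o1", "o2", "o3", "o4"]
attributes = ["a", "b", "c"]
incidence = {
    ("o1", "a"), ("o1", "b"),
    ("o2", "b"), ("o2", "c"),
    ("o3", "a"), ("o3", "c"),
    ("o4", "a"), ("o4", "b"), ("o4", "c"),
}

def intent(objs):
    """Attributes shared by every object in objs (a derivation operator)."""
    return frozenset(a for a in attributes
                     if all((o, a) in incidence for o in objs))

def extent(attrs):
    """Objects possessing every attribute in attrs (the dual operator)."""
    return frozenset(o for o in objects
                     if all((o, a) in incidence for a in attrs))

# A formal concept is a pair (A, B) with A' = B and B' = A; brute force
# over object subsets is enough at toy scale.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        B = intent(objs)
        concepts.add((extent(B), B))

print(len(concepts))  # -> 8, a Boolean lattice of breadth 3
```

The contranominal scale on three objects alone already yields 2^3 = 8 concepts, illustrating why the size of the largest contranominal-scale subcontext governs the growth of the concept lattice.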

    Identifying Non-Sublattice Equivalence Classes Induced by an Attribute Reduction in FCA

    The detection of redundant or irrelevant variables (attributes) in datasets is essential in different frameworks, such as Formal Concept Analysis (FCA). However, removing such variables can have some impact on the concept lattice, which is closely related to the algebraic structure of the obtained quotient set and its classes. This paper studies the algebraic structure of the induced equivalence classes and characterizes those classes that are convex sublattices of the original concept lattice. Particular attention is given to reductions that remove FCA's unnecessary attributes. The obtained results will be useful to other complementary reduction techniques, such as the recently introduced procedure based on local congruences.
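The induced equivalence can be made concrete with a small, hypothetical sketch (toy data, not from the paper): after removing an attribute, each original concept is sent to the reduced concept generated by its extent, and the fibers of this map are the induced equivalence classes whose algebraic structure the paper studies:

```python
from itertools import combinations

# Toy context (illustrative names): o1:{a,b}, o2:{b,c}, o3:{c}.
objects = ["o1", "o2", "o3"]
incidence = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "c")}

def intent(objs, attrs):
    return frozenset(a for a in attrs
                     if all((o, a) in incidence for o in objs))

def extent(attr_set):
    return frozenset(o for o in objects
                     if all((o, a) in incidence for a in attr_set))

def concepts(attrs):
    """All formal concepts of the context restricted to the given attributes."""
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            B = intent(objs, attrs)
            found.add((extent(B), B))
    return found

full = concepts(["a", "b", "c"])
reduced_attrs = ["a", "c"]          # attribute reduction: drop "b"

# Group original concepts (A, B) by the closure of A in the reduced context;
# each group is one induced equivalence class.
classes = {}
for A, B in full:
    key = extent(intent(A, reduced_attrs))
    classes.setdefault(key, []).append((A, B))

print(len(full), len(classes))  # 6 original concepts fall into 4 classes
```

Whether each such class is a convex sublattice of the original lattice is exactly the question the paper characterizes.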

    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to the areas of Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in theoretical and/or practical fields, using new techniques and with special emphasis on applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for the discussion and debate of ideas of interest to the scientific community in general, and is expected to foster new scientific collaborations among colleagues, namely in Masters and PhD projects. The event is open to the entire scientific community (with or without a communication/poster).

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.

    Proceedings of the 5th International Workshop "What can FCA do for Artificial Intelligence?", FCA4AI 2016 (co-located with ECAI 2016, The Hague, Netherlands, August 30th, 2016)

    These are the proceedings of the fifth edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification that can be used for many purposes, especially for Artificial Intelligence (AI) needs. The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge representation and reasoning, learning, data mining, NLP, information retrieval), and how FCA can be extended in order to help AI researchers solve new and complex problems in their domain. Accordingly, topics of interest are related to the following: (i) extensions of FCA for AI: pattern structures, projections, abstractions; (ii) knowledge discovery based on FCA: classification, data mining, pattern mining, functional dependencies, biclustering, stability, visualization; (iii) knowledge processing based on concept lattices: modeling, representation, reasoning; (iv) application domains: natural language processing, information retrieval, recommendation, mining of the web of data and of social networks, etc.

    Vertex Operators and Modular Forms

    The leitmotif of these Notes is the idea of a vertex operator algebra (VOA) and the relationship between VOAs and elliptic functions and modular forms. This is to some extent analogous to the relationship between a finite group and its irreducible characters; the algebraic structure determines a set of numerical invariants, and the arithmetic properties of the invariants provide feedback in the form of restrictions on the algebraic structure. One of the main points of these Notes is to explain how this works, and to give some reasonably interesting examples. Comment: 118 pages. These are notes based on a series of lectures at the Graduate Workshop "A Window into Zeta and Modular Physics" at MSRI, Berkeley, June 2008 (http://www.msri.org/calendar/sgw/WorkshopInfo/449/show_sgw). Submitted to Mathematical Sciences Research Institute Publications.

    Congruences and factorization as reduction tools in formal concept analysis

    Since its introduction in the early 1980s by B. Ganter and R. Wille, Formal Concept Analysis (FCA) has been one of the most extensively developed mathematical tools for data analysis. FCA is a mathematical theory that determines conceptual structures among datasets. In particular, databases are formally interpreted in this theory through the notion of a context, which is determined by a set of objects, a set of attributes and a relation between the two sets. The tools provided by FCA make it possible to manipulate the data appropriately and to extract relevant information from them. One of the most important lines of research is the reduction of the set of attributes contained in these datasets, preserving the essential information and eliminating any redundancy they may contain. Attribute reduction has also been studied in other settings, such as Rough Set Theory, as well as in the various fuzzy generalizations of both theories. In FCA, it has been shown that carrying out an attribute reduction of a formal context induces an equivalence relation on the set of concepts of the original context. This induced equivalence relation has a particularity: its equivalence classes have the structure of a join-semilattice with a maximum element, that is, in general they do not form closed algebraic structures. In this thesis we study how attribute reductions can be complemented by endowing the equivalence classes with a closed algebraic structure. The notion of congruence achieves this purpose; however, the use of this kind of equivalence relation can lead to a great loss of information, because the equivalence classes group together too many concepts.
To address this problem, this thesis introduces a weakened notion of congruence that we call a local congruence. Local congruences give rise to equivalence classes with the structure of a convex sublattice, being more flexible when grouping concepts while retaining properties that are interesting from an algebraic point of view. A general discussion is presented of the main results on the study and application of local congruences obtained throughout the research carried out during the thesis. In particular, the notion of local congruence is introduced together with an analysis of the properties it satisfies, as well as an ordering relation on the set of equivalence classes. Furthermore, we carry out an in-depth analysis of the impact of using local congruences in FCA, both on the formal context and on the concept lattice. In this analysis we identify those equivalence classes of the relation induced by an attribute reduction on which the local congruence would act, grouping concepts differently in order to obtain convex sublattices. Additionally, we study the use of local congruences when the attribute reduction under consideration removes all unnecessary attributes of the context, obtaining interesting results. We present several mechanisms for computing local congruences and applying them to concept lattices, detailing the modifications made to the formal context in order to provide a reduction method based on local congruences. On the other hand, another strategy that allows us to reduce the complexity of analysing formal contexts is factorization. Factorization procedures make it possible to divide a context into two or more smaller formal subcontexts, which can then be studied separately more easily.
A preliminary study is presented on the factorization of fuzzy formal contexts using modal operators, which has not yet been published in a journal. These modal operators have already been used to extract independent subcontexts from a classical formal context, thus obtaining a factorization of the original context. In this thesis we also study several properties that help us better understand how the decomposition of Boolean data tables works, and then adapt those properties to the multi-adjoint framework. The study of these general properties in the multi-adjoint framework will be highly relevant for obtaining, in the future, a procedure for factorizing multi-adjoint formal contexts. Obtaining factorization mechanisms for multi-adjoint contexts will therefore be key to the analysis and treatment of large databases.

    Riveting two-dimensional materials: exploring strain physics in atomically thin crystals with microelectromechanical systems

    Two-dimensional (2D) materials can withstand an order of magnitude more strain than their bulk counterparts, which results in dramatic changes to electrical, thermal and optical properties. These changes can be harnessed for technological applications such as tunable light-emitting diodes or field-effect transistors, or utilized to explore novel physics like exciton confinement, pseudo-magnetic fields (PMFs), and even quantum gravity. However, current techniques for straining atomically thin materials offer limited control over the strain field, and require bulky pressure chambers or large beam-bending equipment. This dissertation describes the development of micro-electromechanical systems (MEMS) as a platform for precisely controlling the magnitude and orientation of the strain field in 2D materials. MEMS are a versatile platform for studying strain physics: mechanical, electrical, thermal and optical probes can all be easily incorporated into their design. Further, because of their small size and compatibility with electronics manufacturing methods, there is an achievable pathway from the laboratory bench to real-world application. Nevertheless, the incorporation of atomically thin crystals with MEMS has been hampered by fragile, non-planar structures and low-friction interfaces. We have innovated two techniques to overcome these critical obstacles: micro-structure-assisted transfer to place the 2D materials on the MEMS gently and precisely, and micro-riveting to create a slip-free interface between the 2D materials and the MEMS. With these advancements, we were able to strain monolayer molybdenum disulfide (MoS2) to greater than 1% with a MEMS for the first time. The dissertation develops the theoretical underpinnings of this result, including original work on the theory of operation of MEMS chevron actuators and on strain-generated PMFs in transition metal dichalcogenides, a large class of 2D materials.
We conclude the dissertation with a roadmap to guide and inspire future physicists and engineers exploring strain in 2D systems and their applications. The roadmap contains ideas for next-generation fabrication techniques to improve yield and sample quality and to add capabilities. We have also included in the roadmap proposals for experiments, such as a speculative technique for realizing topological quantum field theories that mimics recent theoretical wire-construction methods.

    Periodic solutions and chaotic dynamics in a Duffing equation model of charged particles

    The emergence of chaotic behavior in many physical systems has triggered the curiosity of scientists for a long time. Their study has concentrated on understanding the underlying laws that govern such dynamics, with the eventual aim of suppressing such (often) undesired behavior. In layman's terms, a system is called chaotic when two orbits that are initially very near each other diverge in exponential time. Clearly, this means that a chaotic system can hardly have regular behavior, a property that is often required even of human-made systems. An example is that of particle accelerators, widely used in experimental physics. The main principle is to force a large number of particles to move periodically in a toroidal space in order to collide with each other. Another example is the tokamak, a particular accelerator built to generate plasma, one of the states of matter. In both cases, it is crucial for the accelerating process that the particles behave regularly and periodically rather than chaotically. In this dissertation, we have studied the question of chaos in mathematical models for the motion of magnetically charged particles inside the tokamak in the presence or absence of plasma. We start from a model introduced by Cambon et al., which describes the above problem in general mathematical terms, also known as the Duffing model. The central core of this work reviews the necessary mathematical tools to tackle this problem, such as the theory of Linked Twist Maps and the variational Hamiltonian equations which describe the evolutionary dynamics of the system under consideration. Extensive analytical and numerical tools, known as chaos indicators, are required in this thesis work to investigate the presence of chaos. The main ones we have used here are the Poincaré Map, the Maximum Lyapunov Exponent (MLE), and the SALI and GALI methods.
Using the techniques mentioned above, we have studied our problem analytically and validated our results numerically for the particular case of the Duffing equation, which applies to the motion of charged particles in the tokamak. In detail, we first discuss the presence of chaotic dynamics of charged particles inside an idealized magnetic field, suggested by a tokamak-type configuration. Our model is based on a periodically perturbed Hamiltonian system in a half-plane r > 0. We propose a simple mechanism producing complex dynamics, based on the theory of Linked Twist Maps jointly with the method of stretching along the paths. A key step in our argument relies on the monotonicity of the period map associated with the unperturbed planar system. In the second part of our results, we give an analytical proof of the presence of complex dynamics for a model of charged particles in a magnetic field. Our method is based on the theory of topological horseshoes and is applied to a periodically perturbed Duffing equation. The existence of chaos is proved for sufficiently large, but explicitly computable, periods. In the latter part, we study the aforementioned generalized Duffing equations and study their chaoticity using the Melnikov topological method, verifying the results numerically for the models of Wang & You and for the tokamak one.
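The Maximum Lyapunov Exponent mentioned above can be estimated numerically with the classical Benettin two-trajectory method. The sketch below does so for a standard periodically forced Duffing oscillator; the parameters (d = 0.3, g = 0.5, w = 1.2) are a commonly cited chaotic regime from the literature, not the thesis model:

```python
import math

# Forced Duffing oscillator  x'' + d*x' - x + x^3 = g*cos(w*t).
# Parameter choice is a standard chaotic regime (illustrative assumption).
D, G, W = 0.3, 0.5, 1.2

def f(t, s):
    """Vector field in first-order form (x, v)."""
    x, v = s
    return (v, -D * v + x - x**3 + G * math.cos(W * t))

def rk4_step(t, s, h):
    """One classical Runge-Kutta 4 step."""
    k1 = f(t, s)
    k2 = f(t + h/2, (s[0] + h/2*k1[0], s[1] + h/2*k1[1]))
    k3 = f(t + h/2, (s[0] + h/2*k2[0], s[1] + h/2*k2[1]))
    k4 = f(t + h, (s[0] + h*k3[0], s[1] + h*k3[1]))
    return (s[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def mle(steps=20000, h=0.01, d0=1e-8):
    """Benettin method: track two nearby orbits, renormalize each step."""
    s1, s2 = (0.1, 0.1), (0.1 + d0, 0.1)
    t, acc = 0.0, 0.0
    for _ in range(steps):
        s1 = rk4_step(t, s1, h)
        s2 = rk4_step(t, s2, h)
        t += h
        dx, dv = s2[0] - s1[0], s2[1] - s1[1]
        d = math.hypot(dx, dv)
        acc += math.log(d / d0)
        # pull the second orbit back to distance d0 along the current direction
        s2 = (s1[0] + dx * d0 / d, s1[1] + dv * d0 / d)
    return acc / (steps * h)

lam = mle()
print(lam)  # positive for this chaotic parameter choice
```

A positive estimate signals exponential divergence of nearby orbits, the same criterion the thesis probes with the MLE, SALI and GALI indicators.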