
    Data complexity measured by principal graphs

    How can one measure the complexity of a finite set of vectors embedded in a multidimensional space? This is a non-trivial question that can be approached in many different ways. Here we suggest a set of data complexity measures using universal approximators, principal cubic complexes. Principal cubic complexes generalise the notion of principal manifolds for datasets with non-trivial topologies. The type of a principal cubic complex is determined by its dimension and a grammar of elementary graph transformations. The simplest grammar produces principal trees. We introduce three natural types of data complexity: 1) geometric (deviation of the data's approximator from some "idealized" configuration, such as deviation from harmonicity); 2) structural (how many elements of a principal graph are needed to approximate the data); and 3) construction complexity (how many applications of elementary graph transformations are needed to construct the principal object, starting from the simplest one). We compute these measures for several simulated and real-life data distributions and show them in "accuracy-complexity" plots, helping to optimize the accuracy/complexity ratio. We discuss various issues connected with measuring data complexity. Software for computing data complexity measures from principal cubic complexes is provided as well. Comment: Computers and Mathematics with Applications, in press.
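
    The three measures lend themselves to a compact illustration. Below is a minimal Python sketch, not the authors' software, of growing a principal tree with a grammar reduced to a single hypothetical "attach a node" transformation, while tracking structural complexity, construction complexity, and accuracy; the candidate rule and step size are illustrative assumptions, and the geometric (harmonicity) measure is omitted.

    import numpy as np

    def nearest_node_sse(X, nodes):
        # Sum of squared distances from each data point to its nearest graph
        # node (the "accuracy" side of the accuracy-complexity plot).
        d = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
        return d.min(axis=1).sum()

    def grow_principal_tree(X, n_steps=10, step=0.1):
        # Start from the simplest object (one node at the data mean) and
        # greedily apply the transformation that most reduces the error.
        nodes, edges = [X.mean(axis=0)], []
        for _ in range(n_steps):  # each pass = one elementary transformation
            best = None
            for i, v in enumerate(nodes):
                # Hypothetical candidate rule: pull a new node from v toward
                # the data point currently farthest from v.
                far = X[np.argmax(((X - v) ** 2).sum(axis=1))]
                cand = v + step * (far - v)
                sse = nearest_node_sse(X, np.vstack(nodes + [cand]))
                if best is None or sse < best[0]:
                    best = (sse, cand, i)
            nodes.append(best[1])
            edges.append((best[2], len(nodes) - 1))
        structural = (len(nodes), len(edges))    # elements of the graph
        construction = n_steps                   # grammar applications used
        accuracy = nearest_node_sse(X, np.vstack(nodes)) / len(X)
        return structural, construction, accuracy

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    print(grow_principal_tree(X))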

    The 1st Conference of PhD Students in Computer Science


    A Note on Emergence in Multi-Agent String Processing Systems

    We propose a way to define (and, to a certain extent, even to measure) the phenomenon of emergence that appears in a complex system of interacting agents whose global behaviour can be described by a language, and whose components (agents) can also be associated with grammars and languages. The basic idea is to identify the "linear composition of behaviours" with "closure under basic operations", such as the AFL (Abstract Families of Languages) operations, which are standard in the theory of formal languages.
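
    To make the proposal concrete, here is a minimal Python sketch under strong simplifying assumptions: agent behaviours are finite string languages, the "linear composition of behaviours" is approximated by closing those languages under two AFL operations (union and concatenation, with Kleene star approximated by bounded repeated concatenation), and a global string counts as emergent if it falls outside that closure. The agent languages and the length bound are illustrative, not from the paper.

    from itertools import product

    def afl_closure(languages, max_len=6, rounds=3):
        # Closure of finite languages under union and concatenation,
        # truncated to strings of length <= max_len (star is approximated
        # by repeated concatenation within the bound).
        closure = set().union(*languages)  # union of all agent languages
        for _ in range(rounds):
            new = {u + v for u, v in product(closure, closure)
                   if len(u + v) <= max_len}
            if new <= closure:
                break
            closure |= new
        return closure

    agents = [{"ab"}, {"ba"}]                    # hypothetical agent languages
    global_behaviour = {"abba", "abab", "aabb"}  # hypothetical global strings

    composable = afl_closure(agents)
    emergent = global_behaviour - composable
    print(sorted(emergent))  # strings not explained by AFL composition: ['aabb']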

    From feature to paradigm: deep learning in machine translation

    In recent years, deep learning algorithms have revolutionized several areas, including speech, image and natural language processing, and the field of Machine Translation (MT) is no exception. Integration of deep learning in MT ranges from re-modeling existing features within standard statistical systems to the development of entirely new architectures. Among the different neural networks, research works use feed-forward neural networks, recurrent neural networks and the encoder-decoder schema. These architectures are able to tackle challenges such as low-resource settings or morphological variation. This manuscript focuses on describing how these neural networks have been integrated to enhance different aspects and models of statistical MT, including language modeling, word alignment, translation, reordering, and rescoring. We then report on the new neural MT approach, together with a description of the foundational related works and recent approaches on using subwords, characters and multilingual training, among others. Finally, we include an analysis of the corresponding challenges and future work in using deep learning in MT.
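
    As a concrete reference point for the encoder-decoder schema the survey describes, here is a minimal PyTorch sketch; the GRU choice, vocabulary sizes and dimensions are illustrative assumptions rather than the configuration of any system reviewed in the manuscript.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, src_vocab, emb=64, hid=128):
            super().__init__()
            self.embed = nn.Embedding(src_vocab, emb)
            self.rnn = nn.GRU(emb, hid, batch_first=True)

        def forward(self, src):          # src: (batch, src_len) token ids
            _, h = self.rnn(self.embed(src))
            return h                     # final hidden state summarizes source

    class Decoder(nn.Module):
        def __init__(self, tgt_vocab, emb=64, hid=128):
            super().__init__()
            self.embed = nn.Embedding(tgt_vocab, emb)
            self.rnn = nn.GRU(emb, hid, batch_first=True)
            self.out = nn.Linear(hid, tgt_vocab)

        def forward(self, tgt, h):       # teacher forcing during training
            o, h = self.rnn(self.embed(tgt), h)
            return self.out(o), h        # logits over the target vocabulary

    # Toy usage: encode a source batch, produce target logits per step.
    enc, dec = Encoder(src_vocab=100), Decoder(tgt_vocab=120)
    src = torch.randint(0, 100, (2, 7))  # batch of 2 source sentences
    tgt = torch.randint(0, 120, (2, 5))  # shifted target for teacher forcing
    logits, _ = dec(tgt, enc(src))
    print(logits.shape)                  # torch.Size([2, 5, 120])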

    A Global Workspace perspective on mental disorders

    Recent developments in Global Workspace theory suggest that human consciousness can suffer interpenetrating dysfunctions of mutual and reciprocal interaction with embedding environments, which will have early onset and an often insidiously staged developmental progression, possibly according to a cancer model. A simple rate distortion argument implies that, if an external information source is pathogenic, then sufficient exposure to it is sure to write a sufficiently accurate image of it on mind and body, in a punctuated manner, so as to initiate or promote a similarly punctuated, progressive developmental disorder. There can thus be no simple, reductionist brain-chemical 'bug in the program' whose 'fix' can fully correct the problem. On the contrary, the growth of an individual over the life course, and inevitable contact with a toxic physical, social, or cultural environment, can be expected to initiate developmental problems that become more intrusive over time, most obviously according to some damage-accumulation model, but likely according to far more subtle, highly punctuated schemes analogous to tumorigenesis. The key intervention, at the population level, is clearly to limit such exposures: a question of proper environmental sanitation, in a large sense, and a matter of social justice which has long been understood to be determined almost entirely by the interactions of cultural trajectory, group power relations, and economic structure with public policy. Intervention at the individual level appears limited to triggering or extending periods of remission, as is the case with most cancers.
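
    The "simple rate distortion argument" rests on a standard definition, sketched here in the usual information-theoretic notation; this is the textbook rate-distortion function, not the paper's own derivation.

    % The rate-distortion function: R(D) is the minimum information rate at
    % which a receiver can maintain an image \hat{X} of a source X within
    % average distortion D.
    \[
      R(D) = \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X})
    \]
    % R(D) is finite and nonincreasing in D, so sustained exposure at a
    % sufficient rate drives D down: the pathogenic source writes an ever
    % more accurate image of itself on the receiving system.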

    Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search

    This article describes how an abstract framework for perception and cognition may be realised in terms of neural mechanisms and neural processing. This framework — called information compression by multiple alignment, unification and search (ICMAUS) — has been developed in previous research as a generalized model of any system for processing information, either natural or artificial. It has a range of applications including the analysis and production of natural language, unsupervised inductive learning, recognition of objects and patterns, probabilistic reasoning, and others. The proposals in this article may be seen as an extension and development of Hebb's (1949) concept of a 'cell assembly'. The article describes how the concept of 'pattern' in the ICMAUS framework may be mapped onto a version of the cell assembly concept, and the way in which neural mechanisms may achieve the effect of 'multiple alignment' in the ICMAUS framework. By contrast with the Hebbian concept, it is proposed here that any one neuron can belong to one and only one assembly. A key feature of the present proposals, which is not part of the Hebbian concept, is that any cell assembly may contain 'references' or 'codes' that serve to identify one or more other cell assemblies. This mechanism allows information to be stored in a compressed form, provides a robust means by which assemblies may be connected to form hierarchies and other kinds of structure, means that assemblies can express abstract concepts, and offers solutions to some of the other problems associated with cell assemblies. Drawing on insights derived from the ICMAUS framework, the article also describes how learning may be achieved with neural mechanisms. This concept of learning is significantly different from the Hebbian concept and appears to provide a better account of what we know about human learning.
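
    One way to picture the compression mechanism is as a data structure in which assemblies hold either raw symbols or references to other assemblies. The following Python sketch is an illustrative toy, not Wolff's ICMAUS model; the assembly names and contents are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Assembly:
        name: str
        elements: list = field(default_factory=list)  # symbols or Assembly refs

        def expand(self):
            # Unfold references to recover the full (uncompressed) pattern.
            out = []
            for e in self.elements:
                out.extend(e.expand() if isinstance(e, Assembly) else [e])
            return out

    # A shared sub-assembly is stored once and referenced from two higher
    # assemblies -- the sense in which references compress stored information.
    noun_phrase = Assembly("NP", ["the", "cat"])
    s1 = Assembly("S1", [noun_phrase, "sat"])
    s2 = Assembly("S2", [noun_phrase, "slept"])
    print(s1.expand())  # ['the', 'cat', 'sat']
    print(s2.expand())  # ['the', 'cat', 'slept']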

    Semantic networks

    A semantic network is a graph of the structure of meaning. This article introduces semantic network systems and their importance in Artificial Intelligence, followed by I. the early background; II. a summary of the basic ideas and issues, including link types, frame systems, case relations, link valence, abstraction, inheritance hierarchies and logic extensions; and III. a survey of ‘world-structuring’ systems, including ontologies, causal link models, continuous models, relevance, formal dictionaries, semantic primitives and intersecting inference hierarchies. Speed and practical implementation are briefly discussed. The conclusion argues for a synthesis of relational graph theory, graph-grammar theory and order theory, based on semantic primitives and multiple intersecting inference hierarchies.
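
    One of the basic ideas surveyed, inheritance hierarchies with exceptions, admits a compact illustration. The following Python sketch uses hypothetical nodes and properties: a lookup checks the node itself, then follows ISA links upward until the property is found.

    class Node:
        def __init__(self, name, isa=None, **props):
            self.name, self.isa, self.props = name, isa, props

        def get(self, prop):
            # Look up a property locally, then follow the ISA link upward.
            if prop in self.props:
                return self.props[prop]
            return self.isa.get(prop) if self.isa else None

    animal = Node("animal", can_move=True)
    bird = Node("bird", isa=animal, can_fly=True)
    penguin = Node("penguin", isa=bird, can_fly=False)  # exception overrides

    print(penguin.get("can_fly"))   # False (local override wins)
    print(penguin.get("can_move"))  # True  (inherited from 'animal')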