
    Fuzzy linear programming problems : models and solutions

    We investigate various types of fuzzy linear programming problems in terms of their models and solution methods. First, we review fuzzy linear programming problems with fuzzy decision variables and fuzzy linear programming problems with fuzzy parameters (fuzzy numbers in the definition of the objective function or constraints), along with the associated duality results. Then, we review fully fuzzy linear programming problems, in which all variables and parameters are allowed to be fuzzy. Most methods for solving such problems are based on ranking functions, alpha-cuts, duality results, or penalty functions; in these methods, the fuzzy problem is reduced to a crisp formulation. Recently, some heuristic algorithms have also been proposed, some of which solve the fuzzy problem directly, while others solve the crisp problems approximately.
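    As a concrete illustration of the ranking-function route mentioned in the abstract, the sketch below (an editorial example, not taken from the paper) defuzzifies triangular fuzzy coefficients with a simple centroid ranking and solves the resulting crisp LP with SciPy. The problem data, variable names, and choice of ranking function are all illustrative assumptions.

```python
# Minimal sketch of one common solution route for fuzzy LPs: replace each
# triangular fuzzy coefficient (l, m, u) by a ranking value and solve the
# resulting crisp LP. All numbers here are made up for illustration.
import numpy as np
from scipy.optimize import linprog

def rank(tri):
    """Centroid ranking of a triangular fuzzy number (l, m, u)."""
    l, m, u = tri
    return (l + m + u) / 3.0

# maximize c1*x1 + c2*x2 with fuzzy costs, constraint coefficients, and RHS
fuzzy_c = [(3, 4, 5), (1, 2, 3)]
fuzzy_A = [[(1, 2, 3), (2, 3, 4)],
           [(3, 4, 5), (0, 1, 2)]]
fuzzy_b = [(8, 10, 12), (10, 12, 14)]

c = -np.array([rank(t) for t in fuzzy_c])            # linprog minimizes, so negate
A = np.array([[rank(t) for t in row] for row in fuzzy_A])
b = np.array([rank(t) for t in fuzzy_b])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print("crisp surrogate optimum:", res.x, "objective value:", -res.fun)
```

    Alpha-cut based methods differ mainly in this reduction step: instead of a single ranking value, they solve a family of crisp problems parameterized by the cut level.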

    English Index


    A Rose is a Rose is a Rose


    Discrimination in lexical decision.

    In this study we present a novel set of discrimination-based indicators of language processing derived from Naive Discriminative Learning (NDL) theory. We compare the effectiveness of these new measures with classical lexical-distributional measures, in particular frequency counts and form-similarity measures, in predicting lexical decision latencies when a complete morphological segmentation of masked primes is or is not possible. Data derive from a re-analysis of a large subset of decision latencies from the English Lexicon Project, as well as from the results of two new masked priming studies. Results demonstrate the superiority of discrimination-based predictors over lexical-distributional predictors alone, across both the simple and primed lexical decision tasks. Comparable priming after masked corner-type and cornea-type primes, across two experiments, fails to support early obligatory segmentation into morphemes as predicted by the morpho-orthographic account of reading. Results fit well with NDL theory, which, in conformity with Word and Paradigm theory, rejects the morpheme as a relevant unit of analysis. Furthermore, results indicate that readers with greater spelling proficiency and larger vocabularies make better use of orthographic priors and handle lexical competition more efficiently.
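    As an editorial illustration of what discrimination-based measures are built from, the sketch below implements the Rescorla-Wagner update that underlies NDL on a toy corpus of letter-bigram cues and lexical outcomes. The corpus, the cue definition, and the learning-rate settings are assumptions for the example, not the study's actual materials or code.

```python
# Rescorla-Wagner learning of cue-to-outcome weights, the core of NDL:
# orthographic cues (letter bigrams) are associated with lexical outcomes,
# and summed weights give activation-based predictors of lexical decision.
from collections import defaultdict

def bigrams(word):
    padded = "#" + word + "#"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

weights = defaultdict(float)                 # (cue, outcome) -> association weight
outcomes = {"corner", "cornea", "rose"}
corpus = [("corner", "corner"), ("cornea", "cornea"), ("rose", "rose")] * 50

learning_rate, lam = 0.01, 1.0               # toy settings
for form, meaning in corpus:
    cues = bigrams(form)
    for outcome in outcomes:
        total = sum(weights[(c, outcome)] for c in cues)
        target = lam if outcome == meaning else 0.0
        delta = learning_rate * (target - total)
        for c in cues:
            weights[(c, outcome)] += delta

def activation(string, outcome):
    """Activation of a lexical outcome given a (possibly prime) letter string."""
    return sum(weights[(c, outcome)] for c in bigrams(string))

# A corner-type prime activates its own outcome and partially activates cornea,
# since the two share most of their bigram cues.
print(activation("corner", "corner"), activation("corner", "cornea"))
```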

    The Lexicon Graph Model : a generic model for multimodal lexicon development

    Trippel T. The Lexicon Graph Model : a generic model for multimodal lexicon development. Bielefeld (Germany): Bielefeld University; 2006.

    The Lexicon Graph Model provides a model and framework for lexicons that can be corpus-based and contain multimodal information. The focus is on the lexicon-theory perspective, looking at the underlying data structures that are part of existing lexicons and of the annotations from which lexicons are built. The term lexicon in linguistics and artificial intelligence is used in different ways, covering traditional print dictionaries in book form, CD-ROM editions, Web-based versions of the same, but also computerized resources of similar structure used by applications; such applications range from systems for human-machine communication to spell checkers. In this work, lexicon is used as the most generic term covering all of these lexical applications. Existing formalisms and approaches in lexicon development exhibit various problems, for example combining different kinds of lexical resources into one, disambiguating ambiguities on different lexical levels, representing other modalities in a lexicon, and selecting the lexical key for lexicon entries. The Lexicon Graph Model presupposes that lexicons differ in content but not in their fundamental structure, which makes it possible to combine different kinds of lexicons, free of duplicates, in a unification process; the result is a declarative lexicon. The underlying model is a graph, the Lexicon Graph, which is modeled analogously to the Annotation Graphs described by Bird and Liberman and can therefore be processed in a similar way.
    The investigation of the lexicon formalism proceeds in four steps: the analysis of existing lexicons, the introduction of the Lexicon Graph Model as a generic representation for lexicons, the implementation of the formalism in different contexts, and an evaluation of the formalism. It is shown that Annotation Graphs and Lexicon Graphs are indeed related, and not only in their formalism, and it is shown which standards annotations have to meet to be usable for lexicon development.
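    The central claim, that differently structured lexicons can be merged free of duplicates because they share one underlying graph structure, can be made concrete with a small sketch. The following is an editorial illustration under assumed data structures (typed nodes, labelled edges, set union as unification), not Trippel's implementation.

```python
# A lexicon stored as a graph: nodes are typed values (a form, a category, a
# pronunciation), labelled edges relate them, and unification of two lexicons
# is simply a duplicate-free union of their node and edge sets.
from dataclasses import dataclass, field

@dataclass
class LexiconGraph:
    nodes: set = field(default_factory=set)        # e.g. ("form", "rose")
    edges: set = field(default_factory=set)        # (source, label, target)

    def add_entry(self, form, relation, value_type, value):
        src, tgt = ("form", form), (value_type, value)
        self.nodes.update({src, tgt})
        self.edges.add((src, relation, tgt))

    def unify(self, other):
        """Duplicate-free union of two lexicon graphs."""
        return LexiconGraph(self.nodes | other.nodes, self.edges | other.edges)

print_dict = LexiconGraph()
print_dict.add_entry("rose", "pos", "category", "noun")

pron_dict = LexiconGraph()
pron_dict.add_entry("rose", "pronunciation", "ipa", "\u0279o\u028az")
pron_dict.add_entry("rose", "pos", "category", "noun")   # overlapping fact

merged = print_dict.unify(pron_dict)
print(len(merged.nodes), len(merged.edges))   # the shared fact is stored only once
```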

    A generalization of totally unimodular and network matrices.

    In this thesis we discuss possible generalizations of totally unimodular and network matrices. Our purpose is to introduce new classes of matrices that preserve the advantageous properties of these well-known matrices. In particular, our focus is on the polyhedral consequences of totally unimodular matrices: we look for matrices that guarantee vertices which become integral when scaled by an integer k. We argue that simply generalizing the determinantal structure of totally unimodular matrices does not suffice to achieve this goal, and that one has to extend the range of values the inverses of submatrices can contain. To this end, we define k-regular matrices. We show that k-regularity is a proper generalization of total unimodularity in polyhedral terms, as it guarantees the scalability of vertices. Moreover, we prove that the k-regularity of a matrix is necessary and sufficient for substituting mod-k cuts for rank-1 Chvatal-Gomory cuts. In the second part of the thesis we introduce binet matrices, an extension of network matrices to bidirected graphs. We provide an algorithm to calculate the columns of a binet matrix using the underlying graphical structure. Using this method, we prove some results about binet matrices and demonstrate that several interesting classes of matrices are binet. We show that binet matrices are 2-regular; therefore they provide half-integral vertices for a polyhedron with a binet constraint matrix and an integral right-hand-side vector. We also prove that optimization over such a polyhedron can be carried out very efficiently, as there exists an extension of the network simplex method for binet matrices. Furthermore, integer optimization with binet matrices is equivalent to solving a matching problem. We also describe the connection of k-regular and binet matrices to other parts of combinatorial optimization, notably to matroid theory and regular vector spaces.
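    To make the determinantal condition behind total unimodularity concrete, the brute-force check below (an editorial sketch, not code from the thesis) tests whether every square submatrix of a small matrix has determinant in {-1, 0, 1}. The example matrices are assumptions: an interval matrix, which is known to be totally unimodular, and the incidence matrix of an odd cycle, which is not and whose associated polyhedra have half-integral rather than integral vertices.

```python
# Brute-force total unimodularity test: enumerate all square submatrices and
# check that each determinant lies in {-1, 0, 1}. Exponential in the matrix
# size, so only suitable for tiny examples; practical checks use structural
# criteria instead.
from itertools import combinations
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                    return False
    return True

interval = np.array([[1, 1, 0],        # consecutive ones in each row: TU
                     [0, 1, 1],
                     [0, 0, 1]], dtype=float)
odd_cycle = np.array([[1, 1, 0],       # incidence matrix of a 3-cycle:
                      [0, 1, 1],       # its own determinant is 2, so not TU
                      [1, 0, 1]], dtype=float)

print(is_totally_unimodular(interval))    # True
print(is_totally_unimodular(odd_cycle))   # False
```

    The odd-cycle example is the simplest case where only half-integrality survives, which is the kind of behaviour the 2-regularity of binet matrices captures in general.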