
    Linearity is strictly more powerful than contiguity for encoding graphs

    Linearity and contiguity are two parameters devoted to graph encoding. Linearity generalizes contiguity in the sense that every encoding achieving contiguity k induces an encoding achieving linearity k, both encodings having size Theta(k·n), where n is the number of vertices of the graph. In this paper, we prove that linearity is a strictly more powerful encoding than contiguity, i.e. there exists a graph family whose linearity is asymptotically negligible compared with its contiguity. We prove this by answering an open question on the worst-case linearity of a cograph on n vertices: we provide an O(log n / log log n) upper bound which matches the previously known lower bound.
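    The contiguity parameter above can be made concrete with a small sketch. Assuming the standard definition (an ordering achieves contiguity k if each vertex's neighborhood splits into at most k intervals of consecutive positions), the function names below are illustrative, not from the paper:

    ```python
    def intervals_needed(neigh, pos):
        """Number of maximal runs of consecutive positions covering neigh."""
        ps = sorted(pos[v] for v in neigh)
        if not ps:
            return 0
        runs = 1
        for a, b in zip(ps, ps[1:]):
            if b != a + 1:  # a gap in positions starts a new interval
                runs += 1
        return runs

    def contiguity_of_order(adj, order):
        """Max number of intervals any vertex needs under this ordering."""
        pos = {v: i for i, v in enumerate(order)}
        return max(intervals_needed(adj[v], pos) for v in adj)

    # A path 0-1-2-3 laid out in path order: each inner vertex's
    # neighborhood splits into two intervals, so the order achieves
    # contiguity 2.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(contiguity_of_order(adj, [0, 1, 2, 3]))  # → 2
    ```

    Linearity relaxes this by allowing the intervals to be spread over several orderings, which is what lets it drop below contiguity on the cograph family the paper studies.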


    A cognitive model of fiction writing.

    Models of the writing process are used to design software tools for writers who work with computers. This thesis is concerned with the construction of a model of fiction writing. The first stage in this construction is to review existing models of writing. Models of writing used in software design and writing research include behavioural, cognitive and linguistic varieties. The arguments of this thesis are, firstly, that current models do not provide an adequate basis for designing software tools for fiction writers. Secondly, research into writing is often based on questionable assumptions concerning language and linguistics, the interpretation of empirical research, and the development of cognitive models. It is argued that Saussure's linguistics provides an alternative basis for developing a model of fiction writing, and that Barthes' method of textual analysis provides insight into the ways in which readers and writers create meanings. The result of reviewing current models of writing is a basic model of writing, consisting of a cycle of three activities - thinking, writing, and reading. The next stage is to develop this basic model into a model of fiction writing by using narratology, textual analysis, and cognitive psychology to identify the kinds of thinking processes that create fictional texts. Remembering and imagining events and scenes are identified as basic processes in fiction writing; in cognitive terms, events are verbal representations, while scenes are visual representations. Syntax is identified as another distinct object of thought, to which the processes of remembering and imagining also apply. Genette's notion of focus in his analysis of text types is used to describe the role of characters in the writer's imagination: focusing the imagination is a process in which a writer imagines she is someone else, and it is shown how this process applies to events, scenes, and syntax. 
It is argued that a writer's story memory influences remembering and imagining; Todorov's work on symbolism is used to argue that interpretation plays the role in fiction writing of binding these two processes together. The role of naming in reading and its relation to problem solving is compared with its role in writing, and names or signifiers are added to the objects of thought in fiction writing. It is argued that problem solving in fiction writing is sometimes concerned with creating problems or mysteries for the reader, and it is shown how this process applies to events, scenes, signifiers and syntax. All these findings are presented in the form of a cognitive model of fiction writing. The question of testing is discussed, and the use of the model in designing software tools is illustrated by the description of a hypertextual aid for fiction writers.

    Using features for automated problem solving

    We motivate and present an architecture for problem solving where an abstraction layer of "features" plays the key role in determining which methods to apply. The system is presented in the context of theorem proving with Isabelle, and we demonstrate how this approach to encoding control knowledge differs expressively from other common techniques. We look closely at two areas where the feature layer may offer benefits to theorem proving, semi-automation and learning, and find strong evidence that in these particular domains the approach shows compelling promise. The system includes a graphical theorem-proving user interface for Eclipse ProofGeneral and is available from the project web page, http://feasch.heneveld.org.
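    The feature-layer idea can be sketched minimally: features are predicates computed from a goal, and control knowledge maps feature combinations to candidate methods. Everything below is a hypothetical illustration of that separation, not the actual Isabelle or feasch API:

    ```python
    # Hypothetical feature extractor: maps a goal string to a set of
    # feature names. Real systems would inspect the term structure.
    def extract_features(goal):
        feats = set()
        if "=" in goal:
            feats.add("has_equality")
        if any(q in goal for q in ("ALL", "EX")):
            feats.add("has_quantifier")
        if "+" in goal or "*" in goal:
            feats.add("arithmetic")
        return feats

    # Control knowledge lives in the feature layer, not in the methods:
    # each rule pairs a required feature set with a suggested method.
    RULES = [
        ({"has_equality", "arithmetic"}, "simp add: algebra_simps"),
        ({"has_quantifier"}, "auto"),
        ({"has_equality"}, "simp"),
    ]

    def suggest_methods(goal):
        """Return every method whose required features the goal exhibits."""
        feats = extract_features(goal)
        return [method for required, method in RULES if required <= feats]

    print(suggest_methods("ALL x. x + 0 = x"))
    # → ['simp add: algebra_simps', 'auto', 'simp']
    ```

    The point of the indirection is that both semi-automation (ranking suggestions for a user) and learning (adjusting which feature sets map to which methods) operate on the feature layer alone, without touching the underlying proof methods.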