
    Key objectives bank. Year 9


    A Data Mining Toolbox for Collaborative Writing Processes

    Collaborative writing (CW) is an essential skill in academia and industry. Providing support during the process of CW can be useful not only for achieving better quality documents, but also for improving the CW skills of the writers. In order to properly support collaborative writing, it is essential to understand how ideas and concepts are developed during the writing process, which consists of a series of steps of writing activities. These steps can be considered as sequence patterns comprising both time events and the semantics of the changes made during those steps. Two techniques can be combined to examine those patterns: process mining, which focuses on extracting process-related knowledge from event logs recorded by an information system; and semantic analysis, which focuses on extracting knowledge about what the student wrote or edited. This thesis contributes (i) techniques to automatically extract process models of collaborative writing processes and (ii) visualisations to describe aspects of collaborative writing. Together these form a data mining toolbox for collaborative writing, built on process mining, probabilistic graphical models, and text mining. First, I created a framework, WriteProc, for investigating collaborative writing processes, integrated with Google Docs, an existing cloud-based writing tool. Second, I created a new heuristic to extract the semantic nature of the text edits that occur in document revisions and to automatically identify the corresponding writing activities. Third, based on sequences of writing activities, I propose methods to discover writing process models and transitional state diagrams using a process mining algorithm, Heuristics Miner, and Hidden Markov Models, respectively. Finally, I designed three types of visualisations and made contributions to their underlying techniques for analysing writing processes.
All components of the toolbox are validated against annotated writing activities of real documents and a synthetic dataset. I also illustrate how the automatically discovered process models and visualisations are used in process analysis with real documents written by groups of graduate students. I discuss how these analyses can be used to gain further insight into how students work and create their collaborative documents.
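The transitional state diagrams mentioned above can be estimated from labelled sequences of writing activities by counting first-order transitions between states. The sketch below is a minimal illustration of that idea, not the thesis's implementation; the activity labels and sequences are hypothetical.

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate first-order transition probabilities between
    writing activities from labelled activity sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, following in zip(seq, seq[1:]):
            counts[current][following] += 1
    # Normalise each row of counts into a probability distribution.
    probs = {}
    for state, row in counts.items():
        total = sum(row.values())
        probs[state] = {nxt: n / total for nxt, n in row.items()}
    return probs

# Hypothetical activity labels; the thesis's own taxonomy may differ.
sequences = [
    ["brainstorm", "draft", "revise", "draft", "revise"],
    ["brainstorm", "draft", "draft", "revise"],
]
probs = transition_matrix(sequences)
print(probs["draft"])  # → {'revise': 0.75, 'draft': 0.25}
```

A Hidden Markov Model adds emission probabilities on top of such a transition structure, so that the observed edits are generated by latent writing states rather than by the labels directly.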

    A cognitive model of fiction writing.

    Models of the writing process are used to design software tools for writers who work with computers. This thesis is concerned with the construction of a model of fiction writing. The first stage in this construction is to review existing models of writing. Models of writing used in software design and writing research include behavioural, cognitive and linguistic varieties. This thesis argues, firstly, that current models do not provide an adequate basis for designing software tools for fiction writers and, secondly, that research into writing is often based on questionable assumptions concerning language and linguistics, the interpretation of empirical research, and the development of cognitive models. It is argued that Saussure's linguistics provides an alternative basis for developing a model of fiction writing, and that Barthes' method of textual analysis provides insight into the ways in which readers and writers create meanings. The result of reviewing current models of writing is a basic model consisting of a cycle of three activities: thinking, writing, and reading. The next stage is to develop this basic model into a model of fiction writing by using narratology, textual analysis, and cognitive psychology to identify the kinds of thinking processes that create fictional texts. Remembering and imagining events and scenes are identified as basic processes in fiction writing; in cognitive terms, events are verbal representations, while scenes are visual representations. Syntax is identified as another distinct object of thought, to which the processes of remembering and imagining also apply. Genette's notion of focus in his analysis of text types is used to describe the role of characters in the writer's imagination: focusing the imagination is a process in which a writer imagines she is someone else, and it is shown how this process applies to events, scenes, and syntax.
It is argued that a writer's story memory influences remembering and imagining; Todorov's work on symbolism is used to argue that, in fiction writing, interpretation plays the role of binding these two processes together. The role of naming in reading, and its relation to problem solving, is compared with its role in writing, and names or signifiers are added to the objects of thought in fiction writing. It is argued that problem solving in fiction writing is sometimes concerned with creating problems or mysteries for the reader, and it is shown how this process applies to events, scenes, signifiers and syntax. All these findings are presented in the form of a cognitive model of fiction writing. The question of testing is discussed, and the use of the model in designing software tools is illustrated by the description of a hypertextual aid for fiction writers.

    Task-based learning to improve reading skills in senior students at "Mario Oña Perdomo" high school in San Gabriel, 2021- 2022

    To compile TBL activities for the development of reading skills in senior students at "Mario Oña Perdomo" high school in San Gabriel, 2021-2022. This research project explored both pedagogical and learning elements related to reading in English through an online platform that employs task-based learning (TBL). The participants were third-year baccalaureate students at the Mario Oña Perdomo school, who had been found to have a low level of reading comprehension in English. The project explores theoretical concepts of the TBL methodology and its implementation in English classes, and describes techniques for improving reading comprehension in the foreign language that were adapted to an online platform. Students first took part in a qualitative survey to determine their reading habits and preferences, as well as various problems related to reading in English. Interviews were then conducted with both the vice-principal and the third-year English teacher of the institution regarding pedagogical and didactic aspects. From all the information gathered, it was determined that introducing a TBL-based online platform for the development of reading skills constitutes an innovative addition that can significantly improve the English reading skills of students at the Mario Oña Perdomo school.

    Towards a constructivist grammar curriculum for the United States

    The author argues that educators must forge an alternative approach to teaching grammar: the explicit, constructivist teaching of grammar within the meaningful context of a writing curriculum.

    Computing point-of-view : modeling and simulating judgments of taste

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 153-163). People have rich points-of-view that afford them the ability to judge the aesthetics of people, things, and everyday happenstance; yet viewpoint has an ineffable quality that is hard to articulate in words, let alone capture in computer models. Inspired by cultural theories of taste and identity, this thesis explores end-to-end computational modeling of people's tastes, from model acquisition, to generalization, to application, under various realms. Five aesthetical realms are considered: cultural taste, attitudes, ways of perceiving, taste for food, and sense-of-humor. A person's model is acquired by reading her personal texts, such as a weblog diary, a social network profile, or emails. To generalize a person model, methods such as spreading activation, analogy, and imprimer supplementation are applied to semantic resources and search spaces mined from cultural corpora. Once a generalized model is achieved, a person's tastes are brought to life through perspective-based applications, which afford the exploration of someone else's perspective through interactivity and play. The thesis describes the model acquisition systems implemented for each of the five aesthetical realms. The techniques of 'reading for affective themes' (RATE) and 'culture mining' are described, along with their enabling technologies, commonsense reasoning and textual affect analysis. Finally, six perspective-based applications were implemented to illuminate a range of real-world beneficiaries of person modeling: virtual mentoring, self-reflection, and deep customization. By Xinyu Hugo Liu. Ph.D.
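Spreading activation, one of the generalization methods named above, propagates evidence from seed concepts outward through a weighted semantic network, attenuating with each hop. The sketch below is a minimal illustration of the general technique, not the thesis's system; the toy graph, edge weights, and decay parameter are invented for the example.

```python
def spread_activation(graph, seeds, decay=0.5, iterations=2):
    """Propagate activation from seed concepts through a weighted
    semantic network, attenuating by `decay` at each hop."""
    activation = dict(seeds)
    for _ in range(iterations):
        nxt = dict(activation)
        for node, act in activation.items():
            for neighbour, weight in graph.get(node, {}).items():
                # Each neighbour accumulates decayed activation.
                nxt[neighbour] = nxt.get(neighbour, 0.0) + act * weight * decay
        activation = nxt
    return activation

# Toy semantic network; nodes and weights are illustrative only.
graph = {
    "jazz": {"improvisation": 0.9, "coffee shop": 0.4},
    "improvisation": {"creativity": 0.8},
}
result = spread_activation(graph, {"jazz": 1.0})
```

After two iterations the seed's activation has reached the two-hop neighbour "creativity", which is how a model generalizes a stated taste to related concepts never mentioned in the person's texts.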

    Making Machines Learn. Applications of Cultural Analytics to the Humanities

    The digitization of several million books by Google in 2011 meant the popularization of a new kind of humanities research powered by the treatment of cultural objects as data. Culturomics, as it is called, was born, and other initiatives resonated with this methodological approach, as is the case with the recently formed Digital Humanities or Cultural Analytics. Intrinsically, these new quantitative approaches to culture all borrow techniques and methods developed under the wing of the exact sciences, such as computer science, machine learning and statistics. There are numerous examples of studies that take advantage of the possibilities that treating objects as data has to offer for the understanding of the human. This new data science, now applied to current trends in culture, can also be replicated to study the more traditional humanities. Led by proper intellectual inquiry, an adequate use of technology may bring answers to questions intractable by other means, or add evidence to long-held assumptions based on a canon built from few examples. This dissertation argues in favor of such an approach. Three case studies are considered. First, in the general sense of big and smart data, we collected and analyzed more than 120,000 pictures of paintings from all periods of art history, to gain a clear insight into how the beauty of depicted faces, in the framework of neuroscience and evolutionary theory, has changed over time. A second study covers the nuances of the modes of emotion employed by the Spanish Golden Age playwright Calderón de la Barca to empathize with his audience. By means of sentiment analysis, a technique strongly supported by machine learning, we shed some light on the different fictional characters, how they interact, and how they convey messages otherwise invisible to the public.
The last case is a study of non-traditional authorship attribution techniques applied to the forefather of the modern novel, the Lazarillo de Tormes. In the end, we conclude that the successful application of cultural analytics and computer science techniques to traditional humanistic endeavours has been enriching and validating.
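The abstract does not name which attribution techniques were applied to the Lazarillo; one standard non-traditional method is Burrows's Delta, which compares documents by the mean absolute difference of their z-scored frequencies of common function words. The sketch below illustrates that measure only, under invented toy frequencies; real studies use hundreds of the most frequent words across large reference corpora.

```python
import statistics

def zscores(freqs, means, stdevs):
    """Standardise a document's word frequencies against corpus norms."""
    return {w: (freqs[w] - means[w]) / stdevs[w] for w in means}

def burrows_delta(doc_a, doc_b, corpus):
    """Burrows's Delta between two documents: mean absolute difference
    of z-scored word frequencies, standardised against the corpus."""
    words = corpus[0].keys()
    means = {w: statistics.mean(d[w] for d in corpus) for w in words}
    stdevs = {w: statistics.pstdev(d[w] for d in corpus) for w in words}
    za, zb = zscores(doc_a, means, stdevs), zscores(doc_b, means, stdevs)
    return statistics.mean(abs(za[w] - zb[w]) for w in words)

# Toy relative frequencies of two Spanish function words (invented).
corpus = [{"de": 0.05, "que": 0.03}, {"de": 0.07, "que": 0.05}]
anon = {"de": 0.055, "que": 0.032}
print(burrows_delta(anon, corpus[0], corpus))
```

A lower Delta against a candidate author's known texts is taken as evidence for that attribution; an identical document yields a Delta of zero.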

    Extracting sentences recommended to annotate for understanding writer's opinions in a document


    A Semi-Supervised Approach to the Construction of Semantic Lexicons

    A growing number of applications require dictionaries of words belonging to semantic classes present in specialized domains. Manually constructed knowledge bases often do not provide sufficient coverage of specialized vocabulary and require substantial effort to build and keep up to date. In this thesis, we propose a semi-supervised approach to the construction of domain-specific semantic lexicons based on the distributional similarity hypothesis. Our method starts with a small set of seed words representing the target class and an unannotated text corpus. It locates instances of seed words in the text and generates lexical patterns from their contexts; these patterns in turn extract more words and phrases that belong to the semantic category, in an iterative manner. This bootstrapping process can be continued until the output lexicon reaches the desired size. We explore techniques such as learning lexicons for multiple semantic classes at the same time and using feedback from competing lexicons to increase learning precision. Evaluated on the extraction of dish names and subjective adjectives from a corpus of restaurant reviews, our approach demonstrates great flexibility in learning various word classes, as well as performance improvements over state-of-the-art bootstrapping and distributional similarity techniques for the extraction of semantically similar words. Its shallow lexical patterns also prove superior to syntactic patterns in capturing the semantic class of words.
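The seed-pattern-harvest loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not the thesis's method: the pattern shape (the two tokens preceding a known member) and the toy review snippets are invented, and a real system would score and filter patterns rather than accept every match.

```python
import re

def bootstrap(corpus, seeds, iterations=2):
    """Toy bootstrapping: derive context patterns from known lexicon
    members, then harvest new members matching those patterns."""
    lexicon = set(seeds)
    for _ in range(iterations):
        patterns = set()
        for sent in corpus:
            for word in lexicon:
                # Pattern = the two tokens preceding a known member.
                m = re.search(r"(\w+ \w+) " + re.escape(word), sent)
                if m:
                    patterns.add(m.group(1))
        for sent in corpus:
            for pat in patterns:
                # Harvest the token following a learned pattern.
                m = re.search(re.escape(pat) + r" (\w+)", sent)
                if m:
                    lexicon.add(m.group(1))
    return lexicon

# Hypothetical restaurant-review snippets; "ramen" seeds the dish-name class.
corpus = [
    "we ordered the ramen yesterday",
    "we ordered the gyoza too",
]
print(sorted(bootstrap(corpus, {"ramen"})))  # → ['gyoza', 'ramen']
```

The pattern "ordered the" learned from the seed generalizes to the second sentence and pulls in "gyoza"; unchecked, such loops drift off-class, which is why the thesis's use of competing lexicons as mutual feedback matters.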