
    Representing Experimental Procedures through Diagrams at CERN’s Large Hadron Collider: The Communicatory Value of Diagrammatic Representations in Collaborative Research

    The aim of this paper is to elucidate the use and role of diagrams in the design of present-day high-energy physics experiments. To this end, drawing upon a prominent account of diagrammatic representations advanced by the cognitive scientists Jill Larkin and Herbert Simon, I provide an analysis of the diagrammatic representations of the data selection and acquisition procedures presented in the Technical Design Report of the ATLAS experiment at CERN’s Large Hadron Collider, where the Higgs particle was discovered in 2012. Based upon this analysis, I argue that diagrams are more useful than texts in organizing and communicating the procedural information concerning the design of the aforementioned experimental procedures in the ATLAS experiment. Moreover, I point out that, by virtue of their representational features, diagrams have a particular communicatory value in the collaborative work of designing the data acquisition system of the ATLAS experiment.

    Human inference beyond syllogisms: an approach using external graphical representations.

    Research in psychology on reasoning has often been restricted to relatively inexpressive statements involving quantifiers (e.g. syllogisms). This restricts attention to situations that typically do not arise in practical settings such as ontology engineering. In order to provide an analysis of inference, we focus on reasoning tasks presented in external graphical representations where statements correspond to those involving multiple quantifiers and unary and binary relations. Our experiment measured participants' performance when reasoning with two notations. The first notation used topological constraints to convey information via node-link diagrams (i.e. graphs). The second used both topological and spatial constraints to convey information (Euler diagrams with additional graph-like syntax). We found that topo-spatial representations were more effective for inferences than topological representations alone. Reasoning with statements involving multiple quantifiers was harder than reasoning with single quantifiers in topological representations, but not in topo-spatial representations. These findings are compared to those from sentential reasoning tasks.
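    A hypothetical Python sketch (encodings of my own, not the study's experimental materials) of the contrast described above: the same statements, one of them multiply quantified, are written once in a purely topological node-link notation, where every fact becomes a labelled edge, and once in a topo-spatial notation, where containment and disjointness of Euler curves carry the set-level facts and only the binary relation keeps graph-like arrow syntax.

        # Hypothetical encodings (not the study's materials) of three statements:
        # "all students are persons", "no person is a course",
        # and "every student attends some course" (multiple quantifiers).

        # Notation 1: topological constraints only, i.e. a node-link diagram (graph).
        # Every statement, set-level or relational, is carried by a labelled edge.
        node_link = {
            "nodes": ["Student", "Person", "Course"],
            "edges": [
                {"from": "Student", "label": "subset_of", "to": "Person"},
                {"from": "Person", "label": "disjoint_with", "to": "Course"},
                {"from": "Student", "label": "attends", "to": "Course",
                 "quantifiers": ("forall", "exists")},
            ],
        }

        # Notation 2: topological and spatial constraints, i.e. an Euler diagram with
        # added graph-like syntax. Set-level facts are read off curve containment and
        # disjointness in the plane; only the binary relation remains an arrow.
        euler_diagram = {
            "curves": ["Student", "Person", "Course"],
            "containment": [("Student", "Person")],   # Student drawn inside Person
            "disjointness": [("Person", "Course")],   # non-overlapping curves
            "arrows": [
                {"from": "Student", "label": "attends", "to": "Course",
                 "quantifiers": ("forall", "exists")},
            ],
        }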

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    Diagrams: Computational Modeling and Spatial Assistance. What Makes a Bunch of Marks a Diagrammatic Representation, and Another Bunch a Sentential Representation?

    While much has been written on the difference between diagrammatic and sentential representations (sometimes also called propositional or symbolic representations), the explanations do not address the fact that, when the representations are viewed as marks on paper, interpreting the marks in both cases involves visual perception that takes account of the spatial arrangement of the marks. This paper takes account of the sense in which both – sentential representations and those that are generally taken to be diagrams – are indeed diagrams, but of different things, and identifies the different roles played by the causality of the physical 2-D space in providing support for the representations. The problem: a fellow researcher in diagrammatic reasoning told me of a conversation with a well-known logician, who wanted to know what was so different about diagrams from predicate calculus sentential representations. After all, he said, both types of representations are made up of marks on paper, and in both types visual perception is used to interpret the marks to get at whatever is being represented. The symbol tokens and their sequencing are also kinds of diagram, since their spatial properties and relations play an essential role in their interpretation. What makes a bunch of marks on a piece of paper a diagrammatic representation, whereas another bunch of marks is a sentential or symbolic representation? Of course, many of us have intuitions about the distinction between diagrammatic and sentential representations, and much has been written on this general issue (see the survey by [Shimojima 2001]), but these accounts do not emphasize the physical nature of the representations as spatially distributed marks on paper in both cases. Thus the specific question that the logician asked has not been satisfactorily answered in the literature so far. This paper is an attempt to provide an answer.

    Ontological foundations for structural conceptual models

    In this thesis, we aim at contributing to the theory of conceptual modeling and ontology representation. Our main objective here is to provide ontological foundations for the most fundamental concepts in conceptual modeling. These foundations comprise a number of ontological theories, which are built on established work in philosophical ontology, cognitive psychology, philosophy of language and linguistics. Together these theories amount to a system of categories and formal relations known as a foundational ontology.

    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. However, the required component abstractions are frequently not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition, and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language in order to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language's context-free grammar, the composition result may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and to handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine. Moreover, the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars in order to compose fragments in the respective languages. Hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models as a means to define component models for partially specified languages while the whole language is still supported. Additionally, a scalable workflow for agile composition-system development is proposed, which supports the development of ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT. It supports "classic", well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C preprocessor-like macro language. In turn, these composition systems are used as composers in several example applications, such as a library of parallel algorithmic skeletons.
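    A minimal, hypothetical Python sketch of the core composition step described above. It is not the SkAT API (SkAT is specified with reference attribute grammars), and the names Node, find_slot and bind are illustrative assumptions: fragments are syntax trees with named extension points (slots), a composition step substitutes a fragment into a slot, and a fragment contract guards the step with a context-sensitive check so that a violation is reported at the offending composition step rather than after compilation.

        # Illustrative sketch only; not the SkAT API, and the contract shown is a
        # stand-in for a real context-sensitive (e.g. type) constraint.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            kind: str                       # e.g. "method", "slot", "stmt"
            text: str = ""
            children: list = field(default_factory=list)

        def find_slot(tree, name):
            """Locate a named extension point (slot) in a fragment's syntax tree."""
            if tree.kind == "slot" and tree.text == name:
                return tree
            for child in tree.children:
                hit = find_slot(child, name)
                if hit is not None:
                    return hit
            return None

        def bind(host, slot_name, fragment, contract=lambda host, fragment: True):
            """Compose by substituting a fragment into a slot, guarded by a contract."""
            slot = find_slot(host, slot_name)
            if slot is None:
                raise ValueError(f"no slot named {slot_name!r} in host fragment")
            if not contract(host, fragment):
                raise ValueError("fragment contract violated: composition step rejected")
            # Invasive composition: the slot node is overwritten in place by the fragment.
            slot.kind, slot.text, slot.children = fragment.kind, fragment.text, fragment.children
            return host

        # Usage: a method fragment with a body slot is filled by a statement fragment;
        # the contract accepts only statement-kind fragments.
        host = Node("method", "log", [Node("slot", "body")])
        stmt = Node("stmt", 'print("entering log")')
        bind(host, "body", stmt, contract=lambda h, f: f.kind == "stmt")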

    Diagrammatic Representations in Domain-Specific Languages

    One emerging approach to reducing the labour and costs of software development favours the specialisation of techniques to particular application domains. The rationale is that programs within a given domain often share enough common features and assumptions to enable the incorporation of substantial support mechanisms into domain-specific programming languages and associated tools. Instead of being machine-oriented, algorithmic implementations, programs in many domain-specific languages (DSLs) are rather user-level, problem-oriented specifications of solutions. Taken further, this view suggests that the most appropriate representation of programs in many domains is diagrammatic, in a way which derives from existing design notations in the domain. This thesis conducts an investigation, using mathematical techniques and supported by case studies, of issues arising from the use of diagrammatic representations in DSLs. Its structure is conceptually divided into two parts: the first is concerned with semantic and reasoning issues; the second introduces an approach to describing the syntax and layout of diagrams, in a way which addresses some pragmatic aspects of their use. The empirical context of our work is that of IEC 1131-3, an industry-standard programming language for embedded control systems. The diagrammatic syntax of IEC 1131-3 consists of circuit (i.e. box-and-wire) diagrams, emphasising a data-flow view, and variants of Petri net diagrams, suited to a control-flow view. The first contribution of the thesis is the formalisation of the diagrammatic syntax and the semantics of IEC 1131-3 languages, as a prerequisite to the application of algebraic techniques. More generally, we outline an approach to the design of diagrammatic DSLs, emphasising compositionality in the semantics of the language so as to allow the development of simple proof systems for inferring properties which are deemed essential in the domain. The control-flow subset of IEC 1131-3 is carefully evaluated, and is subsequently re-designed, to yield a straightforward proof system for a restricted, yet commonly occurring, class of safety properties. A substantial part of the thesis deals with DSLs in which programs may be represented both textually and diagrammatically, as indeed is the case with IEC 1131-3. We develop a formalisation of the data-flow diagrams in IEC 1131-3.
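    A rough Python sketch of the data-flow (box-and-wire) view mentioned above; it is a simplification of my own rather than the thesis's formalisation of IEC 1131-3. Blocks are modelled as pure functions, wires connect sources to named block inputs, and the diagram receives a compositional meaning by firing each block once all of its inputs are available.

        # Illustrative simplification, not the thesis's formalisation. Assumes an
        # acyclic diagram; blocks fire once all of their wired inputs have values.
        blocks = {
            "AND1": lambda a, b: a and b,
            "OR1":  lambda a, b: a or b,
        }

        # Wires: (source, destination block, destination input name). A source is
        # either an external input ("x", "y", "z") or the output of another block.
        wires = [
            ("x", "AND1", "a"), ("y", "AND1", "b"),
            ("AND1", "OR1", "a"), ("z", "OR1", "b"),
        ]

        def evaluate(inputs):
            """Compositional evaluation: a block's value depends only on its wired inputs."""
            values = dict(inputs)
            pending = dict(blocks)
            while pending:
                for name, fn in list(pending.items()):
                    ready = {dst_in: values[src] for src, dst, dst_in in wires
                             if dst == name and src in values}
                    if len(ready) == sum(1 for _, dst, _ in wires if dst == name):
                        values[name] = fn(**ready)   # all inputs present: fire the block
                        del pending[name]
            return values

        print(evaluate({"x": True, "y": False, "z": True})["OR1"])   # -> True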

    Rhetorical ineptness in texts written by linguists

    Master's dissertation, Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão. Linguists working in text analysis hold that discourse signalling is one of the micro-factors relevant to the rhetorical organization of written discourse, without which a text may exhibit rhetorical incongruence to the detriment of its interpretability. Despite the consensus around this theoretical and practical position, the rhetorical organization of a number of texts written by linguists appears incongruent. In this dissertation I investigate the presence of rhetorical incongruence in linguistic discourse published in English, applying the theoretical frameworks of Hoey (1983) and Tadros (1985) to five textbook chapters written by the linguists Wallwork (1969), Corder (1974), Bolinger (1980), Widdowson (1979), and Gregory and Carroll (1978). The investigation revealed under-signalling and pseudo-signalling in the rhetorical structure of the analysed discourses, as circumstances of textual implausibility. I propose the typified micro-pattern Rhetorical-Organizational Predictions, characterized as regulatory metatexts, both global and local, as resources of persuasion and cooperation, and as binary (V) ~ (D) co-texts of written scientific discourse. This micro-pattern maximizes coherence-cohesion synergy, makes it feasible to produce textual information structures as a pedagogical resource, and helps persuade the reader towards the secularized modernization of knowledge, science and technology.

    A New Paradigm for Punctuation

    This is a comprehensive study of punctuation, particularly the uses to which it has been put as writing developed over the centuries and as it gradually evolved from an aid to oral delivery into a feature of texts that were read silently. The sudden need for standardization of punctuation that came with the start of printing spawned a small amount of interest in determining its purpose, but most works written after printing began were devoted mainly to helping people use punctuation rather than to discovering why it was being used. Gradually, two main views on its purpose developed: it was being used for rhetorical purposes, or it was needed to reveal the grammar in writing. These views are, to some extent, still in place. The community of linguists took little notice of writing until the last few centuries, and even less notice of punctuation. The result was that few studies were done on the underlying purpose of punctuation until the twentieth century, and even those were few and far between, most of them occurring only in the last thirty years. This study argues that neither rhetoric nor grammar is directly the basis for punctuation. Rather, punctuation responds to a schema that determines the order of words in spoken and written English, and it is unquestionably a linguistic concept. The special uses of the features of punctuation are discussed, as well as some anomalies in its use, some ideas for further studies, and some ideas for improving the teaching of punctuation.