725 research outputs found

    Stratified Grammar Systems with Simple and Dynamically Organized Strata

    Stratified grammar systems were introduced as a grammatical model of M. Minsky's hypothesis concerning how the mind works. This grammatical model is restricted, since it assumes that the strata of the mind are ordered in a fixed linear ordering. In this paper, we consider stratified grammar systems whose strata are organized dynamically, according to the current sentential form to be rewritten, to meet Minsky's hypothesis that the strata of the mind are organized dynamically according to the current task being processed. We study the generative power of these systems, which we call dynamic stratified grammar systems, and show that they generate the same family of languages as matrix grammars. We also consider simple systems, obtained by limiting each stratum to at most two components with only one rule each. We then show that every dynamic stratified grammar system can be represented by an equivalent simple one, which demonstrates the idea of generating complicated behaviors through more or less coordinated activities of entities with simpler behaviors.
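The mechanism described above can be illustrated with a toy sketch (the rules, strata, and selection policy below are invented for illustration and are not the paper's formalism): each stratum holds at most two components with one rule each, and the stratum applied next is chosen dynamically from the current sentential form.

```python
# Hypothetical toy sketch of dynamically organized strata. A component is a
# single context-free rule (nonterminal, replacement); a stratum is a list
# of at most two components, as in the "simple systems" of the abstract.
STRATA = {
    "S": [("S", "AB")],             # stratum activated when 'S' is present
    "A": [("A", "a"), ("B", "b")],  # stratum rewriting A and B terminally
}

def pick_stratum(form):
    """Dynamically select a stratum from the leftmost nonterminal present."""
    for symbol in form:
        if symbol in STRATA:
            return STRATA[symbol]
    return None

def derive(form="S"):
    """Rewrite until no stratum applies; returns the terminal string."""
    while (stratum := pick_stratum(form)) is not None:
        for lhs, rhs in stratum:
            form = form.replace(lhs, rhs, 1)
    return form

print(derive())  # "ab"
```

The point of the sketch is only the control structure: which rules fire next is decided by inspecting the sentential form, rather than by a fixed linear ordering of strata.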

    Language and Semogenesis in Philosophy: Realizational Patternings of Ideology in Lexico-grammar

    This thesis hypothesizes that the semogenetic properties of language belonging to the stratum of social context known in Systemic Functional Linguistics as ‘ideology’ are realized (at least partly) in the lexico-grammatical features of a text relating to non-categorical and grammatically metaphorical uses of modality and non-categorical uses of polarity. To test this hypothesis, a section of a text by the philosopher A.J. Ayer was selected. It was selected because it presents an argument in favor of a philosophical sense-making framework differing from that commonly held in society, thus making it a text conducive to the study of the semogenetic properties of language and the realizational patternings thereof. The text is analyzed in terms of its lexico-grammatical features, as well as how those features are a realization of semogenesis on the stratum of ideology.

    Prospects for Declarative Mathematical Modeling of Complex Biological Systems

    Declarative modeling uses symbolic expressions to represent models. With such expressions one can formalize high-level mathematical computations on models that would be difficult or impossible to perform directly on a lower-level simulation program written in a general-purpose programming language. Examples of such computations on models include model analysis, relatively general-purpose model-reduction maps, and the initial phases of model implementation, all of which should preserve or approximate the mathematical semantics of a complex biological model. The potential advantages are particularly relevant in the case of developmental modeling, wherein complex spatial structures exhibit dynamics at the molecular, cellular, and organogenic levels to relate genotype to multicellular phenotype. Multiscale modeling can benefit from both the expressive power of declarative modeling languages and the application of model-reduction methods to link models across scales. Based on previous work, we here define declarative modeling of complex biological systems by defining the operator algebra semantics of an increasingly powerful series of declarative modeling languages, including reaction-like dynamics of parameterized and extended objects; we define semantics-preserving implementation and semantics-approximating model-reduction transformations; and we outline a "meta-hierarchy" for organizing declarative models and the mathematical methods that can fruitfully manipulate them.
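The core idea, a model as a symbolic expression on which implementation passes operate, can be sketched minimally (the reaction format and names below are invented, and this is not the paper's operator algebra): a declarative list of reactions is compiled into mass-action ODE right-hand sides while preserving the model's meaning.

```python
# Hedged sketch of declarative-to-implementation translation. Each reaction
# is a symbolic triple: (reactants, products, rate-constant name).
model = [
    (("A", "B"), ("C",), "k1"),   # A + B -> C
    (("C",), ("A", "B"), "k2"),   # C -> A + B
]

def compile_odes(model):
    """Derive d[species]/dt as symbolic mass-action terms."""
    species = sorted({s for r, p, _ in model for s in r + p})
    odes = {s: [] for s in species}
    for reactants, products, k in model:
        rate = "*".join((k,) + reactants)   # e.g. "k1*A*B"
        for s in reactants:
            odes[s].append("-" + rate)      # consumed by the reaction
        for s in products:
            odes[s].append("+" + rate)      # produced by the reaction
    return {s: " ".join(terms) for s, terms in odes.items()}

for s, rhs in compile_odes(model).items():
    print(f"d{s}/dt = {rhs}")  # e.g. dC/dt = +k1*A*B -k2*C
```

Because the model stays symbolic until this final step, transformations such as model reduction could, in this style, be applied to the reaction list itself rather than to generated simulation code.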

    If you could see what I mean : descriptions of video in an anthropologist's video notebook

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 106-108). By Thomas G. Aguierre Smith. M.S.

    Modeling Appraisal in Film: A Social Semiotic Approach

    Ph.D. (Doctor of Philosophy)

    Acta Cybernetica : Volume 13. Number 1.

    Get PDF

    Search Interfaces on the Web: Querying and Characterizing

    Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

    Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

    Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used both in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

    Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. In this way, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
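The basic step of programmatic form submission that this line of work automates can be sketched as follows (the URL and field names are invented for illustration; in the thesis's setting they would come from the extracted interface description, not be hard-coded).

```python
# Hypothetical sketch: issuing a query through a web search interface by
# encoding field values as a GET form submission, the way a browser would.
from urllib.parse import urlencode

def build_form_query(action_url, fields):
    """Encode user-supplied field values as a form-submission URL."""
    return action_url + "?" + urlencode(fields)

url = build_form_query(
    "https://example.org/search",            # hypothetical form action
    {"title": "deep web", "year": "2008"},   # hypothetical field labels
)
print(url)  # https://example.org/search?title=deep+web&year=2008
```

A real deep-web crawler must additionally handle POST forms, client-side scripts, and the extraction of structured results from the returned pages, which is where the data model described above comes in.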

    Can I help you? : a systemic-functional exploration of service encounter interaction

    This exploratory study of the semiotic organization of service encounter interaction and its realization traces its roots back to Malinowskian/Firthian contextual theory and follows its development into register theory. It captures the most recent developments of register theory, which consider texts as organizations on three separate semiotic communication planes: genre, register and language. Specifically, it focusses on how, on the plane of genre, the global patternings of texts, i.e. SCHEMATIC STRUCTURES, are represented, and how they are realized by the planes of register and language, which are seen to underlie genre. It studies and develops the notion of genre and its realization using service encounter data.

    Scallop: A Language for Neurosymbolic Programming

    We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields solutions that are comparable to or better than state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions outperform these models in aspects such as runtime and data efficiency, interpretability, and generalizability.
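The provenance-semiring idea behind feature 3) can be illustrated independently of Scallop itself (the sketch below is not Scallop code or its implementation): Datalog-style facts carry tags, and tags are combined with the semiring's product on conjunction and its sum on disjunction. Here we use a max-times semiring over probabilities for recursive reachability.

```python
# Illustrative sketch of provenance-semiring reasoning: probabilistic edge
# facts tagged with probabilities, and path(a, c) derived to a fixpoint.
edges = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.1}

def reachable(edges):
    """Derive path facts, propagating max-times provenance tags."""
    path = dict(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), p1 in list(path.items()):
            for (b2, c), p2 in edges.items():
                if b == b2:
                    p = p1 * p2                        # conjunction: product
                    if p > path.get((a, c), 0.0):      # disjunction: max
                        path[(a, c)] = p
                        changed = True
    return path

paths = reachable(edges)
print(round(paths[(0, 2)], 2))  # 0.72: the path 0->1->2 beats the direct 0.1 edge
```

Because both semiring operations here are built from multiplication and max, the derived tag is a differentiable-friendly function of the input fact probabilities, which is the property a differentiable reasoning framework exploits.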