
    REL: a Rapidly Extensible Language System II. REL English

    REL, a Rapidly Extensible Language System, is an integrated information system operating in conversational interaction with the computer. It is intended for work with large or small data bases by means of highly individualized languages. The architecture of REL is based on theoretical assumptions about human information dynamics [1], among them the expanding process of conceptualization in working with data and the idiosyncratic language use of individual workers. These assumptions lead to a system that allows the construction of highly individualized languages which are closely knit with the structure of the data and which can be rapidly extended and augmented with new concepts and structures through a facile definitional capability. The REL language processor is designed to accommodate a variety of languages whose structural characteristics may be considerably divergent. REL English is one of the languages within the REL system. It is intended to facilitate sophisticated work with computers without the need for mastering programming languages. The structural power of REL English matches the extremely flexible organization of data in ring forms. Extensions of the basic REL English language can be achieved either by defining new concepts and structures in terms of existing ones or by adding new rules.
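
    The definitional capability described above can be pictured as a rewrite step in which user-defined terms are expanded into base-language phrases before parsing. The Python sketch below is purely illustrative: it is not the REL implementation, and the example terms and definitions are hypothetical; it only shows the idea of extending a language by definition rather than by programming.

        # Illustrative sketch of a definitional extension step (not REL's code).
        # User-defined terms are rewritten into base-language phrases before parsing.

        definitions = {
            # hypothetical definitions a user might add during a session
            "senior staff": "employees whose age is greater than 55",
            "department heads": "employees who manage a department",
        }

        def expand(query: str, defs: dict) -> str:
            """Repeatedly replace defined terms until only base vocabulary remains."""
            changed = True
            while changed:
                changed = False
                for term, body in defs.items():
                    if term in query:
                        query = query.replace(term, body)
                        changed = True
            return query

        print(expand("list the senior staff among the department heads", definitions))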

    WAQS: a web-based approximate query system

    The Web is often viewed as a gigantic database holding vast stores of information and providing ubiquitous access to end users. Since its inception, the Internet has experienced explosive growth both in the number of users and in the amount of content available on it. However, searching for information on the Web has become increasingly difficult. Although query languages have long been part of database management systems, the standard Structured Query Language (SQL) is not suitable for retrieving Web content. In this dissertation, a new technique for document retrieval on the Web is presented. This technique is designed to allow more precise retrieval and hence to reduce the number of matches returned by typical search engines. Its main objective is to allow a query to be based not just on keywords but also on the location of the keywords within the logical structure of a document. In addition, the technique provides approximate search capabilities based on the notions of Distance and Variable Length Don't Cares. The proposed techniques have been implemented in a system, called the Web-Based Approximate Query System, which contains an SQL-like query language called the Web-Based Approximate Query Language. The Web-Based Approximate Query Language has also been integrated with EnviroDaemon, an environmental domain-specific search engine, giving EnviroDaemon more detailed searching capabilities than keyword-based search alone. Implementation details, technical results, and future work are presented in this dissertation.
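
    A minimal Python sketch of the two ideas described above, structure-aware retrieval and variable-length don't cares, is given below. It is purely illustrative: the element names, the '*' wildcard syntax, and the helper functions are assumptions made for the example, not the actual WAQL syntax or implementation.

        # Sketch of structure-aware approximate matching (illustrative only).
        import re
        from html.parser import HTMLParser

        class SectionCollector(HTMLParser):
            """Collects text per logical element (title, headings, paragraphs)."""
            def __init__(self):
                super().__init__()
                self.current = None
                self.sections: dict = {}

            def handle_starttag(self, tag, attrs):
                if tag in ("title", "h1", "h2", "p"):
                    self.current = tag

            def handle_endtag(self, tag):
                if tag == self.current:
                    self.current = None

            def handle_data(self, data):
                if self.current and data.strip():
                    self.sections.setdefault(self.current, []).append(data.strip())

        def vldc_to_regex(pattern: str) -> re.Pattern:
            """Translate '*' (a variable-length don't-care) into a lazy regex gap."""
            parts = [re.escape(p) for p in pattern.split("*")]
            return re.compile(".*?".join(parts), re.IGNORECASE)

        def query(html: str, element: str, pattern: str) -> list:
            """Return fragments of the given logical element that match the pattern."""
            collector = SectionCollector()
            collector.feed(html)
            rx = vldc_to_regex(pattern)
            return [t for t in collector.sections.get(element, []) if rx.search(t)]

        page = "<html><title>Air quality report</title><p>Ozone levels near the river</p></html>"
        print(query(page, "title", "air*report"))  # match restricted to the <title> element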

    M2: An architectural system for computer design

    The number of embedded computer systems has been growing rapidly as system costs have declined and capabilities have increased. The rationale behind design decisions for embedded systems is often informal and based on estimates of key values rather than actual measurements. Because of the small number of programs typically executed by an embedded processor, significant opportunities for optimization exist. M2 is an architectural system for computer design. It consists of language tools, architectural tools, and implementation tools. The language tools gather information about programs at compile time and at execution time. This information is used by the implementation tools to generate candidate processor implementations, which are evaluated with the architectural tools. The evaluation involves comparing the size, speed, power, cost, and reliability of candidates to constraints set by the M2 user. An M2 design is based on actual program measurements and is documented so that its derivation can be publicly considered. It is generated in less time and with fewer errors than with manual methods. The M2 project is an extension of work being performed at Stanford University on a workbench for computer architects and of work being performed at the University of Southwestern Louisiana on plausibility-driven design.
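
    The evaluation step described above can be illustrated with a small Python sketch that filters candidate implementations against user-supplied constraints on size, speed, power, cost, and reliability. The metric names, numbers, and thresholds are hypothetical; this is not the M2 tooling, only a sketch of the kind of constraint check it performs.

        # Illustrative constraint check over candidate implementations (not the M2 tools).
        from dataclasses import dataclass

        @dataclass
        class Candidate:
            name: str
            size_mm2: float      # die area
            speed_mips: float    # measured on the embedded program mix
            power_mw: float
            cost_usd: float
            reliability: float   # estimated score in [0, 1]

        # hypothetical constraints supplied by the user
        constraints = {
            "size_mm2": lambda v: v <= 50.0,
            "speed_mips": lambda v: v >= 20.0,
            "power_mw": lambda v: v <= 500.0,
            "cost_usd": lambda v: v <= 15.0,
            "reliability": lambda v: v >= 0.9,
        }

        def feasible(c: Candidate) -> bool:
            """A candidate survives only if every constraint is met."""
            return all(check(getattr(c, name)) for name, check in constraints.items())

        candidates = [
            Candidate("baseline", 42.0, 25.0, 480.0, 12.5, 0.95),
            Candidate("wide-issue", 68.0, 40.0, 900.0, 22.0, 0.97),
        ]
        for c in candidates:
            print(c.name, "meets constraints" if feasible(c) else "rejected")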

    A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge

    We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE the goal is to automatically identify the type of logical relation between two input texts and, in particular, to prove the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and accuracy of the results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported by problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, more experimental sources, e.g., manually defined presupposition resolutions or axiomatized general and common-sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition for the correctness and traceability of results.
    Comment: 25 pages, 10 figures
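
    As a toy illustration of how background knowledge can support an entailment decision, the Python sketch below checks whether every content word of a hypothesis is covered by the text once a small hand-coded hypernym table (a stand-in for resources like WordNet) is consulted. The actual system uses first-order inference with automated reasoning tools; the word lists and the coverage criterion here are assumptions made only for the example.

        # Toy knowledge-supported entailment check (illustrative only; the paper
        # uses first-order inference with automated reasoning tools, not this).

        # Hand-coded stand-in for background knowledge such as WordNet hypernyms.
        hypernyms = {
            "poodle": "dog",
            "dog": "animal",
            "sedan": "car",
            "car": "vehicle",
        }

        def generalizations(word: str) -> set:
            """All terms reachable via hypernym links, including the word itself."""
            seen = {word}
            while word in hypernyms:
                word = hypernyms[word]
                seen.add(word)
            return seen

        def entails(text: str, hypothesis: str) -> bool:
            """The hypothesis counts as entailed if each of its words is present in
            the text or is a generalization of some word in the text."""
            covered = set().union(*(generalizations(w) for w in text.lower().split()))
            return all(w in covered for w in hypothesis.lower().split())

        print(entails("a poodle sleeps", "a dog sleeps"))      # True
        print(entails("a poodle sleeps", "a vehicle sleeps"))  # False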

    Multilingual XML-Based Named Entity Recognition for E-Retail Domains

    We describe the multilingual Named Entity Recognition and Classification (NERC) subpart of an e-retail product comparison system which is currently under development as part of the EU-funded project CROSSMARC. The system must be rapidly extensible, both to new languages and to new domains. To achieve this aim we use XML as our common exchange format, and the monolingual NERC components use a combination of rule-based and machine-learning techniques. It has been challenging to process web pages which contain heavily structured data where text is intermingled with HTML and other code. Our preliminary evaluation results demonstrate the viability of our approach.
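
    The role of XML as the common exchange format can be illustrated with a minimal rule-based pass that wraps recognized entities in XML elements. The Python sketch below is only illustrative: the entity classes, patterns, and tag names are hypothetical, and the CROSSMARC components combine such rules with machine-learned models rather than relying on patterns alone.

        # Minimal rule-based named-entity pass emitting XML annotations (illustrative).
        import re

        # Hypothetical surface patterns for one e-retail domain (laptop offers).
        patterns = {
            "PRICE": re.compile(r"(?:EUR|USD|\$)\s?\d+(?:\.\d{2})?"),
            "CAPACITY": re.compile(r"\d+\s?(?:GB|MB|TB)", re.IGNORECASE),
        }

        def annotate(text: str) -> str:
            """Wrap every pattern match in an XML element naming its entity class."""
            for label, rx in patterns.items():
                text = rx.sub(lambda m, l=label: f"<NE type='{l}'>{m.group(0)}</NE>", text)
            return text

        print(annotate("Ultralight laptop, 512 GB SSD, now USD 899.00"))
        # Ultralight laptop, <NE type='CAPACITY'>512 GB</NE> SSD, now <NE type='PRICE'>USD 899.00</NE>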

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plugin architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained, so the new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to the new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
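
    The architectural ideas mentioned above, a data model decoupled from the interface and a framework in which formats, scripts, and visualizations are plugins, can be sketched generically. The Python sketch below is conceptual only; ImageJ2 itself is written in Java on the SciJava plugin framework, and the names and registry shape here are assumptions made for the example.

        # Conceptual sketch of decoupling a data model from interfaces via a plugin
        # registry (not the actual ImageJ2/SciJava API, which is Java).
        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class NDImage:
            """A minimal stand-in for an N-dimensional dataset: shape plus samples."""
            shape: tuple
            samples: list = field(default_factory=list)

        # Plugins are plain callables registered by extension point, so formats,
        # scripts, or visualizations can be added without touching the core model.
        registry = {"io": {}, "ops": {}}

        def plugin(kind: str, name: str):
            def register(fn: Callable) -> Callable:
                registry[kind][name] = fn
                return fn
            return register

        @plugin("ops", "mean")
        def mean_intensity(img: NDImage) -> float:
            return sum(img.samples) / len(img.samples)

        img = NDImage(shape=(2, 2), samples=[1.0, 2.0, 3.0, 4.0])
        print(registry["ops"]["mean"](img))  # 2.5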

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated into a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu
    Comment: 25 pages, 8 figures, 3 tables
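
    The representation described above, fielded bibliographic records whose links to internal and external resources are kept as document properties with attributes, can be pictured with the short Python sketch below. The record contents, property names, and URL are hypothetical; this is not the ADS implementation, only an illustration of the data layout.

        # Conceptual sketch of fielded records with link properties (illustrative only).
        from dataclasses import dataclass, field

        @dataclass
        class Record:
            bibcode: str
            fields: dict                      # fielded data: title, authors, ...
            properties: dict = field(default_factory=dict)
            # properties map a link type to its attributes, e.g. an access URL

        records = [
            Record(
                bibcode="1999XMPL.......1A",  # hypothetical identifier
                fields={"title": "An example article", "authors": "Author, A."},
                properties={"electronic": {"url": "https://example.org/fulltext"}},
            )
        ]

        def links_of_type(kind: str) -> list:
            """Return (bibcode, url) pairs for records carrying the given property."""
            return [(r.bibcode, r.properties[kind]["url"])
                    for r in records if kind in r.properties]

        print(links_of_type("electronic"))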