861 research outputs found

    An operational system for subject switching between controlled vocabularies: A computational linguistics approach

    The NASA Lexical Dictionary (NLD), a system that automatically translates input subject terms to those of NASA, was developed in four phases. Phase One provided Phrase Matching, a context-sensitive word-matching process that matches input phrase words with any NASA Thesaurus posting (i.e., index) term or Use reference. Other Use references have been added to enable the matching of synonyms, variant spellings, and some words with the same root. Phase Two provided the capability of translating any individual DTIC term to one or more NASA terms having the same meaning. Phase Three provided NASA terms having equivalent concepts for two or more DTIC terms, i.e., coordinations of DTIC terms. Phase Four was concerned with indexer feedback and maintenance. Although the original NLD construction involved much manual data entry, ways were found to automate nearly everything except the intellectual decision-making processes. In addition to finding improved ways to construct a lexical dictionary, applications for the NLD have been found and are being developed.
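The phrase-matching step described above can be sketched as a longest-match lookup against a thesaurus of posting terms and Use references. The terms, table, and function below are illustrative stand-ins, not the NLD's actual data or code:

```python
# Illustrative sketch of thesaurus phrase matching (not the actual NLD code).
# A tiny "thesaurus" maps Use references and variant spellings to preferred terms.
THESAURUS = {
    "spacecraft": "SPACECRAFT",
    "space vehicle": "SPACECRAFT",   # Use reference (synonym)
    "aluminium": "ALUMINUM",         # variant spelling
    "aluminum": "ALUMINUM",
}

def match_phrase(phrase: str) -> list[str]:
    """Return preferred terms for the longest thesaurus entries in the phrase."""
    words = phrase.lower().split()
    matches = []
    i = 0
    while i < len(words):
        # Try the longest candidate span first (context-sensitive matching).
        for j in range(len(words), i, -1):
            candidate = " ".join(words[i:j])
            if candidate in THESAURUS:
                matches.append(THESAURUS[candidate])
                i = j
                break
        else:
            i += 1  # no entry starts at this word; move on
    return matches

print(match_phrase("aluminium space vehicle structures"))
# ['ALUMINUM', 'SPACECRAFT']
```

Preferring the longest span ensures a multi-word Use reference like "space vehicle" wins over any single-word match inside it.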

    Conceptual development of custom, domain-specific mashup platforms

    Despite the common claim by mashup platforms that they enable end-users to develop their own software, in practice end-users still don't develop their own mashups, as the highly technical or nonexistent user bases of today's mashup platforms testify. The key shortcoming of current platforms is their general-purpose nature, which privileges expressive power over intuitiveness. In our prior work, we have demonstrated that a domain-specific mashup approach, which privileges intuitiveness over expressive power, has much more potential to enable end-user development (EUD). The problem is that developing mashup platforms - domain-specific or not - is complex and time-consuming. In addition, domain-specific mashup platforms by their very nature target only a small user base, that is, the experts of the target domain, which makes their development unsustainable if it is not adequately supported and automated. With this article, we aim to make the development of custom, domain-specific mashup platforms cost-effective. We describe a mashup tool development kit (MDK) that is able to automatically generate a mashup platform (comprising custom mashup and component description languages and design-time and runtime environments) from a conceptual design and to provision it as a service. We equip the kit with a dedicated development methodology and demonstrate the applicability and viability of the approach with the help of two case studies. © 2014 ACM

    Special Libraries, February 1966

    Volume 57, Issue 2
    https://scholarworks.sjsu.edu/sla_sl_1966/1001/thumbnail.jp

    Large-Scale Pattern-Based Information Extraction from the World Wide Web

    Extracting information from text is the task of obtaining structured, machine-processable facts from information that is mentioned in an unstructured manner. It thus allows systems to automatically aggregate information for further analysis, efficient retrieval, automatic validation, or appropriate visualization. This work explores the potential of using textual patterns for Information Extraction from the World Wide Web.
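As a toy illustration of pattern-based extraction, even a single Hearst-style "such as" pattern can pull (instance, class) facts out of free text. The pattern and names below are assumptions for illustration, not the system described in this work:

```python
import re

# One Hearst-style pattern: "CLASS such as X, Y and Z" (illustrative only).
PATTERN = re.compile(r"(\w+) such as ([^.;]+)")

def extract_isa(text: str) -> list[tuple[str, str]]:
    """Extract (instance, class) pairs matched by the 'such as' pattern."""
    facts = []
    for m in PATTERN.finditer(text):
        cls = m.group(1)                    # word immediately before "such as"
        for inst in re.split(r",\s*|\s+and\s+", m.group(2)):
            facts.append((inst.strip(), cls))
    return facts

print(extract_isa("He studied languages such as Python, Haskell and Rust."))
# [('Python', 'languages'), ('Haskell', 'languages'), ('Rust', 'languages')]
```

Web-scale systems like the one described apply many such patterns, learned rather than hand-written, and aggregate the extracted facts across documents.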

    Feasibility of using citations as document summaries

    The purpose of this research is to establish whether it is feasible to use citations as document summaries. People are good at creating and selecting summaries and are generally the standard for evaluating computer-generated summaries. Citations can be characterized as concept symbols or short summaries of the document they are citing. Similarity metrics have been used in retrieval and text summarization to determine how alike two documents are, but they have never been compared to what human subjects judge to be similar between two documents. If similarity metrics reflect human judgment, then we can mechanize the selection of citations that act as short summaries of the document they are citing. The research approach was to gather rater data comparing document abstracts to citations of the same document and then to statistically compare those results to several document metrics: frequency count, similarity metric, citation location, and type of citation. There were two groups of raters, subject experts and non-experts. Both groups were asked to evaluate seven parameters between abstract and citations: purpose, subject matter, methods, conclusions, findings, implications, readability, and understandability. Raters identified how strongly the citation represented the content of the abstract on a five-point Likert scale. Document metrics were collected for frequency count, cosine, and similarity metric between abstracts and associated citations. In addition, data were collected on the location and type of each citation. Location was identified and dummy-coded for introduction, method, discussion, review of the literature, and conclusion. Citations were categorized and dummy-coded by whether they refuted, noted, supported, reviewed, or applied information about the cited document.
The results show there is a relationship between some similarity metrics and human judgment of similarity.
Ph.D., Information Studies -- Drexel University, 200
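The cosine metric mentioned above can be sketched with bag-of-words vectors. The tokenization and example strings are illustrative, not the study's actual preprocessing:

```python
import math
from collections import Counter

# Illustrative bag-of-words cosine similarity between two short texts
# (toy whitespace tokenization; not the study's actual pipeline).
def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)              # shared-term overlap
    na = math.sqrt(sum(c * c for c in va.values()))   # vector norms
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

abstract = "citations can serve as short summaries of the cited document"
citation = "this work showed citations act as short summaries"
print(round(cosine(abstract, citation), 3))
```

A score near 1.0 means the citation's wording closely overlaps the abstract; the study then asks whether such scores track human similarity ratings.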

    Conceptual Framework for Designing Virtual Field Trip Games

    This thesis aimed to provide design models that offer an alternative to a field trip when one becomes impossible for reasons such as cost and time constraints. Virtual field trip games are a relatively new means of creating virtual field trips in game environments by adding game aspects to learning aspects to enhance the learning experience. Simply combining game and learning aspects will not guarantee the desired effect of virtual field trips; theoretical and logical connections should be established to interweave the two. This thesis proposes a design framework that establishes three links between game design aspects and learning aspects. The three links are constructed by modelling the experiential learning theory (ELT), the gameplay, and the game world. ELT modelling quantifies the theory into the internal economy mechanic and balances the level of game task difficulty against the player's ability through game machinations; game modelling links the learning process to gameplay; and world modelling connects the field environment to the game environment. The internal economy mechanic and its components (resources, internal mechanics, feedback loops) formulate equations that define generic player interactions and identify indicators that capture evidence of achievement via a mathematical (evaluation) model. The game modelling includes skill models for designing two important higher-order skills (decision-making and teamwork) and connects them to the evaluation model. The game world is modelled by defining its variables and relationship rules to connect both environments (game and field), expanding the evaluation model.
    The framework is supported by essential learning theories (ELT, task-based learning, some aspects of social learning) and pedagogical aspects (assessment, feedback, field-based structure, higher-order skills) and is connected to the key game elements of field-based learning (interaction, multimodal presentation, control of choice, etc.) along with suitable game mechanics. The two research studies conducted as part of this thesis found that the design framework is useful and usable and provides connections between learning and game aspects, and that the VFTG designed with the framework improved learning performance while providing motivation and presence. This suggests the effectiveness of the framework.
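The internal-economy balancing idea above, difficulty tracking player ability through a negative feedback loop, can be sketched in a few lines. The scale, update rule, and constants are illustrative assumptions, not the thesis's actual machinations model:

```python
# Toy internal-economy feedback loop (illustrative, not the thesis model):
# task difficulty is a resource on a 0-10 scale that adapts to the player.
def run_economy(ability: int, steps: int) -> int:
    """Adjust task difficulty toward the player's ability over several tasks."""
    difficulty = 5                               # start at medium difficulty
    for _ in range(steps):
        success = ability >= difficulty          # did the player complete the task?
        # Negative feedback loop: success raises difficulty, failure lowers it.
        difficulty += 1 if success else -1
        difficulty = max(0, min(10, difficulty)) # clamp to the scale
    return difficulty

print(run_economy(ability=8, steps=20))
# → 9 (difficulty settles just above the player's ability)
```

The negative feedback keeps difficulty oscillating tightly around the player's ability, which is the balancing behaviour the internal economy mechanic is meant to provide.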

    Special Libraries, January 1977

    Volume 68, Issue 1
    https://scholarworks.sjsu.edu/sla_sl_1977/1000/thumbnail.jp

    Knowledge-based systems for knowledge management in enterprises: Workshop held at the 21st Annual German Conference on AI (KI-97)
