89,756 research outputs found

    Intelligent Personalized Searching

    Search engines are a very useful tool for almost everyone nowadays. People use them to search for personal finance advice, restaurants, electronic products, and travel information, to name a few. As helpful as search engines are at providing information, they can also manipulate people's behavior, because most people trust online information without question. Furthermore, ordinary users usually pay attention only to the highest-ranking pages in the search results. Knowing this predictable user behavior, search engine providers such as Google and Yahoo take advantage of it to generate profit. Search engine providers are commercial enterprises whose goal is to generate profit, and an easy way for them to do so is to rank up particular web pages that promote their own products and services or those of their paying customers. The results from a search engine can therefore be misleading. The goal of this project is to filter the bias from search results and provide the best matches on behalf of users' interests.
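    A bias-filtering re-ranker of the kind the project describes could look like the following minimal sketch. All names, fields, and weights here are illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch: re-rank search results toward a user's declared
# interests while demoting paid placements. Weights are invented.

def rerank(results, interests, sponsored_penalty=0.5):
    """results: list of dicts with 'title', 'score', 'sponsored'."""
    def adjusted(r):
        score = r["score"]
        # Boost results whose title mentions a declared user interest.
        if any(term in r["title"].lower() for term in interests):
            score *= 1.5
        # Demote results marked as paid placements.
        if r["sponsored"]:
            score *= sponsored_penalty
        return score
    return sorted(results, key=adjusted, reverse=True)

results = [
    {"title": "Sponsored travel deals", "score": 0.9, "sponsored": True},
    {"title": "Independent travel guide", "score": 0.7, "sponsored": False},
]
ranked = rerank(results, interests=["travel"])
```

    Under these assumed weights, the organic page outranks the higher-scored sponsored one, which is the behavior the abstract argues for.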

    Italian center for Astronomical Archives publishing solution: modular and distributed

    The Italian center for Astronomical Archives aims to provide astronomical data resources as interoperable services based on IVOA standards. Its VO expertise and knowledge come from active participation in the IVOA and the VO at the European and international levels, with a twofold goal: to learn from the collaboration and to provide input to the community. The first solution for building an easy-to-configure and easy-to-maintain resource publisher conformant to VO standards proved to be too optimistic. For this reason it was necessary to rethink the architecture as a modular system built around the messaging concept, where each modular component speaks to the other interested parties through a system of broker-managed queues. The first implemented protocol, the Simple Cone Search, demonstrates the messaging task architecture: it connects the parametric HTTP interface to the database backend access module and the logging module, and allows multiple cone search resources to be managed together through a configuration manager module. Although relatively young, the system has already shown the required flexibility when the database backend changed from MySQL to PostgreSQL+PgSphere. Another implementation test was made to leverage task distribution over multiple servers to simultaneously serve FITS cube direct linking, cube cutouts, and cube positional merging. Currently the implementation of the SIA-2.0 standard protocol is ongoing, while for TAP we will be adapting the TAPlib library. Alongside these tools, a first administration tool (TASMAN) has been developed to ease the building and maintenance of TAP_SCHEMA-ta, including ObsCore maintenance capability.
    Future work will be devoted to widening the range of VO protocols covered by the set of available modules, improving configuration management, and developing special-purpose modules common to all the service components.
    Comment: SPIE Astronomical Telescopes + Instrumentation 2018, Software and Cyberinfrastructure for Astronomy V, pre-publishing draft proceedings (reduced abstract)
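    The broker-managed queue pattern described above can be sketched in a few lines. Here Python's in-process `queue.Queue` stands in for a real message broker, and all module and field names are illustrative assumptions, not the actual IA2 code:

```python
# Minimal sketch of modules that communicate only through broker queues:
# the HTTP frontend publishes a cone-search task, the database backend
# consumes it, and a logging module observes via its own queue.
import queue

broker = {"conesearch": queue.Queue(), "log": queue.Queue()}

def http_frontend(ra, dec, sr):
    """Parametric interface: publish a cone-search task to the broker."""
    broker["conesearch"].put({"ra": ra, "dec": dec, "sr": sr})

def db_backend():
    """Backend module: consume a task independently of the frontend;
    the modules share only the queue contract, so swapping the
    database (e.g. MySQL -> PostgreSQL) leaves the frontend untouched."""
    task = broker["conesearch"].get()
    broker["log"].put(f"served cone search at ({task['ra']}, {task['dec']})")
    return {"matches": [], "center": (task["ra"], task["dec"])}

http_frontend(10.68, 41.27, 0.05)
result = db_backend()
log_entry = broker["log"].get()
```

    The point of the design, as the abstract notes, is that each component speaks only to the queues, which is what made the backend swap painless.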

    EXPERT SYSTEMS

    In recent decades, IT and computer systems have evolved rapidly in the field of economic informatics. The goal is to create user-friendly information systems that respond promptly and accurately to requests. Informatics systems have evolved into decision-support systems, and such systems are converted, based on gained experience, into expert systems for creative solving of the problems an organization faces. Expert systems aim to reproduce human reasoning based on the expertise obtained from experts: they store knowledge, establish links between pieces of knowledge, and have the knowledge and ability to perform human intellectual activities. From the standpoint of informatics development, expert systems are based on the principle of separating the knowledge from the program that processes it. Expert systems simulate human experts' reasoning on the knowledge available to them, multiply that knowledge, and explain their own lines of reasoning.
    Keywords: expert systems, artificial intelligence, knowledge, expertise
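    The separation of the knowledge base from the treating program, and the ability to explain a line of reasoning, can be sketched with a tiny forward-chaining engine. The rules and facts below are invented examples:

```python
# Knowledge base: (premises, conclusion) rules, kept separate from the
# inference engine so domain knowledge can change without touching code.
rules = [
    ({"revenue_declining", "costs_rising"}, "margin_at_risk"),
    ({"margin_at_risk"}, "recommend_cost_review"),
]

def infer(facts, rules):
    """Forward chaining: fire every rule whose premises hold,
    recording the line of reasoning so it can be explained."""
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(premises)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"revenue_declining", "costs_rising"}, rules)
```

    The `trace` list is what lets such a system "explain its own line of reasoning": each entry records which rule fired and why.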

    Search Engines Giving You Garbage? Put A Corc In It, Implementing The Cooperative Online Resource Catalog

    This paper presents an implementation strategy for adding Internet resources to a library online catalog using OCLC's Cooperative Online Resource Catalog (CORC). Areas of consideration include deciding which electronic resources to include in the online catalog and how to select them. The value and importance of pathfinders in creating electronic bibliographies, and the role of library staff in updating them, are introduced. Using an electronic suggestion form as a means of Internet resource collection development is another innovative method of enriching library collections. Education and training for cataloging staff on Dublin Core elements are also needed. Attention should be paid to the needs of distance learners in providing access to Internet resources. The significance of evaluating the appropriateness of Internet resources for library collections is emphasized.
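    The Dublin Core elements that cataloging staff would be trained on form a flat, fifteen-element metadata vocabulary. As a small sketch, a simple DC description of an Internet resource is just a set of element/value pairs; the values below are invented for illustration:

```python
# Hypothetical simple Dublin Core record for an Internet resource.
# Element names are from the DC element set; values are invented.
record = {
    "dc:title": "Guide to Distance Learning Resources",
    "dc:creator": "Example Library",
    "dc:type": "Text",
    "dc:format": "text/html",
    "dc:identifier": "http://example.org/pathfinder",
    "dc:subject": "Distance education -- Bibliography",
}

def filled_elements(rec):
    """Elements a cataloger has supplied, in element-set order."""
    return sorted(k for k, v in rec.items() if v)
```

    CORC's appeal, as the paper discusses, was letting staff produce such records cooperatively rather than from scratch.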

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated in a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu
    Comment: 25 pages, 8 figures, 3 tables
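    The "document properties and their attributes" idea can be sketched as follows. The field and property names are illustrative assumptions, not the actual ADS schema, and the bibcode is invented:

```python
# Sketch: a bibliographic record carries fielded data plus a set of
# properties whose attributes describe internal and external links.
record = {
    "bibcode": "2000TEST..100....1X",  # invented identifier
    "fields": {"title": "An Example Bibliographic Record"},
    "properties": {
        "ABSTRACT": {"internal": True},
        "EJOURNAL": {"internal": False, "url": "http://example.org/article"},
    },
}

def external_links(rec):
    """Return the URLs of all externally hosted resources."""
    return [attrs["url"] for attrs in rec["properties"].values()
            if not attrs.get("internal", True)]
```

    Keeping link information as per-record properties, rather than in the bibliographic fields, is what lets mirrors clone the corpus while resolving resources locally or remotely.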

    Genesis of Altmetrics or Article-level Metrics for Measuring Efficacy of Scholarly Communications: Current Perspectives

    Article-level metrics (ALMs), or altmetrics, have become a new trendsetter in recent times for measuring the impact of scientific publications and their social outreach to intended audiences. Popular social networks such as Facebook, Twitter, and LinkedIn and social bookmarking services such as Mendeley and CiteULike are now widely used for communicating research to larger transnational audiences. In 2012, the San Francisco Declaration on Research Assessment was signed by scientific and research communities across the world. This declaration gives preference to ALMs, or altmetrics, over the traditional but flawed journal impact factor (JIF)-based assessment of career scientists. The JIF does not consider impact or influence beyond citation counts, and those counts are reflected only through Thomson Reuters' Web of Science database. Furthermore, the JIF provides an indicator related to the journal as a whole, not to an individual published paper. Altmetrics thus become an alternative metric for performance assessment of individual scientists and their scholarly publications. This paper provides a glimpse of the genesis of altmetrics in measuring the efficacy of scholarly communications and highlights the available altmetric tools, and the social platforms linked to them, which are widely used to derive altmetric scores for scholarly publications. The paper argues that institutions and policy makers should pay more attention to altmetrics-based indicators for evaluation purposes, but cautions that proper safeguards and validation are needed before their adoption.
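    An altmetric score of the kind these tools derive is typically a weighted sum of per-source attention counts. The sketch below is illustrative only; the source names and weights are invented assumptions, not the scheme of any real provider:

```python
# Hypothetical weighted altmetric score over per-source counts.
WEIGHTS = {"tweets": 0.25, "facebook": 0.25, "mendeley_readers": 1.0,
           "news": 8.0}

def altmetric_score(counts):
    """Combine per-source attention counts into a single score;
    unknown sources contribute nothing."""
    return sum(WEIGHTS.get(src, 0.0) * n for src, n in counts.items())

score = altmetric_score({"tweets": 40, "mendeley_readers": 12, "news": 1})
```

    The choice of weights is exactly where the paper's call for safeguards and validation applies: the score is only as meaningful as the weighting it encodes.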

    An operational system for subject switching between controlled vocabularies: A computational linguistics approach

    The NASA Lexical Dictionary (NLD), a system that automatically translates input subject terms into those of NASA, was developed in four phases. Phase One provided Phrase Matching, a context-sensitive word-matching process that matches input phrase words with any NASA Thesaurus posting (i.e., index) term or Use reference. Other Use references have been added to enable the matching of synonyms, variant spellings, and some words with the same root. Phase Two provided the capability of translating any individual DTIC term into one or more NASA terms having the same meaning. Phase Three provided NASA terms having equivalent concepts for two or more DTIC terms, i.e., coordinations of DTIC terms. Phase Four was concerned with indexer feedback and maintenance. Although the original NLD construction involved much manual data entry, ways were found to automate nearly everything but the intellectual decision-making processes. In addition to finding improved ways to construct a lexical dictionary, applications for the NLD have been found and are being developed.
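    The Phase Two and Phase Three translations can be pictured as two lookup tables: one mapping single terms, one mapping coordinations of two or more input terms to a single concept. All of the mappings below are invented examples, not actual NLD entries:

```python
# Sketch of vocabulary switching: single-term translations plus
# coordinations, where a combination of DTIC terms maps to one concept.
single = {"LASERS": ["LASERS"], "ROCKET MOTORS": ["ROCKET ENGINES"]}
coordinated = {frozenset({"WINGS", "ROTATION"}): ["ROTARY WINGS"]}

def translate(terms):
    out = []
    # Phase Three: coordinations of two or more input terms.
    for combo, nasa in coordinated.items():
        if combo <= set(terms):
            out.extend(nasa)
    # Phase Two: individual term-to-term translation.
    for t in terms:
        out.extend(single.get(t, []))
    return out
```

    Checking coordinations before single terms mirrors the idea that a combination of input terms can denote a concept that none of them carries alone.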

    Machine aided indexing from natural language text

    The NASA Lexical Dictionary (NLD) Machine Aided Indexing (MAI) system was designed to (1) reuse the indexing of the Defense Technical Information Center (DTIC); (2) reuse the indexing of the Department of Energy (DOE); and (3) reduce the time required for original indexing. This was done by automatically generating appropriate NASA Thesaurus terms, either from the other agencies' index terms or, for original indexing, from document titles and abstracts. The NASA STI Program staff devised two different ways to generate thesaurus terms from text. The first group of programs identified noun phrases by a parsing method that allowed for conjunctions and certain prepositions, on the assumption that indexable concepts are found in such phrases. Results were not always satisfactory, and it was noted that indexable concepts often occur outside of noun phrases. The first method also proved too slow for the ultimate goal of interactive (online) MAI. The second group of programs used the knowledge base (KB), word proximity, and frequency of word and phrase occurrence to identify indexable concepts. Both methods are described and illustrated. Online MAI has been achieved, as well as several spinoff benefits, which are also described.
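    The frequency component of the second approach can be sketched as follows: score knowledge-base phrases by how often they occur in a title and abstract, and keep those above a threshold. The knowledge-base entries and threshold here are invented for illustration:

```python
# Sketch of frequency-based concept identification against a KB of
# candidate phrases (entries invented; the real NLD KB also used
# word proximity, which this minimal version omits).
from collections import Counter

knowledge_base = {"remote sensing", "data systems", "indexing"}

def candidate_concepts(text, kb, min_count=2):
    text = text.lower()
    counts = Counter({phrase: text.count(phrase) for phrase in kb})
    return sorted(p for p, n in counts.items() if n >= min_count)

text = ("Remote sensing data systems support indexing; remote sensing "
        "archives grow, and data systems must keep pace.")
concepts = candidate_concepts(text, knowledge_base)
```

    Unlike the noun-phrase parser, this kind of scan is cheap enough to run interactively, which is consistent with the abstract's note that the second method enabled online MAI.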