21 research outputs found

    ITX Programmer's Guide

    Full text link
    ITX is a set of Java packages that allow one to write telephony applications in Java. (Some sample applications are provided with the ITX distribution.) This Guide introduces the reader to the ITX Application Programming Interface (API), starting with an overview. Subsequent sections explain each component of the API in more detail.
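
    The API itself is not reproduced in this abstract, so the sketch below only suggests the shape of a minimal ITX-style telephony program. Every type and method name in it is a hypothetical stand-in defined inline, not the real ITX API.

```java
// Hypothetical sketch only: the real ITX package and class names are not
// given in the abstract, so the types below are inline stand-ins.
interface Call {
    void connect(String callee);
    void disconnect();
}

interface Provider {
    Call createCall();
}

public class HelloTelephony {
    public static void main(String[] args) {
        Provider provider = stubProvider();
        Call call = provider.createCall();   // obtain a call object from the provider
        call.connect("607-555-0100");        // dial the callee
        call.disconnect();                   // tear the call down
    }

    // A trivial stub so the sketch runs without real telephony hardware.
    static Provider stubProvider() {
        return () -> new Call() {
            public void connect(String callee) {
                System.out.println("Dialing " + callee);
            }
            public void disconnect() {
                System.out.println("Hanging up");
            }
        };
    }
}
```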

    Link Accessibility in Electronic Journal Articles

    Full text link
    D-Lib is an electronic journal that has been available since 1995. Many of the articles in D-Lib contain references that are accompanied by URLs. Of interest is how valid these URLs remain as time goes by. An analysis of all the references within D-Lib articles shows that 85% of the references accumulated over the journal's 5½ years remain accessible. However, plotting the percentage accessible against article date makes it clear that link rot increases with age.
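
    The tabulation behind such a plot is simple to sketch. The snippet below is not the study's actual harness; it merely probes each cited URL with an HTTP HEAD request and tallies the percentage still reachable per article year, using placeholder input data.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.*;

public class LinkRotCheck {
    public static void main(String[] args) {
        // Placeholder data; the real inputs would be (article year, cited URL)
        // pairs parsed out of the D-Lib reference sections.
        Map<Integer, List<String>> citedUrls = new TreeMap<>(Map.of(
                1996, List.of("http://www.dlib.org/"),
                2000, List.of("http://www.example.org/paper.html")));

        citedUrls.forEach((year, urls) -> {
            long alive = urls.stream().filter(LinkRotCheck::isAccessible).count();
            System.out.printf("%d: %.1f%% accessible%n", year, 100.0 * alive / urls.size());
        });
    }

    static boolean isAccessible(String url) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            c.setRequestMethod("HEAD");        // cheap liveness probe
            c.setConnectTimeout(5000);
            c.setReadTimeout(5000);
            return c.getResponseCode() < 400;  // 2xx/3xx counted as accessible
        } catch (Exception ex) {
            return false;                      // unreachable hosts count as link rot
        }
    }
}
```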

    Automatic extraction of reference linking information from online documents

    No full text
    The Web, with its explosive growth, is becoming an efficient resource for up-to-date information for the scientific researcher. Informal online archives are repositories for technical reports. Proceedings are more and more commonly published on the Web. The collection of online journals is growing. Indeed, a good number of online journals are “born digital”. Many researchers simply put their papers up on their own web sites. The large volume of online material makes it quite desirable to be able to access cited documents immediately from the citing paper. Implementing this direct access is called “reference linking”. Some reference linking services exist today. A number of commercial publishers, recognizing the significant value-added nature of reference linking, have banded together to form the CrossRef organization. The CrossRef publishers share their metadata, which enables them to interlink their journals. This metadata is not, however, available without a fee to organizations or individuals outside of CrossRef. The vast majority of online scholarly literature is accompanied by little or no metadata. Since it is desirable to link up this literature as well, the problem of automatically reference linking online scholarly literature, in the absence of metadata and author intervention, is very much worth considering. This paper explores the problem in detail and presents algorithms for extracting metadata from online texts and linking full-text documents together. The extent to which reference linking of the online literature can be done automatically is the main topic of this paper.
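
    As a taste of what such extraction involves, here is a small illustrative sketch. The heuristics (a bracketed-number reference pattern and a URL pattern) are assumptions for demonstration, not the algorithms the paper presents.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReferenceExtractor {
    // Assumed heuristics: entries look like "[3] Author. Title. Venue, year."
    // and may carry an inline URL.
    private static final Pattern ENTRY = Pattern.compile("^\\[(\\d+)\\]\\s+(.*)$");
    private static final Pattern URL = Pattern.compile("https?://\\S+");

    public static void main(String[] args) {
        String[] referenceSection = {
            "[1] J. Smith. Reference linking on the Web. D-Lib, 1999.",
            "[2] A. Jones. Surrogates for scholarly papers. http://example.org/p2"
        };
        for (String line : referenceSection) {
            Matcher entry = ENTRY.matcher(line);
            if (!entry.matches()) continue;             // not a reference entry
            System.out.println("ref #" + entry.group(1) + ": " + entry.group(2));
            Matcher url = URL.matcher(line);
            while (url.find())                          // candidate link targets
                System.out.println("  linked URL: " + url.group());
        }
    }
}
```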

    An Architecture for Reference Linking

    Full text link
    The Digital Library Research Group at Cornell has Reference Linking as one of its projects. Typical projects within the group take an object-oriented approach to handling digital information. To support reference linking, therefore, we designed a scheme whereby reference linking information is extracted from archives by surrogate objects and then presented to client applications or users by means of a well-defined API. This paper describes that architecture, the API, and how the API might be supported in the Dienst protocol.
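
    To make the shape of that design concrete, the sketch below models a surrogate as a small Java interface plus an in-memory stand-in; the names and method set are illustrative assumptions, not the API the paper defines.

```java
import java.util.List;

interface Surrogate {
    String urn();                  // identity of the paper the surrogate wraps
    List<String> references();     // works this paper cites
    String toXml();                // linking data handed to client applications
}

public class SurrogateDemo {
    // In-memory stand-in; a real surrogate would extract its data from an archive.
    static Surrogate of(String urn, List<String> refs) {
        return new Surrogate() {
            public String urn() { return urn; }
            public List<String> references() { return refs; }
            public String toXml() {
                StringBuilder sb = new StringBuilder("<paper urn=\"" + urn + "\">");
                for (String r : refs) sb.append("<ref>").append(r).append("</ref>");
                return sb.append("</paper>").toString();
            }
        };
    }

    public static void main(String[] args) {
        Surrogate s = of("urn:example:paper1", List.of("urn:example:paper2"));
        System.out.println(s.toXml());   // a client sees only the API, never the archive
    }
}
```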

    Using High Performance Systems to Build Collections for a Digital Library

    No full text
    Nothing is more distributed than the Web, with its content spread across thousands of servers. High-performance hardware and software are essential for effectively downloading, analyzing, and organizing this content. We describe our experience using a highly parallel Web crawling system (Mercator) to automatically construct collections of scientific resources for the National Science Digital Library.
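
    Mercator's internals are not described in this abstract, so the sketch below shows only the generic pattern a parallel crawler is built on: a shared frontier of URLs, a concurrent visited set, and a pool of workers draining the frontier.

```java
import java.util.Set;
import java.util.concurrent.*;

public class MiniCrawler {
    private final BlockingQueue<String> frontier = new LinkedBlockingQueue<>();
    private final Set<String> visited = ConcurrentHashMap.newKeySet();

    void crawl(String seed, int workers) throws InterruptedException {
        frontier.add(seed);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                String url;
                while ((url = frontier.poll()) != null) {
                    if (!visited.add(url)) continue;   // another worker got here first
                    // Fetching the page and enqueueing discovered links would go
                    // here; a real crawler adds politeness limits and scope filters
                    // to keep only the scientific resources wanted for the library.
                    System.out.println(Thread.currentThread().getName() + " -> " + url);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        new MiniCrawler().crawl("http://www.example.edu/", 4);
    }
}
```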

    Update on Tools for Parallel Programming at the CNSF

    Full text link
    Not many CNSF users undertake the arduous task of parallelizing their programs. Of course, training, education, and available hardware will have a lot to do with changing this situation, but we feel that tools also have an important role to play. In 1989, we wrote: "At the present time, the general lack of parallel programming tools is an inhibitor to parallel programming at the Cornell National Supercomputer Facility (CNSF). The Technology Integration Group (TIG) is evaluating a number of tools designed to make parallel programming easier, including tools for source analysis, program development and execution analysis. The more effective tools will be 'mainstreamed', i.e. turned over to users, integrated into workshops and consulted on by staff." This paper provides an update to that status report. The major section of this paper describes Tools for Parallel Programming, divided into 12 categories. Each category is summarized in much the same way as in 1989, and then new status and prospects are discussed. The paper concludes with some comments on hybrid program development systems and the workstation environment. The appendices contain a table of all the tools and a list of acronyms, names, and institutions.

    Initial Experiments in the Integration of ParaScope and Lambda

    Full text link
    This document describes the incorporation of the Lambda loop transformation toolkit into the ParaScope parallel programming environment. The goals were to extend the functionality of ParaScope, to determine the usefulness of the Lambda Toolkit in environments other than that of its original development, and to evaluate the quality of code generation before and after incorporation of Lambda-based analysis and transformation. We learned that ParaScope could be extended, but only by very brave people; we learned that the Lambda Toolkit could be used by other programming systems to good effect; and we compared two different proposed interfaces for the Lambda Toolkit.
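
    The Lambda Toolkit treats loop restructuring as linear transformations of the iteration space. The toy below checks that the classic interchange transformation, the unimodular matrix ((0,1),(1,0)), visits exactly the same iterations in a new order; the framing matches the general technique, but the code is only an illustration, not Lambda's implementation.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class InterchangeDemo {
    public static void main(String[] args) {
        int N = 3, M = 4;
        List<String> original = new ArrayList<>();
        List<String> interchanged = new ArrayList<>();

        for (int i = 0; i < N; i++)           // original nest: i outer, j inner
            for (int j = 0; j < M; j++)
                original.add(i + "," + j);

        for (int j = 0; j < M; j++)           // after interchange: j outer, i inner
            for (int i = 0; i < N; i++)
                interchanged.add(i + "," + j);

        // Same iteration set, different order; the transformation is legal only
        // when no data dependence is violated, which is what the analysis inside
        // an environment like ParaScope must verify.
        System.out.println(new HashSet<>(original).equals(new HashSet<>(interchanged)));
    }
}
```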

    Reference linking the Web’s scholarly papers

    No full text
    Along with the explosive growth of the Web has come a great increase in on-line scholarly literature. Thus the Web is becoming an efficient source of up-to-date information for the scientific researcher, and more and more researchers are turning to their computers to keep current on results in their field. Not only is Web retrieval usually faster than a walk to the library, but the information obtained from the Web is potentially more current than what appears in printed publications. The increasing proportion of on-line scholarly literature makes it possible to implement functionality desirable to all researchers: the ability to access cited documents immediately from the citing paper. Implementing this direct access is called “reference linking”. While many authors insert explicit links into their papers to support reference linking, it is by no means a universal practice. The approach taken by the Digital Library Research Group at Cornell employs value-added surrogates to enhance the reference-linking behavior of Web documents. Given the URL of an on-line paper, a surrogate object is constructed for that paper. The surrogate fetches the content of the document and parses it to automatically extract reference linking data. Applications can then use the surrogate to access this reference linking data, encoded in XML, via a well-defined API. We use this API to reference link D-Lib Magazine, an on-line journal of technical papers relating to digital library research. Currently we are (automatically) extracting reference linking information from the papers in this journal with 80% accuracy.
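
    The workflow reads naturally as a three-step pipeline, sketched below under stated assumptions: fetch the page at a given URL, extract reference data (crudely stubbed here by counting anchors), and hand the result back as XML. The class, the method names, and the sample URL are placeholders, not the group's actual code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class SurrogatePipeline {
    public static void main(String[] args) throws IOException {
        String url = "http://www.dlib.org/";           // placeholder input
        String html = fetch(url);                      // step 1: fetch the document
        int refs = countAnchors(html);                 // step 2: extract (stubbed)
        String xml = "<linking source=\"" + url + "\">"
                   + "<refcount>" + refs + "</refcount></linking>";
        System.out.println(xml);                       // step 3: expose as XML
    }

    static String fetch(String url) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream()))) {
            for (String line; (line = in.readLine()) != null; )
                sb.append(line).append('\n');
        }
        return sb.toString();
    }

    // Crude stand-in for real reference parsing: count <a href= anchors.
    static int countAnchors(String html) {
        int n = 0, i = 0;
        while ((i = html.indexOf("<a href=", i)) >= 0) { n++; i++; }
        return n;
    }
}
```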

    Optimization and Parallelization of a Commodity Trade Model for the SP1, Using Parallel Programming Tools

    No full text
    We compare two different approaches to the parallelization of Fortran programs. The first approach is to optimize the serial code so that it runs as fast as possible on a single processor, and then parallelize that. The second approach is to parallelize the program immediately, and then optimize the parallel version. In this paper a variety of parallel programming tools is used to obtain an optimal, parallel version of an economic policy modeling application for the IBM SP1. We apply a new technique called Data Access Normalization; we use an extended ParaScope as our parallel programming environment; we use FORGE 90 as our parallelizer; and we use KAP as our optimizer. We make a number of observations about the effectiveness of these tools. Both strategies obtain a working, parallel program, but use different tools to get there. On this occasion, both KAP and Data Access Normalization led to the same critical transformation of inverting four of the twelve loop nests in the original program.
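
    In miniature, the two routes can be pictured as a tuned serial loop versus a directly parallelized one that must produce the same answer. The toy below shows that shape in Java; none of the Fortran tooling (FORGE 90, KAP, ParaScope) is reproduced here.

```java
import java.util.stream.LongStream;

public class TwoRoutes {
    public static void main(String[] args) {
        long n = 1_000_000;

        long serial = 0;                         // route 1: optimize the serial loop first
        for (long i = 1; i <= n; i++) serial += i * i;

        long parallel = LongStream.rangeClosed(1, n)   // route 2: parallelize immediately
                                  .parallel()
                                  .map(i -> i * i)
                                  .sum();

        System.out.println(serial == parallel);  // both strategies must agree: true
    }
}
```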