
    The Computer as a Tool for Legal Research


    SOUND database of marine animal vocalizations : structure and operations

    The SOUND database system for marine animal vocalizations has been updated to include changes in the structure and operations that have evolved with use. These include more convenient operations, greater flexibility in analysis routines, and a revised database structure. The formats for data sorting and indexing, database structure, and analysis routines have developed into a convenient research tool. This report is a revision of the earlier operating manual for the SOUND databases (Watkins, Fristrup, and Daher 1991). The interactive databases that comprise the SOUND system provide comprehensive means for quantitative analyses and statistical comparisons of marine animal vocalizations. These SOUND databases encompass (1) descriptive text databases cataloging the WHOI collection of underwater sound recordings of marine animals, (2) sets of files of digital sound sequences, (3) text databases organizing the digital sound cuts, and (4) software for analysis, display, playback, and export of selected sound files. The text databases index and sort the information about the sounds, and the digital sound cut files are accessed directly from the text record. From the text database, the sound cut data may be analyzed on screen, listened to, and compared or exported as desired. The objective of this work has been the development of a basic set of tools for the study of marine animal sound. The text databases for cataloging the recordings provide convenient sorting and selection of sounds of interest. Then, as specific sequences are digitized from these recordings, they become part of another database system that manages these acoustic data. Once a digital sound is part of the organized database, several tools are available for interactive spectrographic display, sound playback, statistical feature extraction, and export to other application programs. Funding was provided by the Office of Naval Research through the Ocean Acoustics Program (code 11250A) under Contract No. N00014-88-K-0273 and No. N00014-91-J-1445, with supplemental support by ORINCON/DARPA and NRL (code 211).
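    The text-index idea described in this abstract — descriptive records that sort and select cuts, with each record pointing directly at a digital sound file — can be sketched as a tiny relational index. This is an illustrative miniature only; the table and column names are hypothetical and not the actual WHOI SOUND schema.

```python
import sqlite3

# Hypothetical miniature of a "sound cut" text index: each row carries the
# descriptive fields used for sorting and points at a digital sound file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cuts (
        id INTEGER PRIMARY KEY,
        species TEXT,        -- e.g. 'humpback whale'
        recorded TEXT,       -- recording date
        duration_s REAL,     -- cut length in seconds
        path TEXT            -- location of the digital sound file
    )""")
conn.executemany(
    "INSERT INTO cuts (species, recorded, duration_s, path) VALUES (?, ?, ?, ?)",
    [
        ("humpback whale", "1978-03-02", 12.5, "cuts/hb_0001.wav"),
        ("sperm whale",    "1981-07-19",  3.1, "cuts/sp_0042.wav"),
        ("humpback whale", "1982-11-05",  8.0, "cuts/hb_0090.wav"),
    ])

# Sort and select cuts of interest from the text index; the 'path' column
# then gives direct access to the digital data for analysis or playback.
rows = conn.execute(
    "SELECT path, duration_s FROM cuts "
    "WHERE species = ? ORDER BY recorded", ("humpback whale",)).fetchall()
print(rows)
```

    The same two-layer arrangement — a sortable text catalog in front of bulk binary data — is what lets the described system go from a text record straight to on-screen analysis or playback of the cut.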

    Is Semantic Query Optimization Worthwhile?

    The term "semantic query optimization" (SQO) denotes a methodology whereby queries against databases are optimized using semantic information about the database objects being queried. The result of semantically optimizing a query is another query which is syntactically different from the original, but semantically equivalent, and which may be answered more efficiently than the original. SQO is distinctly different from the work performed by the conventional SQL optimizer. The SQL optimizer generates a set of logically equivalent alternative execution paths based ultimately on the rules of relational algebra. However, only a small proportion of the readily available semantic information is utilised by current SQL optimizers. Researchers in SQO agree that SQO can be very effective. However, after some twenty years of research into SQO, there is still no commercial implementation. In this thesis we argue that we need to quantify the conditions for which SQO is worthwhile. We investigate what these conditions are and apply this knowledge to relational database management systems (RDBMS) with static schemas and infrequently updated data. Any semantic query optimizer requires the ability to reason using the semantic information available, in order to draw conclusions which ultimately facilitate the recasting of the original query into a form which can be answered more efficiently. This reasoning engine is currently not part of any commercial RDBMS implementation. We show how a practical semantic query optimizer may be built utilising readily available semantic information, much of it already captured by meta-data typically stored in commercial RDBMS. We develop cost models which predict an upper bound to the amount of optimization one can expect when queries are pre-processed by a semantic optimizer. We present a series of empirical results to confirm the effectiveness or otherwise of various types of SQO and demonstrate the circumstances under which SQO can be effective.
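    The rewriting the abstract describes can be illustrated with a toy example (this is not the thesis's reasoning engine): treat a CHECK constraint as an interval bound on a column, and simplify a conjunctive range predicate against it. The column name and bounds below are hypothetical.

```python
# Toy illustration of semantic query optimization: a stored integrity
# constraint, here CHECK (0 <= salary AND salary <= 100000), is used to
# rewrite a range predicate before the conventional optimizer plans it.
CONSTRAINT = {"salary": (0, 100_000)}

def optimize(column, lo, hi):
    """Rewrite 'lo <= column <= hi' using the stored CHECK constraint.

    Returns 'empty' if the predicate contradicts the constraint (the query
    can be answered without reading any data), 'redundant' if the constraint
    already implies it (the predicate can be dropped), and otherwise the
    tightened bounds handed on to the conventional optimizer.
    """
    c_lo, c_hi = CONSTRAINT[column]
    lo, hi = max(lo, c_lo), min(hi, c_hi)
    if lo > hi:
        return "empty"
    if (lo, hi) == (c_lo, c_hi):
        return "redundant"
    return (lo, hi)

print(optimize("salary", 200_000, 500_000))  # contradicts the constraint
print(optimize("salary", -1_000, 200_000))   # implied by the constraint
print(optimize("salary", 50_000, 200_000))   # bounds can be tightened
```

    The first two cases show why SQO is "distinctly different" from conventional optimization: the rewritten query avoids touching the data at all, something no relational-algebra rewrite of the original syntax can achieve.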

    Special Libraries, April 1917

    Volume 8, Issue 4
    https://scholarworks.sjsu.edu/sla_sl_1917/1003/thumbnail.jp

    Content Management System for Personally Driven Web Site

    The goal of the project was to develop a simple content management system that could be easily used both as a standalone web application and as a part of a more complex system. The application was supposed to consist of several key modules: commentary engine, search engine, navigation system and authentication system. To create a content management system, the implementations of the most common design patterns provided by the PHP Zend Framework were used. The application was developed and tested on Ubuntu Linux with the Apache web server and MySQL database server. The final version of the project represented a content management system with a navigation system based on item categorization. The application consisted of multiple classes: the ones provided by Zend Framework and the ones created to fulfill application-specific requirements. The system extensively used the benefits of object-oriented programming and the model-view-controller design pattern. The results of the project allowed making several conclusions. First, although it is faster to develop the application using the agile programming model, such a technique leads to less optimized code than if the design of the application were supported by a unified modeling language. Second, object-oriented programming provides the application with a high level of reusability and also makes it possible to employ only certain parts of the software.
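    The model-view-controller separation the abstract credits for the system's reusability can be sketched in a few lines. This is a generic illustration of the pattern, not the project's actual Zend Framework classes; all names below are hypothetical.

```python
# Minimal model-view-controller sketch: each role has one responsibility,
# so any part (e.g. the view) can be swapped or reused independently.

class PageModel:
    """Model: holds the content items, organized by category."""
    def __init__(self):
        self._items = {"news": ["Site launched"], "docs": ["User guide"]}

    def items_in(self, category):
        return self._items.get(category, [])

class PageView:
    """View: renders a category's items; knows nothing about storage."""
    def render(self, category, items):
        return "\n".join([f"== {category} =="] + [f"- {i}" for i in items])

class PageController:
    """Controller: maps a request to model data and a view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show(self, category):
        return self.view.render(category, self.model.items_in(category))

controller = PageController(PageModel(), PageView())
print(controller.show("news"))
```

    Because the controller only talks to the model and view through their small interfaces, "employing only certain parts of the software" — reusing the model behind a different view, say — requires no changes to the other classes.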

    Bibliographic Control of Serial Publications

    An important problem with serials is bibliographic control. What good does it do for libraries to select, acquire, record, catalog, and bind large holdings of serial publications if the contents of those serials remain a mystery to all except the few who have the opportunity to examine selected journals of continuing personal interest and have discovered some magic way of retaining the gist of the contents? Bibliographic control is the indexing and abstracting of the contents or guts of what is included in the serials. It is this control, provided by secondary publishing services, which this article will discuss. Just as there are problems with serials in general, there are some easily identifiable problems connected with their bibliographic control, including: volume, overlap, costs, elements and methods, and a few other miscellaneous considerations. Some history of bibliographic control will also put the current problems in a helpful perspective. Hereafter "bibliographic control" will be designated by the term "abstracting and indexing," one of these alone, or the shorter "a & i." (I do distinguish between abstracting and indexing and believe that they are not in order of importance and difficulty.) Although a & i do provide bibliographic control, this paper will not discuss cataloging, tables of contents, back-of-the-book indexes, year-end indexes, cumulative indexes, lists of advertisers, or bibliographies. If there is to be control, there must always be indexing. Abstracting is a short cut, a convenience, and perhaps a bibliographic luxury which may be now, or is fast becoming, too rich, in light of other factors to be discussed, for library blood and for the users of libraries, especially for the users of indexes who may not depend upon the library interface. Abstracting, though, provides a desirable control, and one which will continue to be advocated.