
    Searching the World-Wide-Web using nucleotide and peptide sequences

    *Background:* No approaches have yet been developed to allow instant searching of the World-Wide-Web by simply entering a string of sequence data. Though general search engines can be tuned to accept ‘processed’ queries, the burden of preparing such ‘search strings’ defeats the purpose of quickly locating highly relevant information. Unlike ‘sequence similarity’ searches that employ dedicated algorithms (like BLAST) to compare an input sequence against defined databases, a direct ‘sequence-based’ search simply locates quick, relevant information about a raw piece of nucleotide or peptide sequence. This approach is particularly valuable to biomedical researchers, who often want to enter a sequence and quickly locate any pertinent information before proceeding to a detailed sequence alignment.

*Results:* Here, we describe the theory and implementation of a web-based front-end for a search engine, such as Google, which accepts sequence fragments and interactively retrieves a collection of highly relevant links and documents in real time, e.g. flat files such as patent records, privately hosted sequence documents, and regular databases.

*Conclusions:* The importance of this simple yet highly relevant tool is that, with a little tweaking, it can be engineered to carry out searches on all kinds of hosted documents on the World-Wide-Web.

*Availability:* Instaseq is a free web-based service that can be accessed at the following hyperlink on the WWW:
http://instaseq.georgetown.edu
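A minimal sketch of the kind of front-end the abstract describes: guess whether a raw fragment is nucleotide or peptide, then wrap it as a verbatim search-engine query. The function names, alphabets, and quoting heuristic here are assumptions for illustration, not Instaseq's actual code.

```python
import re

# Alphabets used as a rough heuristic (an assumption, not Instaseq's rule):
# nucleotides are A/C/G/T/U; peptides use the 20 standard amino-acid letters.
NUCLEOTIDE = re.compile(r"^[ACGTU]+$", re.IGNORECASE)
PEPTIDE = re.compile(r"^[ACDEFGHIKLMNPQRSTVWY]+$", re.IGNORECASE)

def classify_fragment(seq: str) -> str:
    """Classify a raw string as 'nucleotide', 'peptide' or 'unknown'."""
    seq = seq.strip()
    if NUCLEOTIDE.match(seq):
        return "nucleotide"
    if PEPTIDE.match(seq):
        return "peptide"
    return "unknown"

def build_query(seq: str) -> str:
    """Quote the fragment so a general search engine matches it verbatim."""
    return '"' + seq.strip().upper() + '"'
```

Quoting the fragment is what spares the user from hand-crafting a 'processed' query: the engine is asked for the exact string rather than its tokenized parts.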

    The Freshness of Web search engines’ databases

    This study measures the frequency with which search engines update their indices. To this end, 38 websites that are updated on a daily basis were analysed over a time-span of six weeks. The analysed search engines were Google, Yahoo and MSN. We find that Google performs best overall, with the most pages updated on a daily basis, but only MSN is able to update all pages within a time-span of less than 20 days. Both other engines have outliers that are considerably older. In terms of indexing patterns, we find different approaches at the different engines: while MSN shows clear update patterns, Google shows some outliers, and the update process of the Yahoo index seems to be quite chaotic. The implications are that the quality of different search engine indices varies, and more than one engine should be used when searching for current content.
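The study's comparison can be pictured as reducing per-site observations to a worst-case index-update lag per engine. This is an illustrative sketch only, not the study's actual method, and the numbers below are made up to mirror the reported findings.

```python
# Illustrative sketch: for each engine, take per-site update lags (days
# between a site changing and the index reflecting it) and report the
# worst case. The sample figures are invented, not the study's data.

def max_update_lag(lags_by_engine: dict[str, list[int]]) -> dict[str, int]:
    """Map each engine to the largest index-update lag observed."""
    return {engine: max(lags) for engine, lags in lags_by_engine.items()}

observed = {
    "Google": [1, 1, 2, 1, 54],    # mostly daily, but with old outliers
    "MSN":    [5, 9, 14, 19, 12],  # all pages within 20 days
    "Yahoo":  [2, 40, 7, 61, 3],   # irregular update pattern
}
```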

    The Hidden Web, XML and Semantic Web: A Scientific Data Management Perspective

    The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet that allows customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist in the management, harvesting, and organization of data on the Web. Along the way, we also illustrate the current research avenues in these domains. We believe that this effort will help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology. Comment: EDBT - Tutorial (2011)

    Impact of Digital Technology on Library Resource Sharing: Revisiting LABELNET in the Digital Age

    The digital environment has facilitated resource sharing by breaking the time and distance barriers to efficient document delivery. However, for librarians, this phenomenon has brought more challenging technical and technological issues, demanding new knowledge and skills to learn and new standards to develop. The overwhelming speed and growing volume of digital information can no longer be acquired and managed by single libraries. Resource sharing, which used to be a side business of the librarianship trade, is now becoming the flagship operation in library projects.

    International Legal Collections at U.S. Academic Law School Libraries

    This study examines how law librarians are participating in the process of creating new fields of international legal research and training. It investigates the current state of international legal collections at twelve public and private U.S. academic law school libraries, illuminating in the process some of the significant shifts that characterize the nature of professional librarianship and information science in the twenty-first century. Included in the study is a discussion of the reference works, research guides, and databases that make up these international legal collections. This is followed by a brief assessment of the trends and challenges faced by librarians who work in the field of professional legal education and scholarship.

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions involved in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfil its requirements. However, every big data application has variable data characteristics, and thus the corresponding data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph and wide-column. Moreover, a feature analysis of 80 NoSQL solutions has been provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
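The four data models the paper compares can be illustrated with one record expressed four ways. The sketch below is not taken from the paper; the field names and layouts are simplified assumptions.

```python
# The same user record under the four NoSQL data models, for illustration.

# Document-oriented: one self-contained, possibly nested document per entity.
document = {"_id": "u42", "name": "Ada",
            "posts": [{"id": "p1", "title": "Hello"}]}

# Key-value: an opaque value under a single key; the store sees no structure.
key_value = {"user:u42": '{"name": "Ada"}'}

# Graph: entities as nodes, relationships as explicit, typed edges.
graph_edges = [("user:u42", "AUTHORED", "post:p1")]

# Wide-column: a row key pointing at column families of named columns.
wide_column = {"u42": {"profile": {"name": "Ada"},
                       "activity": {"post:p1": "Hello"}}}
```

The choice among these hinges on access patterns: nested reads favour documents, point lookups favour key-value, traversals favour graphs, and sparse columnar scans favour wide-column stores.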

    An open reply to "What is going on at the Library of Congress?" by Thomas Mann

    This is an open response to a report by Thomas Mann at the Library of Congress concerning changes in cataloging. The author contends that, although the current changes at the Library of Congress are suspect, change is imminent, and experienced catalogers must offer positive suggestions for change or they will be ignored by management.

    PrisCrawler: A Relevance Based Crawler for Automated Data Classification from Bulletin Board

    Nowadays, people find it difficult to locate information simply and quickly on bulletin boards. To solve this problem, the concept of a bulletin-board search engine has been proposed. This paper describes the priscrawler system, a subsystem of the bulletin-board search engine, which can automatically crawl the classified attachments of a bulletin board and add relevance to them. Priscrawler utilizes the Attachrank algorithm to generate relevance between webpages and attachments and then turns the bulletin board into a clearly classified and associated database, greatly simplifying the search for attachments. Moreover, it can effectively reduce the complexity of the pretreatment and retrieval subsystems and improve search precision. We provide experimental results to demonstrate the efficacy of priscrawler. Comment: published in GCIS of IEEE WRI '0
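The abstract does not specify how Attachrank computes relevance, so the following is only a hedged sketch of the general idea: score an attachment by the distinct board pages that link to it, so widely referenced attachments rank higher.

```python
from collections import defaultdict

# Hedged sketch only; the actual Attachrank algorithm is not specified
# in the abstract. Relevance here is simply the number of distinct
# bulletin-board pages linking to each attachment.

def attachment_scores(links: list[tuple[str, str]]) -> dict[str, int]:
    """links: (page_url, attachment_url) pairs collected by a crawler.
    Returns each attachment's count of distinct linking pages."""
    linking_pages: defaultdict[str, set] = defaultdict(set)
    for page, attachment in links:
        linking_pages[attachment].add(page)
    return {att: len(pages) for att, pages in linking_pages.items()}
```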
