
    RSS Management: An RSS Reader to Manage RSS Feeds That Efficiently and Effectively Pulls and Filters Feeds With Minimal Bandwidth Consumption

    In the early 2000s, RSS (Really Simple Syndication) was launched into cyberspace and rapidly gained fame as the underlying technology that fueled millions of web logs (blogs). Soon RSS feeds appeared for news, multimedia podcasting, and many other types of information on the Internet. RSS introduced a new way to syndicate information that allowed anyone interested to subscribe to published content and pull the information to an aggregator (an RSS reader application) at their discretion. RSS made it simple for people to keep up with online content without having to continuously check websites for new material. The new technology quickly revealed its shortcomings, though. Aggregators were set to check a feed periodically for new content; if new content existed, the whole feed might be downloaded again, and content filtering was either absent entirely or performed only after the file had already been downloaded. Users who may have only occasionally checked a site for new content were now equipped with the ability to subscribe to content all over the web and have an aggregator poll the sites periodically. However, this presented a serious scalability problem in terms of bandwidth utilization. The same users who had been checking a site once a day for new content were now checking sites with the aggregator on a fixed interval, such as every hour. Bandwidth utilization increased dramatically where RSS was involved. The aim of this thesis is to design a better RSS aggregator that effectively and efficiently polls, downloads, and filters RSS content while using a minimal amount of bandwidth and resources. To meet these needs, an RSS aggregator named FeedWorks has been developed that allows users to create subscriptions to content and set an interval to poll each subscription for newly published material.
The aggregator uses specific HTTP (hypertext transfer protocol) header information to check for new content before it downloads a file; if new content is found, it downloads the file but filters it based on user-created filter criteria before writing the information to disk. Filtering and searching algorithms have been researched to tune performance and limit the strain on the processor. Caching mechanisms have also been used to enhance the performance of the application. The aggregator contains content management functionality that allows users to create subscriptions and subscription groups and to apply filters to a specific subscription or groups of subscriptions. This thesis compares the aggregator with other currently available products and services. It provides detailed information regarding the end user's interface and the content management functionality it provides. Descriptive information is also presented that explains the content filtering and feed polling functionality and their respective algorithms.
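The header-based check described above is conventionally done with HTTP conditional requests (If-None-Match / If-Modified-Since), where a 304 Not Modified response means the feed can be skipped without downloading it. A minimal sketch of that idea follows; the names `FeedState`, `conditional_headers`, and `should_download` are illustrative, not taken from the thesis.

```python
# Sketch of conditional feed polling: reuse the ETag and Last-Modified
# values from the previous fetch so the server can answer 304 Not Modified
# instead of resending the whole feed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedState:
    """Validators remembered from the last successful download of a feed."""
    etag: Optional[str] = None
    last_modified: Optional[str] = None

def conditional_headers(state: FeedState) -> dict:
    """Build request headers that make the GET conditional."""
    headers = {}
    if state.etag:
        headers["If-None-Match"] = state.etag
    if state.last_modified:
        headers["If-Modified-Since"] = state.last_modified
    return headers

def should_download(status_code: int) -> bool:
    """304 means the cached copy is still current; skip the body entirely."""
    return status_code != 304
```

On each polling interval the aggregator would send `conditional_headers(state)` with its GET; only when `should_download` is true does it fetch, filter, and persist the feed, then update the stored validators from the response headers.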

    BioMart – biological queries made easy

    Background: Biologists need to perform complex queries, often across a variety of databases. Typically, each data resource provides an advanced query interface, each of which must be learnt by the biologist before they can begin to query it. Frequently, more than one data source is required, and for high-throughput analysis, cutting and pasting results between websites is very time consuming. Therefore, many groups rely on local bioinformatics support to process queries by accessing the resources' programmatic interfaces, if they exist. This is not an efficient solution in terms of cost and time. Instead, it would be better if the biologist only had to learn one generic interface. BioMart provides such a solution. Results: BioMart enables scientists to perform advanced querying of biological data sources through a single web interface. The power of the system comes from integrated querying of data sources regardless of their geographical locations. Once these queries have been defined, they may be automated with its "scripting at the click of a button" functionality. BioMart's capabilities are extended by integration with several widely used software packages such as BioConductor, DAS, Galaxy, Cytoscape, and Taverna. In this paper, we describe all aspects of BioMart from a user's perspective and demonstrate how it can be used to solve real biological use cases such as SNP selection for candidate gene screening or annotation of microarray results. Conclusion: BioMart is an easy-to-use, generic, and scalable system and has therefore become an integral part of large data resources including Ensembl, UniProt, HapMap, Wormbase, Gramene, Dictybase, PRIDE, MSD and Reactome. BioMart is freely accessible at http://www.biomart.org.
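The "scripting at the click of a button" functionality mentioned above exports queries as BioMart's XML query documents, which can then be submitted programmatically. A minimal sketch of building such a document follows; the dataset, filter, and attribute names are illustrative examples (an Ensembl-style gene dataset), not values prescribed by the paper.

```python
# Sketch: serialize a single-dataset BioMart Query element as XML.
# The dataset/filter/attribute names passed in by the caller are assumptions
# for illustration; real names come from the mart's configuration.
import xml.etree.ElementTree as ET

def build_biomart_query(dataset: str, filters: dict, attributes: list) -> str:
    """Return a BioMart XML query string for one dataset."""
    query = ET.Element("Query", {
        "virtualSchemaName": "default",  # the default mart schema
        "formatter": "TSV",              # tab-separated output
        "header": "0",
        "uniqueRows": "0",
    })
    ds = ET.SubElement(query, "Dataset", {"name": dataset, "interface": "default"})
    for name, value in filters.items():
        ET.SubElement(ds, "Filter", {"name": name, "value": value})
    for attr in attributes:
        ET.SubElement(ds, "Attribute", {"name": attr})
    return ET.tostring(query, encoding="unicode")

example = build_biomart_query(
    "hsapiens_gene_ensembl",            # hypothetical dataset name
    {"chromosome_name": "1"},           # restrict to chromosome 1
    ["ensembl_gene_id", "external_gene_name"],
)
```

A document like this would typically be posted to a mart service endpoint, which streams back the tab-separated result rows.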

    BioIMAX : a Web2.0 approach to visual data mining in bioimage data

    Loyek C. BioIMAX: a Web2.0 approach to visual data mining in bioimage data. Bielefeld: Universität Bielefeld; 2012.

    SDL Trados Studio 2014
