20 research outputs found

    Pathguide: a Pathway Resource List

    Pathguide: the Pathway Resource List is a meta-database that provides an overview of more than 190 web-accessible biological pathway and network databases. These include databases on metabolic pathways, signaling pathways, transcription factor targets, gene regulatory networks, genetic interactions, protein–compound interactions, and protein–protein interactions. The listed databases are maintained by diverse groups in different locations, and the information in them is derived either from the scientific literature or from systematic experiments. Pathguide is useful as a starting point for biological pathway analysis and for content aggregation in integrated biological information systems.

    SNP-RFLPing 2: an updated and integrated PCR-RFLP tool for SNP genotyping

    Background: The PCR-restriction fragment length polymorphism (RFLP) assay is a cost-effective method for SNP genotyping and mutation detection, but manual mining for restriction enzyme sites is challenging and cumbersome. Three years after we constructed SNP-RFLPing, a freely accessible database and analysis tool for restriction enzyme mining of SNPs, significant improvements over the 2006 version have been made and incorporated into the latest version, SNP-RFLPing 2.
    Results: The primary aim of SNP-RFLPing 2 is to provide comprehensive PCR-RFLP information with multiple functionalities for SNPs, such as SNP retrieval for multiple species, different polymorphism types (bi-allelic, tri-allelic, tetra-allelic, or indels), gene-centric searching, HapMap tagSNPs, gene ontology-based searching, miRNAs, and SNP500Cancer. The RFLP restriction enzymes and the corresponding PCR primers for the natural and mutagenic types of each SNP are analyzed simultaneously. Prices for all RFLP restriction enzymes are also provided to aid selection. Furthermore, the updating problems previously encountered by most SNP-related databases are resolved by an online retrieval system.
    Conclusions: The user interfaces for functional SNP analyses have been substantially improved and integrated. SNP-RFLPing 2 offers a new and user-friendly interface for RFLP genotyping that can be used in association studies and is freely available at http://bio.kuas.edu.tw/snp-rflping2.
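    The core RFLP-mining step that the abstract describes (checking whether a SNP's two alleles create or destroy a restriction enzyme recognition site in the surrounding sequence) can be illustrated with a minimal Python sketch. This is not SNP-RFLPing 2's own code; the enzyme list and sequences below are invented examples.

```python
# Illustrative sketch of RFLP mining for a bi-allelic SNP: an enzyme is
# informative when its recognition site occurs a different number of times
# in the two allelic sequences. Enzymes and sequences are examples only,
# not data or code from SNP-RFLPing 2.
RECOGNITION_SITES = {
    "EcoRI": "GAATTC",
    "BamHI": "GGATCC",
    "TaqI":  "TCGA",
}

def informative_enzymes(flank5, alleles, flank3):
    """Return enzymes whose site count differs between the two alleles."""
    results = []
    for enzyme, site in RECOGNITION_SITES.items():
        counts = [(flank5 + allele + flank3).count(site) for allele in alleles]
        if counts[0] != counts[1]:
            results.append((enzyme, dict(zip(alleles, counts))))
    return results

if __name__ == "__main__":
    # Hypothetical C/T SNP where the C allele creates a TaqI site.
    print(informative_enzymes("ACGTT", ("C", "T"), "GATTC"))
```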

    Web Citation Availability: A Follow-up Study

    The researchers report on a study examining the persistence of Web-based content. In 2002, a sample of 500 citations to Internet resources from articles published in library and information science journals in 1999 and 2000 was analyzed by citation characteristics and searched to determine cited content persistence, availability on the Web, and availability in the Internet Archive. Statistical analyses were conducted to identify citation characteristics associated with availability. The sample URLs were searched again between August 2005 and June 2006 to determine persistence, availability on the Web, and availability in the Internet Archive. As in the original study, the researchers cross-tabulated the results with URL characteristics and reviewed and analyzed journal instructions to authors on citing content on the Web. Findings included a decrease of 17.4 percent in persistence and 8.2 percent in availability on the Web. When availability in the Internet Archive was factored in, the overall availability of Web content in the sample dropped from 89.2 percent to 80.6 percent. The statistical analysis confirmed the association between the likelihood that cited content will be found by future researchers and the citation characteristics of content, domain, page type, and directory depth. The researchers also found an increase in the number of journals that provide instructions to authors on citing content on the Web.
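    The availability check the study describes, resolving each cited URL on the live Web and then looking it up in the Internet Archive, can be sketched roughly as below. The Wayback Machine availability endpoint is public; the use of HEAD requests, the timeout, and the example URL are assumptions, not the study's actual protocol.

```python
# Minimal sketch of the two-step availability check described above:
# (1) is the cited URL still reachable on the live Web, and (2) if not,
# does the Internet Archive hold a snapshot? Not the study's actual script.
import requests

WAYBACK_API = "https://archive.org/wayback/available"

def check_citation(url, timeout=10):
    status = {"url": url, "live": False, "archived": False}
    try:
        r = requests.head(url, allow_redirects=True, timeout=timeout)
        status["live"] = r.status_code < 400
    except requests.RequestException:
        pass
    if not status["live"]:
        r = requests.get(WAYBACK_API, params={"url": url}, timeout=timeout)
        snapshot = r.json().get("archived_snapshots", {}).get("closest")
        status["archived"] = bool(snapshot and snapshot.get("available"))
    return status

if __name__ == "__main__":
    print(check_citation("http://example.com/"))
```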

    Twitter Sentiment Analysis

    Social media continues to gain presence and importance in society. Public and private opinions about a wide variety of subjects are expressed and spread continually via numerous social media platforms. Twitter is one of the platforms gaining popularity. Twitter offers organizations a fast and effective way to analyze customers' perspectives on factors critical to success in the marketplace. Developing a program for sentiment analysis is one approach to computationally measuring customers' perceptions. This paper reports on the design of a sentiment analysis program that extracts a large volume of tweets. Prototyping is used in this development. The results classify customers' perspectives expressed in tweets as positive or negative and present them in a pie chart and an HTML page. The program was planned to be developed as a web application; however, because Django requires a Linux or LAMP server, this is left for future work.
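    The abstract does not name the classification method, so the sketch below uses NLTK's VADER analyzer as a stand-in to label tweets positive or negative and renders the split as a pie chart with matplotlib; the example tweets and output file name are invented.

```python
# Minimal sentiment-classification sketch: label tweets positive/negative
# with NLTK's VADER analyzer and summarize the split as a pie chart.
# Stand-in only; the paper's own classifier is not specified.
import nltk
import matplotlib.pyplot as plt
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

def classify(tweets):
    sia = SentimentIntensityAnalyzer()
    return ["positive" if sia.polarity_scores(t)["compound"] >= 0 else "negative"
            for t in tweets]

if __name__ == "__main__":
    tweets = ["Great service, fast delivery!", "Worst support I have ever had."]
    labels = classify(tweets)
    counts = [labels.count("positive"), labels.count("negative")]
    plt.pie(counts, labels=["positive", "negative"], autopct="%1.0f%%")
    plt.title("Tweet sentiment")
    plt.savefig("sentiment_pie.png")  # the paper presents results as a pie chart
```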

    Web Citation Availability: Analysis and Implications for Scholarship

    Five hundred citations to Internet resources from articles published in library and information science journals in 1999 and 2000 were profiled and searched on the Web. The majority contained partial bibliographic information and no date viewed. Most URLs pointed to content pages with edu or org domains and did not include a tilde. More than half (56.4 percent) were permanent, 81.4 percent were available on the Web, and searching the Internet Archive increased the availability rate to 89.2 percent. Content, domain, and directory depth were associated with availability. Few of the journals provided instructions on citing digital resources. Eight suggestions for improving citation conventions in scholarly communication are presented.

    Availability and Preservation of Scholarly Digital Resources

    The dynamic, decentralized World Wide Web has become an essential part of scientific research and communication, representing a relatively new medium for the conveyance of scientific thought and discovery. Researchers create thousands of websites every year to share software, data, and services. Unlike for books and journals, however, the preservation systems for these resources are not yet mature. This carries implications that go to the core of science: the ability to examine another's sources in order to understand and reproduce their work. These valuable resources have been documented as disappearing over time in several subject areas. This dissertation examines the problem by performing a cross-disciplinary investigation, testing the effectiveness of existing remedies and introducing new ones. As part of the investigation, 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index were accessed. The median lifespan of these web pages was found to be 9.3 years, with 62% of them being archived. Survival analysis and logistic regression identified significant predictors of URL lifespan, including the year a URL was published, the number of times it was cited, its depth, and its domain. Statistical analysis revealed biases in current static web-page solutions.
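    The modelling described, a survival analysis of URL lifespan together with a logistic regression of availability on URL characteristics, might look roughly like the following sketch. The column names and the choice of lifelines and scikit-learn are assumptions for illustration, not the dissertation's actual pipeline.

```python
# Rough illustration of the two analyses described: a Kaplan-Meier estimate
# of URL survival and a logistic regression of availability on URL features.
# Column names and libraries (pandas, lifelines, scikit-learn) are assumed.
import pandas as pd
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

def analyse(df: pd.DataFrame):
    # Hypothetical columns: lifespan_years, still_alive (bool),
    # pub_year, times_cited, path_depth, is_edu_domain, available (0/1).
    km = KaplanMeierFitter()
    km.fit(df["lifespan_years"], event_observed=~df["still_alive"])
    median_lifespan = km.median_survival_time_

    features = df[["pub_year", "times_cited", "path_depth", "is_edu_domain"]]
    model = LogisticRegression(max_iter=1000).fit(features, df["available"])
    return median_lifespan, dict(zip(features.columns, model.coef_[0]))
```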

    IVOA Recommendation: Table Access Protocol Version 1.0

    The Table Access Protocol (TAP) defines a service protocol for accessing general table data, including astronomical catalogs as well as general database tables. Access is provided for both database and table metadata as well as for actual table data. This version of the protocol includes support for multiple query languages, including queries specified using the Astronomical Data Query Language (ADQL [1]) and the Parameterised Query Language (PQL, under development), within an integrated interface. It also includes support for both synchronous and asynchronous queries. Special support is provided for spatially indexed queries using the spatial extensions in ADQL. A multi-position query capability permits queries against an arbitrarily large list of astronomical targets, providing a simple spatial cross-matching capability. More sophisticated distributed cross-matching capabilities are possible by orchestrating a distributed query across multiple TAP services.
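    A synchronous TAP query is an HTTP request to the service's /sync resource carrying REQUEST=doQuery, LANG=ADQL, and the query text. The sketch below issues such a request with Python's requests library; the base URL and the table and column names are placeholders.

```python
# Minimal synchronous TAP query as the protocol describes: an HTTP GET to the
# service's /sync resource with REQUEST=doQuery, LANG=ADQL and QUERY.
# The base URL and table/column names are placeholders, not a real service.
import requests

TAP_BASE = "https://example.org/tap"  # placeholder TAP service URL

def tap_sync_query(adql, fmt="votable"):
    params = {
        "REQUEST": "doQuery",
        "LANG": "ADQL",
        "QUERY": adql,
        "FORMAT": fmt,
    }
    r = requests.get(TAP_BASE + "/sync", params=params, timeout=60)
    r.raise_for_status()
    return r.text  # VOTable document containing the result rows

if __name__ == "__main__":
    # Cone-search-style ADQL query against a hypothetical catalog table,
    # using the spatial extensions mentioned in the abstract.
    print(tap_sync_query(
        "SELECT TOP 10 * FROM my_catalog "
        "WHERE 1=CONTAINS(POINT('ICRS', ra, dec), "
        "CIRCLE('ICRS', 180.0, 2.0, 0.1))"))
```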

    Benchmarking domestic gas and electricity consumption to aid local authority carbon reduction policy

    As part of an effort to be a world leader in international efforts to reduce atmospheric carbon dioxide levels, the UK Government has set itself the ambitious target of reducing carbon dioxide emissions by 80% relative to 1990 levels by 2050. To meet this target, there is a strong emphasis on reducing carbon emissions from the domestic sector by lowering energy consumption in UK households, through improving the energy efficiency of the housing stock and the behaviours of the occupants. The Department of Energy and Climate Change has indicated that Local Authorities in England are expected to work in partnership with businesses and community organizations to facilitate delivery and to act as promoters of domestic energy efficiency policies. Consultation with 11 Local Authorities across England confirmed that they lack a reliable mechanism for detecting the areas within their administrative boundaries that are most in need of intervention to improve the energy efficiency of the housing stock. For the year 2008, the regression models demonstrate that geographical variations in house size, median household income, and air temperature account for 64% of the variation in English domestic gas consumption, and that variations in house size, median household income, and the proportion of households connected to the national gas grid account for 73% of the variation in domestic electricity consumption. The predicted values from these regression models serve as benchmarks of domestic gas and electricity consumption in England, having accounted for household income, house size, house type, tenure, and climatic differences, and could be used to identify areas within Local Authorities with higher than expected energy consumption for energy efficiency interventions. These results contribute to the wider academic debate over how best to achieve the overall aim of household CO2 reductions by moving beyond a purely technical or behavioural approach to reducing domestic energy consumption.
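    The benchmarking logic described above (fit a regression of consumption on household and climate variables, take the fitted values as benchmarks, and flag areas whose observed consumption exceeds them) can be sketched as follows. The variable names and the OLS specification are illustrative assumptions, not the authors' model.

```python
# Sketch of the benchmarking approach described above: regress area-level gas
# consumption on explanatory variables, use the fitted values as benchmarks,
# and flag areas whose observed consumption is well above the benchmark.
# Column names and the OLS formula are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

def flag_high_consumers(df: pd.DataFrame, threshold=1.1):
    # Hypothetical columns: gas_kwh, floor_area, median_income, mean_temp.
    model = smf.ols("gas_kwh ~ floor_area + median_income + mean_temp",
                    data=df).fit()
    df = df.assign(benchmark=model.fittedvalues,
                   ratio=df["gas_kwh"] / model.fittedvalues)
    # Areas consuming more than 10% above their benchmark are candidates
    # for energy-efficiency intervention.
    return df[df["ratio"] > threshold]
```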

    An approach to decentralizing search, using stigmergic hyperlinks

    A stigmergic hyperlink, or “stigh”, is an object that looks and behaves like a regular HTML hyperlink but runs on the server side. A system of stighs displays interesting emergent behaviours of some complexity, yet a stigh alone is very simple: it has a life attribute, which is reinforced only when users click it, and methods that provide meta-information about its destination. We argue that stigmergic hyperlinks could support a more decentralized approach to the Web search problem, particularly for addressing the “Deep Web”, which we take to be all of the WWW that is uncharted by search engines. We discuss vertical and horizontal solutions for the “Deep Web” and present a specialized system that makes searchable the publications hidden at the biggest Portuguese digital magazines site. The index that the system builds feeds the search-related methods of a stigmergic hyperlink pointing to that destination. Our contributions are to make the case for a broader “Deep Web” concept that goes beyond databases hiding behind HTML forms; to describe an approach, based on stigmergic hyperlinks and a supporting business model, that could decentralize Web search; and to exemplify one specialized system that enables searching the biggest Portuguese digital magazines website.
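    The paper's description of a stigh, a server-side object whose life attribute is reinforced by clicks and which exposes meta-information about its destination, suggests something like the toy class below. The reinforcement and decay constants and the metadata fields are invented for illustration.

```python
# Toy model of a stigmergic hyperlink ("stigh") as described above: a
# server-side object whose life is reinforced by user clicks and decays over
# time, with a method exposing meta-information about its destination.
# Reinforcement/decay constants and metadata fields are invented.
class StigmergicHyperlink:
    def __init__(self, url, life=1.0):
        self.url = url
        self.life = life

    def click(self, reinforcement=0.2):
        """A user click reinforces the link's life."""
        self.life += reinforcement

    def decay(self, rate=0.05):
        """Periodic decay; the link effectively dies when life reaches zero."""
        self.life = max(0.0, self.life - rate)

    @property
    def alive(self):
        return self.life > 0.0

    def destination_metadata(self):
        """Placeholder for the meta-information methods the paper mentions."""
        return {"url": self.url, "life": self.life}
```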

    E-business technology

    In today's dynamic environment, where conditions change rapidly and competition in the market is increasingly important, information of varying quality is far more abundant and unpredictable. Moreover, today's environment is overwhelmed with information, and it is becoming increasingly difficult to reach the information that can actually improve a company's performance. A competitive analysis of e-shops in the food industry was carried out. There are many food websites in the online world, but only a small percentage of them succeed. The paper focuses on e-commerce, with an emphasis on building a quality e-shop, and also aims to link e-commerce to e-commerce quality. The main focus of the work is research of literary and scientific sources. A competitive analysis of a grocery e-shop was carried out. This information will be used further in my doctoral thesis, which deals with building a quality e-shop in the food industry. Using the research results, it will be possible to make recommendations to other Internet businesses, although the specific methodological model of website tool implementation will be suited to e-shops. Companies underestimate the importance of web quality in e-commerce, which can lead to a drop in sales or a loss of competitiveness.