
    Glimpses of Semantic Web Technologies and Related Case Studies

    The Semantic Web is a new stage in the evolution of the rapidly developing World Wide Web; it refers to extracting knowledge from large amounts of data. The purpose of this paper is to give first-hand information about, and a description of, Semantic Web technology. Although several research works have been carried out on Semantic Web technology, the Semantic Web remains largely unexplored. Semantic Web innovation is rapidly changing traditional methods of searching data and how search engines work. A few prominent Semantic Web case studies are presented. One popular application of XML/RDF is the Really Simple Syndication (RSS) feed, which is discussed in detail.
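    An RSS feed is a plain XML document, so its items can be read with any XML parser. As a rough illustration (the two-item feed below is hypothetical, not from the paper), this sketch extracts item titles using Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed used purely for illustration.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
# Collect the title text of every <item> in the channel.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # -> ['First post', 'Second post']
```

    Feed readers do essentially this over HTTP, polling the feed URL and diffing the item list against what was seen before.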

    Agent communication network-a mobile agent computation model for Internet applications

    We propose a graph-based model, with a simulation, for mobile agents to evolve over the Internet. Based on the concepts of the Food Web (or Food Chain), one of the natural laws that we may draw on besides neural networks and genetic algorithms, we define an agent niche overlap graph and agent evolution states for the distributed computation of mobile agent evolution. The proposed computation model can be used in distributed Internet applications such as commerce programs, intelligent Web search engines, and others. (International conference, 6-8 July 1999, Red Sea, Egypt)

    On a Java based implementation of ontology evolution processes based on Natural Language Processing

    An architecture was described by Burzagli et al. (2010) that can serve as a basis for the design of a Collective Knowledge Management System. The system can be used to exploit the strengths of collective intelligence and bridge the gap between two expressions of web intelligence, i.e., the Semantic Web and Web 2.0. In the architecture, a key component is the Ontology Evolution Manager, made up of an Annotation Engine and a Feed Adapter, which is able to interpret textual contributions that represent human intelligence (such as posts on social networking tools) using automatic learning techniques, and to insert the knowledge contained therein into a structure described by an ontology. This opens up interesting scenarios for the collective knowledge management system, which could be used to provide up-to-date information describing a given domain of interest, to augment it automatically (thus coping with information evolution), and to make the information available for browsing and searching by an ontology-driven engine. This report describes a Java-based implementation of the Ontology Evolution Manager within the above-outlined architecture.

    Z39.50 broadcast searching and Z-server response times: perspectives from CC-interop

    This paper begins by briefly outlining the evolution of Z39.50 and the current trends, including the work of the JISC CC-interop project. The research crux of the paper focuses on an investigation conducted with respect to testing Z39.50 server (Z-server) response times in a broadcast (parallel) searching environment. Customised software was configured to broadcast a search to all test Z-servers once an hour, for eleven weeks. The results were logged for analysis. Most Z-servers responded rapidly. 'Network congestion' and local OPAC usage were not found to significantly influence Z-server performance. Response time issues encountered by implementers may be the result of non-response by the Z-server and how Z-client software deals with this. The influence of 'quick and dirty' Z39.50 implementations is also identified as a potential cause of slow broadcast searching. The paper indicates various areas for further research, including setting shorter time-outs and greater end-user behavioural research to ascertain user requirements in this area. The influence of more complex searches, such as Boolean queries, on response times, and the effect of suboptimal Z39.50 implementations, are also emphasised for further study. This paper informs the LIS research community and has practical implications for those establishing Z39.50-based distributed systems, as well as those in the Web Services community. The paper challenges the popular LIS opinion that Z39.50 is inherently sluggish and thus unsuitable for the demands of the modern user.
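    The broadcast-with-timeout pattern the paper studies is independent of the Z39.50 wire protocol itself. A minimal sketch, with hypothetical stub servers standing in for real Z-servers (no actual Z39.50 library is used), shows how a short client-side time-out drops a slow or non-responding server while keeping fast responses:

```python
import concurrent.futures
import time

# Hypothetical Z-server stand-ins: each returns (name, hit count)
# after a simulated network delay.
def make_server(name, delay, hits):
    def search(query):
        time.sleep(delay)
        return (name, hits)
    return search

servers = [make_server("fast", 0.01, 12), make_server("slow", 0.5, 7)]

def broadcast(query, timeout):
    """Send the query to all servers in parallel; drop non-responders."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(s, query) for s in servers]
        done, not_done = concurrent.futures.wait(futures, timeout=timeout)
        for f in done:
            name, hits = f.result()
            results[name] = hits
        for f in not_done:
            f.cancel()  # treat as a timed-out Z-server
    return results

print(broadcast("evolution", timeout=0.1))  # only the fast server answers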

    Unique features of plasmids among different Citrobacter species

    _Citrobacter_ plasmids are thought to reflect the host's genetic associations within the living bacterial cell. Plasmids impart various beneficial characteristics to the host, helping it retain traits suitable for adaptation as well as evolution. This study aims at understanding the role of prophages in influencing host functional characteristics, whether through horizontal gene transfer or as whole plasmids. The _Citrobacter_ plasmids can be understood by analyzing the many hypothetical protein sequences within their genomes. Our study included 82 hypothetical proteins from 5 _Citrobacter_ plasmid genomes. Functions were predicted for 31 of the hypothetical proteins, and 3-D structures were predicted for 11 protein sequences using the PS2 server. Probable functions were assigned using bioinformatics web tools such as CDD-BLAST, INTERPROSCAN, PFAM and COGs, by searching sequence databases for orthologous enzymatic conserved domains in the hypothetical sequences. This study identified many uncharacterized proteins whose roles in _Citrobacter_ plasmids are yet to be discovered. These results for unknown proteins within plasmids can be used in linking the genetic interactions of _Citrobacter_ species and their functions in different environmental conditions.

    Thesauri and Semantic Web: Discussion of the Evolution of Thesauri toward their Integration With the Semantic Web

    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
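    The core of the ISO 25964-to-SKOS comparison is that the classic thesaurus relationships (BT/NT/RT, USE/UF) map onto SKOS properties. A minimal sketch of that standard correspondence, with a hypothetical `ex:` namespace and concept names for illustration:

```python
# Standard mapping from classic thesaurus relationships (ISO 25964
# notation) to the corresponding SKOS properties.
ISO_TO_SKOS = {
    "BT": "skos:broader",     # broader term
    "NT": "skos:narrower",    # narrower term
    "RT": "skos:related",     # associative relationship
    "USE": "skos:prefLabel",  # preferred term of a concept
    "UF": "skos:altLabel",    # non-preferred (entry) term
}

def to_skos(subject, relation, obj):
    """Render one thesaurus relationship as a Turtle-style triple."""
    return f"{subject} {ISO_TO_SKOS[relation]} {obj} ."

print(to_skos("ex:Maize", "BT", "ex:Cereals"))
# -> ex:Maize skos:broader ex:Cereals .
```

    The integrity-rule mismatch the paper discusses shows up here too: a thesaurus forbids a term being both BT and RT of the same term, while SKOS leaves such constraints to the publisher.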

    Development of ListeriaBase and comparative analysis of Listeria monocytogenes

    Background: Listeria consists of both pathogenic and non-pathogenic species. Reports of similarities in genomic content between some pathogenic and non-pathogenic species necessitate investigating these species at the genomic level to understand the evolution of virulence-associated genes. With Listeria genome data growing exponentially, comparative genomic analysis may give better insights into the evolution, genetics and phylogeny of Listeria spp., leading to better management of the diseases they cause. Description: With this motivation, we have developed ListeriaBase, a web-based Listeria genomic resource and analysis platform to facilitate comparative analysis of Listeria spp. ListeriaBase currently houses 850,402 protein-coding genes, 18,113 RNAs and 15,576 tRNAs from 285 genome sequences of different Listeria strains. An AJAX-based real-time search system implemented in ListeriaBase facilitates searching of this large genomic dataset. Our in-house comparative analysis tools, such as the Pairwise Genome Comparison (PGC) tool for comparing two genomes, the Pathogenomics Profiling Tool (PathoProT) for comparing virulence genes, and ListeriaTree for phylogenetic classification, were customized and incorporated into ListeriaBase, facilitating comparative genomic analysis of Listeria spp. Interestingly, we identified a unique genomic feature in the L. monocytogenes genomes in our analysis. The Auto protein sequences of the serotype 4 and the non-serotype 4 strains of L. monocytogenes possessed unique sequence signatures that can differentiate the two groups. We propose that the aut gene may be a potential gene marker for differentiating the serotype 4 strains from other serotypes of L. monocytogenes. Conclusions: ListeriaBase is a useful resource and analysis platform that can facilitate comparative analysis of Listeria for the scientific community. We have successfully demonstrated some key utilities of ListeriaBase. The knowledge obtained in our analyses of L. monocytogenes may be important for future functional studies of this human pathogen. ListeriaBase is currently available at http://listeria.um.edu.my

    Longitudinal analysis of search engine query logs - temporal coverage

    Thesis (Master's), Department of Computer Engineering and the Graduate School of Engineering and Science, Bilkent University, Ankara, 2012. By Oğuz Yılmaz (M.S.). Includes bibliographical references (leaves 53-60).
    The internet is growing day by day and the usage of web search engines is continuously increasing. The start page of a user's browser is typically the home page of a search engine. To navigate to a certain web site, most people prefer to type the site's name into the search engine interface instead of using the browser's address bar. Considering this important role of search engines as the main entry point to the web, we need to understand the web searching trends that emerge over time. We believe that temporal analysis of the query results returned by search engines reveals important insights into the current situation and future directions of web searching. In this thesis, we provide a large-scale analysis of the evolution of query results obtained from a real search engine at two distant points in time, namely in 2007 and 2010, for a set of 630,000 real queries. Our analyses attempt to find answers to several critical questions regarding the evolution of web search results. We believe that this work, being a large-scale longitudinal analysis of query results, sheds some light on those questions.
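    A standard way to quantify how much a query's result set changed between two snapshots is set overlap, e.g. the Jaccard similarity. A minimal sketch (the URLs below are hypothetical, not from the thesis):

```python
# Hypothetical top results for the same query in 2007 and 2010.
results_2007 = {"a.com", "b.com", "c.com", "d.com"}
results_2010 = {"b.com", "c.com", "e.com", "f.com"}

def jaccard(old, new):
    """Jaccard similarity: fraction of results shared between snapshots."""
    return len(old & new) / len(old | new)

print(jaccard(results_2007, results_2010))  # 2 shared out of 6 -> 0.333...
```

    Averaging such a score over a large query set gives one view of how quickly web search results evolve over time.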

    Full-Text Indexing for Heritrix

    It is useful to create personalized web crawls and search through them later, to see the archived content and compare it with current content, revealing the difference and evolution of that portion of the web. It is also useful for searching through the portion of the web you are interested in while offline, without needing to go online. To accomplish this, this project focuses on indexing the archive (ARC) files generated by Heritrix, an open-source web crawler. I developed a Java module to perform indexing on these archive files, used a large set of archive files crawled by Heritrix, and tested the module's indexing performance. I also benchmarked my indexer and compared the results with various other indexers. The index alone is not of much use until we can use it to search through the archives and get search results. To accomplish this, I developed a JSP module that uses an interface for reading archive files to provide search results. As a whole, when combined with Heritrix, this project can be used to perform personalized crawls, store an archive of the crawl, index the archives, and search through those archives.
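    The core data structure behind such full-text search is an inverted index mapping each term to the archived documents containing it. A minimal sketch, with a hypothetical two-page "archive" dictionary standing in for records read out of Heritrix ARC files:

```python
from collections import defaultdict

# Hypothetical archived pages (URL -> extracted text), standing in for
# records parsed out of Heritrix ARC files.
archive = {
    "http://example.com/a": "web crawling and archiving",
    "http://example.com/b": "full text indexing of web archives",
}

def build_index(pages):
    """Build an inverted index: term -> set of URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in text.lower().split():
            index[term].add(url)
    return index

index = build_index(archive)
print(sorted(index["web"]))  # both pages mention "web"
```

    A real indexer adds tokenization, stemming and ranking on top of this, but lookup is still a set operation over per-term posting lists.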