229 research outputs found

    The Hidden Web, XML and Semantic Web: A Scientific Data Management Perspective

    The World Wide Web no longer consists solely of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet, allowing customized markup to be defined for specific domains. On top of XML, the Semantic Web is growing as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great current interest, we then demonstrate how these developments can assist in managing, harvesting, and organizing data on the Web. Along the way, we also illustrate the current research avenues in these domains. We believe that this effort will help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology. Comment: EDBT Tutorial (2011).
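
    As a small concrete illustration of the XML side of this picture, the sketch below parses the kind of domain-specific XML a hidden-web service might return, using only Python's standard library. The sample document, tag names, and record values are invented for illustration and are not taken from the tutorial.

```python
# Minimal sketch: parsing a domain-specific XML response such as a hidden-web
# service might return. The XML structure and values below are invented.
import xml.etree.ElementTree as ET

sample_response = """
<proteinSearchResults>
  <protein id="P69905">
    <name>Hemoglobin subunit alpha</name>
    <organism>Homo sapiens</organism>
  </protein>
  <protein id="P68871">
    <name>Hemoglobin subunit beta</name>
    <organism>Homo sapiens</organism>
  </protein>
</proteinSearchResults>
"""

root = ET.fromstring(sample_response)
for protein in root.findall("protein"):
    # Each record uses a customized markup defined for this (hypothetical) domain.
    print(protein.get("id"), protein.findtext("name"), protein.findtext("organism"))
```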

    Query-related data extraction of hidden web documents

    The greater part of the information on the Web is stored in document databases and is not indexed by general-purpose search engines (such as Google and Yahoo). Such information is dynamically generated by querying databases, which are referred to as Hidden Web databases. Documents returned in response to a user query are typically presented as template-generated Web pages. This paper proposes a novel approach that identifies Web page templates by analysing the textual contents and the adjacent tag structures of a document in order to extract query-related data. Preliminary results demonstrate that our approach effectively detects templates and retrieves data with high recall and precision.
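
    To make the underlying idea concrete, here is a minimal sketch (our own simplification, not the authors' implementation): text segments whose content and enclosing tag repeat across result pages are treated as template text, and the remaining segments are treated as query-related data. The example pages are invented.

```python
# Simplified template-detection sketch using only the Python standard library.
from html.parser import HTMLParser
from collections import Counter

class TextWithTags(HTMLParser):
    """Collect (enclosing-tag, text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.segments = []
    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
    def handle_data(self, data):
        text = data.strip()
        if text:
            enclosing = self.stack[-1] if self.stack else None
            self.segments.append((enclosing, text))

def segments(html):
    parser = TextWithTags()
    parser.feed(html)
    return parser.segments

# Two pages generated from the same (invented) template for different queries.
page1 = "<html><body><h2>Results</h2><p class='hit'>Paper on deep web crawling</p></body></html>"
page2 = "<html><body><h2>Results</h2><p class='hit'>Paper on XML data management</p></body></html>"

counts = Counter(segments(page1) + segments(page2))
template = {seg for seg, n in counts.items() if n > 1}   # repeats across pages -> template
data = [text for (tag, text) in segments(page1) if (tag, text) not in template]
print("template text:", [text for _, text in template])
print("query-related data:", data)
```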

    Information sources and data discovery

    Information extraction from template-generated hidden web documents

    The greater part of the information on the Web is stored in document databases and is not indexed by general-purpose search engines (such as Google and Yahoo). Databases dynamically generate a list of documents in response to a user query; these are referred to as Hidden Web databases. Such documents are typically presented to users as template-generated Web pages. This paper presents a new approach that identifies Web page templates in order to extract query-related information from documents. We propose two forms of representation for analysing the content of a document: Text with Immediate Adjacent Tag Segments (TIATS) and Text with Neighbouring Adjacent Tag Segments (TNATS). Our techniques exploit the tag structures that surround the textual contents of documents in order to detect Web page templates and thereby extract query-related information. Experimental results demonstrate that TNATS detects Web page templates most effectively and extracts information with high recall and precision.
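
    The following sketch illustrates one plausible reading of the two representations: TIATS pairs each text segment with only its immediately adjacent tags, while TNATS pairs it with the full tag segments up to the previous and next text segments. The tokenisation and the exact tuple shapes are simplifying assumptions, not the paper's code.

```python
# Illustrative TIATS/TNATS construction over a flat token stream (assumption-based).
import re

def tokenize(html):
    """Split HTML into a flat list of tag tokens and text tokens."""
    return [t.strip() for t in re.split(r"(<[^>]+>)", html) if t.strip()]

def representations(html):
    tokens = tokenize(html)
    tiats, tnats = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("<"):
            continue  # skip tag tokens; we build one tuple per text segment
        # Tag segment between the previous text segment and this one.
        before = []
        for t in reversed(tokens[:i]):
            if not t.startswith("<"):
                break
            before.insert(0, t)
        # Tag segment between this text segment and the next one.
        after = []
        for t in tokens[i + 1:]:
            if not t.startswith("<"):
                break
            after.append(t)
        tiats.append((before[-1:], tok, after[:1]))  # only the immediately adjacent tags
        tnats.append((before, tok, after))           # all neighbouring adjacent tags
    return tiats, tnats

html = "<div><h3>Title</h3><p><b>Author:</b> J. Smith</p></div>"
tiats, tnats = representations(html)
print("TIATS:", tiats)
print("TNATS:", tnats)
```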

    Technique for proficiently yielding Deep-Web Interfaces Using Smart Crawler

    Nowadays, the Web and the Internet have grown enormously, and with that growth comes a greater need for techniques that improve the effectiveness of locating deep-web interfaces. A web crawler is a program that surfs the World Wide Web automatically; this activity is also called Web crawling or spidering. In the proposed system, the first phase of Smart Crawler performs site-based searching for center pages with the help of search engines, avoiding the cost of visiting a large number of pages. Accurate results are obtained by focusing the crawl: websites are ranked so that highly relevant ones are prioritised, and fast in-site searching is achieved by selecting the most suitable links with an adaptive link-ranking. Deep-web databases are often not linked to any web search tool, are sparsely distributed, and change constantly. This problem is addressed by two kinds of crawlers: generic crawlers and focused crawlers. Generic crawlers fetch every form that can be found and do not concentrate on a specific topic. Focused crawlers, such as the Form-Focused Crawler (FFC) and the Adaptive Crawler for Hidden-web Entries (ACHE), can keep searching for online databases on a specific topic. FFC is designed with link, page, and form classifiers for focused crawling of web forms, and ACHE extends it with additional components for form filtering and an adaptive link learner. This system uses a Naive Bayes classifier instead of SVM for the searchable form classifier (SFC) and the domain-specific form classifier (DSFC). In machine learning, Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. The proposed system also contributes a novel user-login module that restricts crawling of a particular domain to authorised users, based on data provided by the client, and that is also used for filtering the results; a sketch of the classification step appears below. The system additionally implements the concepts of pre-query and post-query: pre-query operates only on the form and the pages that contain it, while post-query uses data collected from the results of form submissions.
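
    As an illustration of the classification step described above, the sketch below trains a Naive Bayes classifier to separate searchable from non-searchable forms using a bag of words over form markup. The training examples, the feature choice, and the scikit-learn pipeline are assumptions for illustration, not the system's actual components or data.

```python
# Hedged sketch of a searchable-form classifier with Naive Bayes (invented toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: textual features scraped from <form> elements (invented).
forms = [
    "input text name=q placeholder search books submit search",
    "select author title keyword input text submit find",
    "input email name=email input password name=pwd submit login",
    "input text name=username input password submit sign in",
]
labels = ["searchable", "searchable", "non-searchable", "non-searchable"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(forms, labels)

new_form = "input text name=query select category submit search"
print(classifier.predict([new_form])[0])  # expected: "searchable"
```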

    Optimising metadata to make high-value content more accessible to Google users

    Purpose: This paper shows how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google. Methodology/approach: The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines. Findings/practical implications: The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with Internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high-quality metadata. Originality/value: The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
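
    By way of illustration only, the fragment below shows one simple way catalogue metadata could be surfaced in a collection page's HTML head so that search engines can index it. The record fields and the tags chosen are assumptions, not the paper's specific "data shoogling" recipe.

```python
# Illustrative sketch: rendering descriptive head elements from a metadata record.
from html import escape

record = {
    "title": "Glasgow trams, 1898-1962",
    "description": "Photographs and captions documenting Glasgow's tram network.",
    "subject": ["Glasgow", "trams", "public transport history"],
}

def head_fragment(rec):
    """Render a descriptive <title> and <meta> elements from a metadata record."""
    lines = [
        f"<title>{escape(rec['title'])}</title>",
        f'<meta name="description" content="{escape(rec["description"])}">',
        f'<meta name="keywords" content="{escape(", ".join(rec["subject"]))}">',
    ]
    return "\n".join(lines)

print(head_fragment(record))
```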

    Our connection to knowledge and the tools of pedagogical intelligence

    Title from title screen (viewed 22 April 2009). Includes bibliography and webography.

    Vision-Based Deep Web Data Extraction For Web Document Clustering

    The design of web information extraction systems is becoming more complex and time-consuming, and detecting the data region is a significant problem in extracting information from a web page. In this paper, an approach to vision-based deep web data extraction is proposed for web document clustering. The proposed approach comprises two phases: (1) vision-based web data extraction and (2) web document clustering. In phase 1, the web page is segmented into various chunks, from which surplus noise and duplicate chunks are removed using three parameters: hyperlink percentage, noise score, and cosine similarity. Finally, the extracted keywords are subjected to web document clustering using Fuzzy c-means (FCM) clustering.
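
    A rough sketch of the chunk-filtering step is given below. The thresholds, the simplified chunk records, and the omission of the noise score and of the FCM clustering stage are our own assumptions for illustration, not the paper's implementation.

```python
# Chunk filtering sketch: drop link-dominated chunks, then drop near-duplicates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    {"text": "Home | About | Contact | Sitemap", "link_chars": 30, "total_chars": 32},
    {"text": "Deep web data extraction segments a page into visual chunks.", "link_chars": 0, "total_chars": 61},
    {"text": "Deep web data extraction segments a page into visual chunks!", "link_chars": 0, "total_chars": 61},
]

# 1) Hyperlink percentage: drop chunks dominated by hyperlink text (navigation noise).
content = [c for c in chunks if c["link_chars"] / c["total_chars"] < 0.5]

# 2) Cosine similarity: drop near-duplicate chunks by pairwise TF-IDF similarity.
texts = [c["text"] for c in content]
sims = cosine_similarity(TfidfVectorizer().fit_transform(texts))
kept = []
for i, text in enumerate(texts):
    if all(sims[i][j] < 0.9 for j in kept):
        kept.append(i)

print([texts[i] for i in kept])
```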