    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities for extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
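
    As a rough illustration of the kind of structured-data extraction discussed in the report, the sketch below pulls schema.org microdata and hAtom microformat fields out of a blog post's HTML. The sample markup, the chosen property names and the use of BeautifulSoup are assumptions for illustration, not the report's own implementation.

        # Minimal sketch: extracting schema.org microdata and hAtom microformat
        # fields from a blog post's HTML. The sample markup is hypothetical.
        from bs4 import BeautifulSoup

        html = """
        <article class="hentry" itemscope itemtype="http://schema.org/BlogPosting">
          <h1 class="entry-title" itemprop="headline">A post about preservation</h1>
          <time class="published" itemprop="datePublished" datetime="2012-05-01">1 May 2012</time>
          <div class="entry-content" itemprop="articleBody">Body text of the post.</div>
        </article>
        """

        soup = BeautifulSoup(html, "html.parser")

        # Microdata: collect every itemprop/value pair found in the markup.
        microdata = {tag["itemprop"]: tag.get_text(strip=True)
                     for tag in soup.find_all(attrs={"itemprop": True})}

        # Microformats (hAtom): the same fields exposed through class names.
        hatom = {cls: el.get_text(strip=True)
                 for cls in ("entry-title", "published", "entry-content")
                 for el in soup.select("." + cls)}

        print(microdata)
        print(hatom)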

    BlogForever: D3.1 Preservation Strategy Report

    This report describes preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place and what exactly it is that we are trying to preserve. We then review past and present work and highlight why current practices in web archiving do not adequately address the needs of weblog preservation. We make three distinct contributions in this volume: a) we propose transferable, practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository; b) we provide an automated, community-based approach to identifying significant properties of weblog content and discuss how this affects previous strategies; c) we propose a sustainability plan that draws upon community knowledge through innovative repository design.

    Transforming Web Data Into Knowledge - Implications for Management

    Much of one's online behavior, including browsing, shopping, and posting, is recorded daily in databases on companies' computers. These data sets are referred to as web data. Within them lie patterns that indicate a person's interests, habits, preferences, and behaviors. More useful than any individual indicator, however, is when a company records data on all of its users and gains insight into their collective habits and tendencies. Detecting and interpreting such patterns can help managers make informed decisions and serve their customers better. Applying data mining to web data is said to turn it into web knowledge. The study conducted in this paper demonstrates, on the one hand, how data mining methods and models can be applied to web-based forms of data and, on the other, what the implications of uncovering patterns in web content, structure, and usage are for management.
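
    To make the step from raw web data to actionable patterns concrete, here is a minimal, hypothetical sketch that counts which pages most often precede a purchase in a toy clickstream; the records, the field layout and the focus on checkout sessions are invented for illustration and are not taken from the paper.

        # Minimal sketch: mining a (hypothetical) clickstream for pages that
        # commonly precede a purchase. Real web usage mining works on far
        # larger logs and richer models, but the principle is the same.
        from collections import Counter

        clickstream = [  # (user, ordered list of visited pages)
            ("u1", ["home", "search", "product/42", "cart", "checkout"]),
            ("u2", ["home", "product/42", "reviews", "checkout"]),
            ("u3", ["home", "search", "product/7", "cart"]),
        ]

        pages_before_checkout = Counter()
        for _, pages in clickstream:
            if "checkout" in pages:
                idx = pages.index("checkout")
                pages_before_checkout.update(pages[:idx])

        # Pages most associated with sessions that end in a purchase.
        print(pages_before_checkout.most_common(3))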

    Web accessibility diagnosis, improvement and maintenance

    Context: This thesis examines how organisations create and maintain their web pages, with a particular focus on ensuring pages are accessible. It also investigates the potential for using a Tree-Map based tool to support such web maintenance and process improvement. Novel process improvement recommendations are given and an adaptation of a class web publishing model is presented. Methods: To supplement a review of current literature, 20 accessibility specialists and 79 large organisations were surveyed. This identified web accessibility best practices and whether these practices were implemented in reality. A subsequent assessment of the accessibility of each organisation's web site tested whether certain activities could be linked with better accessibility. Finally, a controlled experiment tested the accuracy and efficiency of a Tree-Map based tool for web maintenance. Results: The survey results suggested widely varying web accessibility awareness amongst web developers and accessibility specialists. Best practice appeared to be implemented by many organisations, with the exception of training provision. It was found that when the best practices aimed specifically at web accessibility were implemented, there was a significant improvement in web accessibility. The Tree-Map based tool proved to be more efficient than, and as accurate as, a report-based tool for web maintenance activities. Conclusions of the study: Web accessibility awareness is now reasonably high amongst web developers, but the extent to which it is addressed varies. Organisations which take a systematic and mature approach to accessibility have more accessible web sites. As such, accessibility should be integrated into web publishing. Better tools are also required to facilitate this systematic integration.
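
    The thesis is about processes and tooling rather than a single algorithm, but a toy automated check such as the sketch below (flagging images without alt text in hypothetical markup) gives a flavour of the kind of diagnosis that accessibility tools automate; it is not the Tree-Map tool studied in the thesis.

        # Minimal sketch: a toy accessibility check that flags <img> elements
        # lacking alt text. Real accessibility assessment (e.g. against WCAG)
        # covers far more criteria; the markup here is hypothetical.
        from bs4 import BeautifulSoup

        html = """
        <body>
          <img src="logo.png" alt="Company logo">
          <img src="banner.png">
          <a href="/contact"><img src="mail.gif"></a>
        </body>
        """

        soup = BeautifulSoup(html, "html.parser")
        missing_alt = [img.get("src") for img in soup.find_all("img")
                       if not img.get("alt")]

        print("Images missing alt text:", missing_alt)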

    BlogForever D2.4: Weblog spider prototype and associated methodology

    The purpose of this document is to present the evaluation of different solutions for capturing blogs, the established methodology, and the developed blog spider prototype.
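
    The abstract gives no implementation detail, but a minimal blog-capture loop in the spirit of such a spider might look like the sketch below: parse an RSS feed, then fetch each linked post. The feed URL is hypothetical and no BlogForever code is implied.

        # Minimal sketch of a blog-capture loop: parse an RSS feed, then fetch
        # each linked post. The feed URL is hypothetical; a real spider adds
        # scheduling, deduplication, politeness, content parsing, and so on.
        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "https://example-blog.org/feed.xml"  # hypothetical

        with urllib.request.urlopen(FEED_URL) as resp:
            feed = ET.fromstring(resp.read())

        for item in feed.iter("item"):          # RSS 2.0 <item> elements
            title = item.findtext("title", default="")
            link = item.findtext("link")
            if not link:
                continue
            with urllib.request.urlopen(link) as page:
                html = page.read()
            print(f"captured {title!r}: {len(html)} bytes from {link}")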

    Internet based molecular collaborative and publishing tools

    The scientific electronic publishing model has hitherto been an Internet-based delivery of electronic articles that are essentially replicas of their paper counterparts. They contain little in the way of added semantics that may better expose the science, assist the peer review process and facilitate follow-on collaborations, even though the enabling technologies have been around for some time and are mature. This thesis will examine the evolution of chemical electronic publishing over the past 15 years. It will illustrate, with the help of two frameworks, how publishers should be exploiting technologies to improve the semantics of chemical journal articles, namely their value-added features and relationships with other chemical resources on the Web. The first framework is an early exemplar of structured and scalable electronic publishing where a Web content management system and a molecular database are integrated. It employs a test bed of articles from several RSC journals and supporting molecular coordinate and connectivity information. The value of converting 3D molecular expressions in chemical file formats, such as the MOL file, into more generic 3D graphics formats, such as Web3D, is assessed. This exemplar highlights the use of metadata management for bidirectional hyperlink maintenance in electronic publishing. The second framework repurposes this metadata management concept into a Semantic Web application called SemanticEye. SemanticEye demonstrates how relationships between chemical electronic articles and other chemical resources are established. It adapts the successful semantic model used for digital music metadata management by popular applications such as iTunes. Globally unique identifiers enable relationships to be established between articles and other resources on the Web, and SemanticEye implements two: the Digital Object Identifier (DOI) for articles and the IUPAC International Chemical Identifier (InChI) for molecules. SemanticEye's potential as a framework for seeding collaborations between researchers, who have hitherto never met, is explored using FOAF, the friend-of-a-friend Semantic Web standard for social networks.
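
    As an illustration of the core idea behind SemanticEye as described above, linking an article to a molecule through globally unique identifiers, the sketch below records a DOI-to-InChI relationship as RDF triples with rdflib; the DOI, the vocabulary namespace and the predicate names are made up for the example and are not taken from SemanticEye itself.

        # Minimal sketch: linking an article (via its DOI) to a molecule (via
        # its InChI) as RDF triples using rdflib. The DOI, the namespace and
        # the "mentions" predicate are illustrative only.
        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/vocab/")          # hypothetical vocabulary
        article = URIRef("https://doi.org/10.1000/example.doi")
        molecule = URIRef("http://example.org/molecule/benzene")

        g = Graph()
        g.bind("ex", EX)
        g.add((article, EX.mentions, molecule))
        g.add((molecule, EX.inchi, Literal("InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H")))

        print(g.serialize(format="turtle"))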

    PaaS Cloud Service for Cost-Effective Harvesting, Processing and Linking of Unstructured Open Government Data

    The aim of this project is to develop a cloud platform service for transforming Open Government Data into Linked Open Government Data. The service receives as input a log file, created by a web crawler, containing over 3,000,000 URLs, each pointing to an open document. It opens each document, reads its content and, using Estnltk ("Open source tools for Estonian natural language processing"), finds the names of locations, organisations and people. Using the Python library RDFlib, these names are added to a Resource Description Framework (RDF) graph, so that each name becomes linked to the URLs of the documents that mention it. To archive the state of each document at the time it was accessed, the service downloads all processed documents. The service also re-checks the already processed documents monthly in order to detect changes and generate new RDF relations where needed. The generated RDF graphs are publicly available, and the service includes a SPARQL endpoint for users (through a graphical user interface) and machines (through web services) for cost-effective querying of linked entities from the RDF files. An important challenge is performance, because the documents behind these 3+ million URLs may be large. To address this, the service runs processes in parallel wherever possible, using several Google Compute Engine virtual machines and all CPUs in each virtual machine.
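
    A compressed, hypothetical sketch of the core pipeline step described above (take a crawled document, extract named entities, and link them to the source URL in an RDF graph) is shown below; the entity-extraction function is only a stand-in for Estnltk, and the vocabulary, URL and sample text are invented.

        # Minimal sketch of one pipeline step: given a crawled document and its
        # URL, extract named entities (stand-in for Estnltk) and link each
        # entity to the source URL in an RDF graph using rdflib.
        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/ogd/")   # hypothetical vocabulary


        def extract_entities(text):
            """Placeholder for Estnltk-based NER over Estonian text."""
            return [("Tartu Linnavalitsus", "ORG"), ("Tartu", "LOC")]  # dummy output


        def add_document(graph, url, text):
            doc = URIRef(url)
            for name, kind in extract_entities(text):
                entity = EX[name.replace(" ", "_")]
                graph.add((entity, EX.label, Literal(name)))
                graph.add((entity, EX.entityType, Literal(kind)))
                graph.add((entity, EX.mentionedIn, doc))


        g = Graph()
        add_document(g, "https://example.gov.ee/document-1.html",  # hypothetical URL
                     "Tartu Linnavalitsus avaldas uue dokumendi Tartu kohta.")
        print(g.serialize(format="turtle"))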

    A Model for Managing Information Flow on the World Wide Web

    This thesis considers the nature of information management on the World Wide Web. The web has evolved into a global information system that is completely unregulated, permitting anyone to publish whatever information they wish. However, this information is almost entirely unmanaged, which, together with the enormous number of users who access it, places considerable strain on the web's architecture. This has led to the exposure of inherent flaws, which reduce its effectiveness as an information system. The thesis presents a thorough analysis of the state of this architecture and identifies three flaws that could render the web unusable: link rot; a shrinking namespace; and the inevitable increase of noise in the system. A critical examination of existing solutions to these flaws is provided, together with a discussion of why those solutions have not been deployed or adopted. The thesis determines that they have failed to take into account the nature of the information flow between information provider and consumer, or the open philosophy of the web. The overall aim of the research has therefore been to design a new solution to these flaws, based on a greater understanding of the nature of the information that flows across the web. The realization of this objective has included the development of a new model for managing information flow on the web, which is used to develop a solution to the flaws. The solution comprises three new additions to the web's architecture: a temporal referencing scheme; an Oracle Server Network for more effective web browsing; and a Resource Locator Service, which provides automatic, transparent resource migration. The thesis describes their design and operation, and presents the concept of the Request Router, which provides a new way of integrating such distributed systems into the web's existing architecture without breaking it. The design of the Resource Locator Service, including the development of new protocols for resource migration, is covered in great detail, and a prototype system developed to prove the effectiveness of the design is presented. The design is further validated by comprehensive performance measurements of the prototype, which show that it will scale to manage a web whose size is orders of magnitude greater than it is today.
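
    As a tiny illustration of the link-rot problem the thesis diagnoses (not of its proposed Resource Locator Service), the sketch below probes a list of hypothetical URLs and reports those that no longer resolve.

        # Minimal sketch: detecting link rot by probing URLs with HEAD requests.
        # This only illustrates the problem the thesis addresses; it is not the
        # thesis's Resource Locator Service. The URLs are hypothetical.
        import urllib.error
        import urllib.request

        urls = [
            "https://example.org/alive-page",
            "https://example.org/moved-or-deleted-page",
        ]

        for url in urls:
            req = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    print(f"{url}: OK ({resp.status})")
            except (urllib.error.HTTPError, urllib.error.URLError) as exc:
                print(f"{url}: possibly rotten ({exc})")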

    Interoperability of semantics in news production
