
    XML data integrity based on concatenated hash function

    Data integrity is fundamental to data authentication. A major problem for XML data authentication is that signed XML data can be copied into another document while its signature remains valid. This is caused by inadequate XML data integrity protection. Through investigation, the paper finds that besides data content integrity, XML data integrity should also protect element location information and context referential integrity under fine-grained security. The aim of this paper is to propose a model for XML data integrity that accounts for the features of XML data. The paper presents an XML data integrity model named CSR (content integrity, structure integrity, context referential integrity) based on a concatenated hash function. XML data content integrity is ensured using an iterative hash process, structure integrity is protected by hashing the absolute path string from the root node, and context referential integrity is ensured by protecting context-related elements. The presented model satisfies integrity requirements under fine-grained security and is compatible with XML Signature. Evaluation shows that the presented integrity model generates digest values more efficiently than the Merkle hash tree-based integrity model for XML data.
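    The abstract's three-part construction (iterative content hash, hashed absolute path, folded-in context references) can be illustrated with a minimal sketch. This is an assumption-laden reading of the CSR idea, not the paper's actual algorithm: the function names, SHA-256 choice, and the way content pieces and context digests are chained are all hypothetical.

    ```python
    import hashlib

    def _h(data: bytes) -> bytes:
        # Hash primitive (SHA-256 is an assumed choice; the paper may use another).
        return hashlib.sha256(data).digest()

    def element_digest(abs_path: str, content_pieces, context_refs=()):
        """Hypothetical CSR-style digest for one XML element.

        abs_path      -- absolute path string from the root node (structure integrity)
        content_pieces -- iterable of text chunks (content integrity, iterative hash)
        context_refs  -- digests of context-related elements (referential integrity)
        """
        # Structure integrity: hash the absolute path from the root node,
        # so moving the element to another location changes the digest.
        structure = _h(abs_path.encode("utf-8"))

        # Content integrity: iterative (chained) hash over the content pieces.
        content = b""
        for piece in content_pieces:
            content = _h(content + piece.encode("utf-8"))

        # Concatenate, then fold in context-related digests.
        digest = structure + content
        for ref in context_refs:
            digest = _h(digest + ref)
        return _h(digest)
    ```

    Under this sketch, the same content signed at a different path yields a different digest, which is exactly what blocks the copy-to-another-document attack the abstract describes.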

    Analysis of PKI as a Means of Securing ODF Documents

    Public Key Infrastructure (PKI) has for the last two decades been a means of securing systems and communication. With the adoption of Open Document Format (ODF) as an ISO standard, the question remains whether the unpopular, expensive, complex and hard-to-maintain PKI can prove to be a viable means of securing ODF documents. This paper analyses the drawbacks of PKI and evaluates the usefulness of PKI in provisioning robust, cheap and maintainable XML security for XML-based ODF. The paper also evaluates the existing research on XML security, more specifically fine-grained access control.

    JISC Preservation of Web Resources (PoWR) Handbook

    Handbook of Web Preservation produced by the JISC-PoWR project, which ran from April to November 2008. The handbook specifically addresses digital preservation issues relevant to the UK HE/FE web management community. The project was undertaken jointly by UKOLN at the University of Bath and the ULCC Digital Archives department.

    New Media, Professional Sport and Political Economy

    New media technologies are seen to be changing the production, delivery and consumption of professional sports and creating a new dynamic between sports fans, athletes, clubs, governing bodies and the mainstream media. However, as Bellamy and McChesney (2011) have pointed out, advances in digital technologies are taking place within social, political and economic contexts that strongly condition the course and shape of this communication revolution. This essay assesses the first wave of research on professional sport and new media technologies and concludes that early trends indicate the continuation of existing neoliberal capitalist tendencies within professional sport. Using the concept of political economy, the essay explores issues of ownership, structure, production and delivery of sport. Discussion focuses on the opportunities now available to sports fans and on how sports organizations and media corporations shifted from an initial position of uncertainty, bordering on hostility, to one of embracing new media technologies as powerful marketing tools. The essay concludes that the issues of ownership and control are fundamental and advocates that greater cognizance be accorded to underlying economic structures and to the enduring, all-pervasive power of neoliberal capitalism and its impact on professional sport.

    BlogForever: D2.5 Weblog Spam Filtering Report and Associated Methodology

    This report is written as a first attempt to define the BlogForever spam detection strategy. It comprises a survey of weblog spam technology and approaches to its detection. While the report was written to help identify possible approaches to spam detection as a component within the BlogForever software, the discussion has been extended to include observations on the historical, social and practical value of spam, and proposals for other ways of dealing with spam within the repository without necessarily removing it. The report contains a general overview of spam types, ready-made anti-spam APIs available for weblogs, possible methods that have been suggested for preventing the introduction of spam into a blog, and research on spam with a focus on the weblog context, concluding with a proposal for a spam detection workflow that might form the basis of the spam detection component of the BlogForever software.
