    On the Turing completeness of the Semantic Web

    The evidenced fact that “Linking is as powerful as computing” in a dynamic web context has led to evaluating the Turing completeness of hypertext systems on the basis of their linking model. The same evaluation can be applied to the Semantic Web domain. RDF is the default data model for Semantic Web links, so the evaluation comes down to whether or not RDF can support the required computational power at the linking level. RDF represents semantic relationships by explicitly naming the participating triples; however, enumeration is only one method among many for representing relations, and not always the most efficient or viable one. In this paper we first argue that Turing completeness of binary-linked hypertext is realized if and only if the links are dynamic (functional). Ashman’s Binary Relation Model (BRM) showed that binary relations can most usefully be represented with Mili’s pE (predicate-expression) representation, and Moreau and Hall concluded that hypertext systems which use the pE representation as the basis for their linking (relation) activities are Turing-complete. Second, we argue that RDF, as it stands, is a static version of a general ternary relations model, called TRM. We then conclude that the current computing power of the Semantic Web depends on the dynamicity supported by its underlying TRM. The value of this is twofold: first, RDF’s triples can be considered within a framework and compared to alternatives, such as the TRM version of pE, designated pfE (predicate-function-expression); second, a system whose relations are represented with pfE is likewise Turing-complete. Thus moving from RDF to a pfE representation of relations would give far greater power and flexibility within Semantic Web applications.
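    To make the distinction concrete, here is a minimal sketch (my own illustration, not code from the paper) contrasting an RDF-style relation, which exists only as an explicitly enumerated set of triples, with a pfE-style relation, whose objects are computed on demand by a function. All identifiers are invented.

    ```python
    # RDF-style: the relation exists only as an explicit enumeration of triples.
    static_triples = {
        ("doc1", "linksTo", "doc2"),
        ("doc2", "linksTo", "doc3"),
    }

    def static_objects(subject, predicate):
        """Answer queries by enumerating the stored triples."""
        return {o for (s, p, o) in static_triples if s == subject and p == predicate}

    # pfE-style: the predicate is paired with a function, so the set of facts
    # is open-ended and computed at query time rather than stored.
    def links_to(subject):
        return {subject + "/next"}  # any computation could stand here

    dynamic_relations = {"linksTo": links_to}

    def dynamic_objects(subject, predicate):
        return dynamic_relations[predicate](subject)

    print(static_objects("doc1", "linksTo"))   # {'doc2'}
    print(dynamic_objects("doc1", "linksTo"))  # {'doc1/next'}
    ```

    Because the function may perform arbitrary computation, it is this functional (dynamic) character of the links that carries the computational power the paper attributes to pfE.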

    Emergency and on-demand health care: modelling a large complex system

    This paper describes how system dynamics was used as a central part of a whole-system review of emergency and on-demand health care in Nottingham, England. Based on interviews with 30 key individuals across health and social care, a 'conceptual map' of the system was developed, showing potential patient pathways through the system. This was used to construct a stock-flow model, populated with current activity data, in order to simulate patient flows and to identify system bottlenecks. Without intervention, assuming current trends continue, Nottingham hospitals are unlikely to reach elective admission targets or achieve the government target of 82% bed occupancy. Admissions from general practice had the greatest influence on occupancy rates. Preventing a small number of emergency admissions in elderly patients showed a substantial effect, reducing bed occupancy by 1% per annum over 5 years. Modelling indicated a range of undesirable outcomes associated with continued growth in demand for emergency care, but also considerable potential to intervene to alleviate these problems, in particular by increasing the care options available in the community.
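    The modelling approach can be illustrated with a deliberately tiny stock-flow simulation; the parameters below are invented placeholders, not the Nottingham activity data.

    ```python
    # Toy stock-flow model of bed occupancy: beds are the stock, admissions
    # and discharges the flows, stepped in daily increments. All numbers are
    # hypothetical, chosen only to show the mechanics.
    TOTAL_BEDS = 1000
    occupied = 850.0              # initial occupied beds (hypothetical)

    emergency_admissions = 95.0   # patients/day (hypothetical)
    elective_admissions = 40.0    # patients/day (hypothetical)
    mean_stay_days = 6.5          # discharges approximated as occupied / mean stay

    for day in range(365):
        discharges = occupied / mean_stay_days
        admissions = emergency_admissions + elective_admissions
        # The stock cannot exceed capacity; hitting the cap marks a bottleneck.
        occupied = min(TOTAL_BEDS, occupied + admissions - discharges)

    print(f"occupancy after one year: {occupied / TOTAL_BEDS:.0%}")
    ```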

    Are we talking about the same structure? A unified approach to hypertext links, XML, RDF and ZigZag

    There are many different hypertext systems and paradigms, each with their apparent advantages. However, the distinctions are perhaps not as significant as they seem. If we can reduce the core linking functionality to some common structure that allows us to consider hypertext systems within a common model, we can identify what, if anything, distinguishes hypertext systems from each other. This paper offers such a common structure, showing the conceptual similarities between each of these systems and paradigms.
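    As a rough illustration of what such a common structure might look like (my sketch, not the paper's formalism), a single (source, relation, target) record can express an HTML link, an RDF triple and a ZigZag connection alike:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Link:
        source: str    # anchor, subject, or cell
        relation: str  # link type, predicate, or dimension
        target: str    # destination, object, or neighbouring cell

    links = [
        Link("page.html#a1", "href", "other.html"),  # hypertext link
        Link("ex:Alice", "foaf:knows", "ex:Bob"),    # RDF triple
        Link("cell-7", "dim:chronology", "cell-8"),  # ZigZag connection
    ]

    def neighbours(node, relation):
        """Traverse the structure the same way, whatever the paradigm."""
        return [l.target for l in links if l.source == node and l.relation == relation]

    print(neighbours("ex:Alice", "foaf:knows"))  # ['ex:Bob']
    ```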

    The Cyclical Behaviour of the IPO Market in Australia

    Initial public offerings have typically been examined in the context of the short-term and long-run stock price performance of individual issues. In this paper, the aggregate market for IPOs is examined. There has been some prior suggestion that the IPO market exhibits cyclical patterns characterised by a high volume of new issues and substantial underpricing, such that 'hot issue periods' exist. This paper tests for the existence of such periods in the Australian market using a Markov regime-switching model on a variety of constructed IPO activity measures. The results demonstrate that hot periods do exist but that they do not possess homogeneous features. A number of distinguishing features are also identified between industrial and resource sector IPOs. Further, a lead-lag relationship is identified for the industrial sector, such that underpricing leads IPO volume for up to six months. The paper offers explanations for these findings that appear to be related to general stock market conditions and regulatory features.
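    A regime-switching analysis of this kind can be sketched with statsmodels' MarkovRegression; the series below is synthetic, not the paper's Australian IPO data, and which fitted regime counts as "hot" must be read off from the estimated means.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    # Synthetic monthly IPO counts: a low-volume spell, a "hot" burst, then
    # low volume again (invented data, for illustration only).
    volume = np.concatenate([
        rng.poisson(3, 60),
        rng.poisson(15, 24),
        rng.poisson(3, 36),
    ]).astype(float)

    model = sm.tsa.MarkovRegression(volume, k_regimes=2, switching_variance=True)
    result = model.fit()

    # Regime labels are arbitrary: check the summary for which regime has the
    # higher mean (the hot-issue regime) before reading its probabilities.
    print(result.summary())
    probs = result.smoothed_marginal_probabilities  # one series per regime
    ```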

    Optimized reprocessing of documents using stored processor state

    Variable Data Printing (VDP) allows customised versions of material such as advertising flyers to be readily produced. However, VDP is often extremely demanding of computing resources because, even when much of the material stays invariant from one document instance to the next, it is often simpler to re-evaluate the page completely rather than identifying just the portions that vary. In this paper we explore, in an XML/XSLT/SVG workflow and in an editing context, the reduction of the processing burden that can be realised by selectively reprocessing only the variant parts of the document. We introduce a method of partial re-evaluation that relies on re-engineering an existing XSLT parser to handle, at each XML tree node, both the storage and restoration of state for the underlying document processing framework. Quantitative results are presented for the magnitude of the speed-ups that can be achieved. We also consider how changes made through an appearance-based interactive editing scheme for VDP documents can be automatically reflected in the document view via optimised XSLT re-evaluation of sub-trees that are affected either by the changed script or by altered data.
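    The general idea of partial re-evaluation can be sketched generically (this is an illustration in Python, not the authors' re-engineered XSLT processor): cache each subtree's output keyed by its content, so a later pass re-evaluates only the subtrees that actually changed.

    ```python
    import hashlib
    import xml.etree.ElementTree as ET

    cache = {}  # subtree digest -> stored rendering

    def digest(node):
        # Hash the serialised subtree: any change below the node changes the key.
        return hashlib.sha256(ET.tostring(node)).hexdigest()

    def render(node):
        """Stand-in for an expensive per-node transformation."""
        key = digest(node)
        if key in cache:
            return cache[key]  # unchanged subtree: restore the stored result
        body = "".join(render(child) for child in node)
        out = f"<{node.tag}>{node.text or ''}{body}</{node.tag}>"
        cache[key] = out
        return out

    doc = ET.fromstring("<page><name>Alice</name><terms>small print</terms></page>")
    render(doc)                    # first instance: every node is evaluated
    doc.find("name").text = "Bob"  # variable data changes between instances
    render(doc)                    # only <name> and its ancestors are re-rendered
    ```

    In a VDP run the invariant subtrees hash identically from one document instance to the next, so only the variable fields and their ancestors incur any processing cost.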

    Tracking sub-page components in document workflows

    Documents go through numerous transformations and intermediate formats as they are processed from abstract markup into final printable form. This notion of a document workflow is well established, but it is common to find that ideas about document components, which might exist in the source code for the document, become completely lost within an amorphous, unstructured page of PDF prior to being rendered. Given the importance of a component-based approach in Variable Data Printing (VDP), we have developed a collection of tools that allow information about the various transformations to be embedded at each stage in the workflow, together with a visualization tool that uses this embedded information to display the relationships between the various intermediate documents. In this paper, we demonstrate these tools in the context of an example document workflow, but the techniques described are widely applicable and would be easily adaptable to other workflows and for use in teaching tools to illustrate document component and VDP concepts.
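    One simple way to realise this kind of tracking (a hypothetical sketch, not the authors' tool set) is to stamp every source component with a persistent identifier that each workflow stage copies forward, building up a lineage record that a visualization tool could display.

    ```python
    import uuid
    import xml.etree.ElementTree as ET

    def stamp(tree):
        """Give every source element a persistent workflow identifier."""
        for el in tree.iter():
            el.set("wf-id", uuid.uuid4().hex[:8])
        return tree

    def transform(el, lineage):
        """Toy workflow stage: rename tags but carry each wf-id forward."""
        out = ET.Element("block", {"wf-id": el.get("wf-id")})
        out.text = el.text
        lineage.setdefault(el.get("wf-id"), []).append("stage-1: markup -> blocks")
        for child in el:
            out.append(transform(child, lineage))
        return out

    lineage = {}
    src = stamp(ET.fromstring("<flyer><offer>50% off shoes</offer></flyer>"))
    staged = transform(src, lineage)
    print(lineage)  # wf-id -> the stages each component has passed through
    ```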

    Enhancing the searchability of page-image PDF documents using an aligned hidden layer from a truth text

    The search accuracy achieved in a PDF image-plus-hidden-text (PDF-IT) document depends upon the accuracy of the optical character recognition (OCR) process that produced the searchable hidden text layer. In many cases recognising words in a blurred area of a PDF page image may exceed the capabilities of an OCR engine. This paper describes a project to replace an inadequate hidden textual layer of a PDF-IT file with a more accurate hidden layer produced from a 'truth text'. The alignment of the truth text with the image is guided by using OCR-provided page-image co-ordinates, for those glyphs that are correctly recognised, as a set of fixed location points between which other truth-text words can be inserted and aligned with blurred glyphs in the image. Results are presented to show the much enhanced searchability of this new file when compared to that of the original file, which had an OCR-produced hidden layer with no truth-text enhancement.
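    The anchoring scheme can be illustrated with a small sketch (hypothetical code, not the project's implementation): words the OCR recognised correctly act as fixed points, and the truth-text words between two anchors are interpolated across the image span between those anchors' coordinates.

    ```python
    # Simplified to x-coordinates and unique word matches; the first and last
    # truth words are assumed to be anchored.
    ocr = [("quick", 100), ("???", 160), ("???", 210), ("dog", 280)]  # word, x
    truth = ["quick", "brown", "fox", "dog"]

    # Anchors: truth-text positions where an OCR word matches exactly.
    anchors = [(ti, x) for ti, w in enumerate(truth)
               for ww, x in ocr if ww == w]

    aligned = dict(anchors)  # truth index -> x-coordinate on the page image
    for (i0, x0), (i1, x1) in zip(anchors, anchors[1:]):
        gap = i1 - i0
        for k in range(1, gap):  # spread unmatched words between the anchors
            aligned[i0 + k] = x0 + (x1 - x0) * k / gap

    for i, word in enumerate(truth):
        print(word, round(aligned[i]))  # brown -> 160, fox -> 220
    ```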