
    The construction and use of a meta-assembler

    No full text
    A meta-assembler is a generalised assembly program which is capable, in principle, of translating any given source assembly language text into binary output appropriate for loading and execution on any corresponding target machine. It does this by combining a nucleus of conventional assembly facilities with additional special facilities for defining source and target languages and their correspondence. There follows a practical investigation into this approach to assembly, consisting of an introduction to the subject, including an outline of typical meta-assembler characteristics, and an evaluation of the performance of a prototype system in a number of different types of assembly application. The implementation of the system in question, SOFAST, at Southampton University is described separately, and some consideration is given to its future portability by means of self-application. The main conclusion reached is that a meta-assembler is potentially a useful tool for assembly language translation work in situations where alternative software does not exist or is inferior to it, with particular value as the base of a mobile programming system.
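
    The core idea — a fixed assembly nucleus driven by a user-supplied target-machine definition — can be sketched as follows. This is a minimal illustration only; the mnemonics, encodings and table format are invented and bear no relation to SOFAST's actual definition language.

    ```python
    # Minimal sketch of the table-driven principle behind a meta-assembler:
    # the target machine is described as data (mnemonic -> opcode, operand count),
    # so one assembly nucleus can emit binary for any machine whose table it is given.
    # All names and encodings below are hypothetical.

    TARGET_8BIT = {            # hypothetical target-machine definition
        "NOP": (0x00, 0),      # (opcode byte, number of operand bytes)
        "LDA": (0x3A, 1),
        "JMP": (0xC3, 2),
    }

    def assemble(source, target):
        """Translate source assembly text to bytes using the target definition."""
        out = bytearray()
        for line in source.strip().splitlines():
            parts = line.split()
            opcode, nbytes = target[parts[0]]
            out.append(opcode)
            for operand in parts[1:1 + nbytes]:
                out.append(int(operand, 0) & 0xFF)   # base 0: accepts 0x.. or decimal
        return bytes(out)

    binary = assemble("NOP\nLDA 0x10\nJMP 0x00 0x20", TARGET_8BIT)
    print(binary.hex())
    ```

    Retargeting then amounts to supplying a different table rather than writing a new assembler, which is the portability argument the abstract makes.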

    Market-based agent allocation in global information systems

    No full text

    Extending OMT to support bottom-up design modelling in a heterogeneous distributed database environment

    No full text
    We present an extension to the object model of OMT to cope with bottom-up database design. By investigation, it was discovered that OMT as it stands is inadequate to capture some real world semantic and structural information needed to perform schema integration for pre-existing databases in a heterogeneous distributed database environment. Therefore the proposed extension, called Integrated OMT (IOMT), was formally defined and an appropriate extended graphical notation was produced as a part of our work. This extended form was implemented and applied effectively using a tool which we call the schema meta-integration/visualisation system (SMIS/SMVS).

    Optimizing fragment constraints—A performance evaluation

    No full text
    A principal problem with the use of integrity constraints for monitoring the integrity of a dynamically changing database is the cost of their evaluation. This cost, associated with the performance of the checking mechanisms, is the main quantitative measure which must be supervised carefully. Based on the literature, the cost of evaluating an integrity constraint includes three main components: (i) the amount of data accessed; (ii) the amount of data transferred across the network; and (iii) the number of sites involved. In distributed databases, where many networked sites are involved, not only the amount of data accessed but also the amount of data transferred across the network and the number of sites involved must be minimized. In [Ibrahim H, Gray WA, Fiddian NJ. SICSDD: Techniques and implementation. In Proceedings of Constraint Databases and Applications, Second International Workshop on Constraint Database Systems (CDB'97), Delphi, Greece, January 1997, pp 187-207], we introduced an integrity constraint subsystem for a relational distributed database. The subsystem consists of several techniques necessary for efficient constraint checking, particularly in a distributed environment where data distribution is transparent to the application domain. Here, we show how these techniques effectively reduce the cost of constraint checking in such an environment. This is done by analyzing and comparing the generated simplified forms with their respective initial constraints in terms of the amount of data to be accessed, the amount of data transferred across the network, and the number of sites involved during the evaluation of the constraints or simplified forms. In general, our strategy reduces the amount of data that needs to be accessed, since only the fragments of relations subject to update are evaluated. The amount of data transferred across the network and the number of sites involved are minimized by evaluating the simplified forms at the target site, i.e. the site where the update is performed.
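
    The cost argument can be made concrete with a toy example. The sketch below (invented relation, fragments and constraint — not the SICSDD implementation) contrasts the initial constraint form, which scans every fragment at every site, with a simplified form that tests only the updated tuple at the target site.

    ```python
    # Hypothetical illustration of constraint simplification in a fragmented,
    # distributed relation emp(name, salary). Data and constraint are invented.

    fragments = {                       # horizontal fragments held at two sites
        "site_A": [("ann", 3000), ("bob", 4500)],
        "site_B": [("cid", 2500)],
    }

    def check_initial(constraint):
        """Initial form: evaluate over the union of all fragments (every site)."""
        return all(constraint(t) for frag in fragments.values() for t in frag)

    def check_simplified(constraint, new_tuple):
        """Simplified form: test only the inserted tuple at the target site."""
        return constraint(new_tuple)

    salary_cap = lambda t: t[1] <= 5000       # the integrity constraint

    # Inserting ("dee", 4800) at site_A: the simplified form touches one tuple
    # at one site, while the initial form would access both sites' fragments.
    print(check_simplified(salary_cap, ("dee", 4800)))
    ```

    The simplified form accesses no remote data at all, which is exactly the reduction in the three cost components the abstract describes.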

    Personalizing web information for patients: linking patient medical data with the web via a patient personal knowledge base

    No full text
    This paper describes an ongoing study that examines problems with existing patient health information sources and investigates an approach for linking (i.e. integrating) data from a patient's medical record(s) with relevant health information on the web. The aim is to provide patients with simplified, customized and controlled access to web information. Data from patient medical records are extracted and linked with relevant health information on the web through a web search service. These are made available to patients through a web portal that we refer to as the patient knowledge base (PatientKB). Our integration approach utilizes term semantics (i.e. meaning) to enrich the web search and simplify medical terms for patients. In the current implementation, patients have guided, secure and relatively customized access to basic and relevant web information on their diagnoses. Future implementation will attempt to achieve further customization, extensibility and safety features. This paper investigates how ideas presented in an earlier study can be implemented.
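
    The "term semantics" step — enriching a term extracted from the record before the web search — might look like the following sketch. The mini-thesaurus and query format are invented for illustration and are not the PatientKB implementation.

    ```python
    # Hypothetical sketch: a diagnosis extracted from the medical record is
    # expanded with lay synonyms before being passed to a web search service,
    # so results also match pages written in plain language. Data is invented.

    thesaurus = {
        "myocardial infarction": ["heart attack", "MI"],   # lay synonyms
    }

    def build_query(diagnosis):
        """Enrich a medical term with its synonyms for a broader web search."""
        terms = [diagnosis] + thesaurus.get(diagnosis.lower(), [])
        return " OR ".join(f'"{t}"' for t in terms)

    print(build_query("myocardial infarction"))
    ```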

    Using quality criteria to assist in information searching

    No full text
    One of the challenges facing today's information consumer is how to find information that meets their personal needs, within an acceptable time frame, and at an appropriate level of quality. One potential method for assisting these consumers is to employ a personalisable, explicit definition of quality to focus information search results. In this paper we discuss the feasibility of this approach by demonstrating how a consumer-refined definition of quality can be used to drive an information search, initially within a closed-world environment. This paves the way for further research, transferring lessons learned and techniques developed to an open, heterogeneous environment.
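
    One plausible reading of a "personalisable, explicit definition of quality" is a set of weighted criteria used to rank results. The criteria, weights and documents below are invented for illustration, not taken from the paper.

    ```python
    # Hypothetical sketch: a consumer-refined quality definition as weighted
    # criteria, used to order search results. All values are invented.

    quality = {"has_author": 0.4, "peer_reviewed": 0.4, "recent": 0.2}

    def score(doc):
        """Sum the weights of the quality criteria a document satisfies."""
        return sum(w for criterion, w in quality.items() if doc.get(criterion))

    results = [
        {"title": "A", "has_author": True, "peer_reviewed": True, "recent": False},
        {"title": "B", "has_author": False, "peer_reviewed": False, "recent": True},
    ]
    ranked = sorted(results, key=score, reverse=True)
    print([d["title"] for d in ranked])
    ```

    Personalisation then consists of letting the consumer edit the `quality` weights rather than the search mechanism itself.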

    Domain-Specific Metadata a Key for Building Semantically-Rich Schema Models

    No full text
    Providing integrated access to data from many diverse and heterogeneous Information Servers (ISs) requires deep knowledge, not only about the structure of the data represented at each server, but also about the commonly occurring differences in the intended semantics of this data. Unfortunately, very often there is a lack of such knowledge, and the local schemas, being semantically weak as a consequence of the limited expressiveness of traditional data models, do not help the acquisition of this knowledge. In this paper we propose domain-specific metadata as a key for upgrading the semantic level of the local ISs to which an integration system requires access, and for building semantically-rich schema models. We provide a framework for enriching the individual IS schemas with semantic domain knowledge to make explicit the assumptions which may have been made by their designers, are of interest to the integrator (interpreter or user), and which may not be captured using the DDL language of their host servers. The enriched schema semantic knowledge is organised by levels of schematic granularity: database, schema, attribute and instance levels, giving rise to semantically-rich schema models.
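
    The four levels of schematic granularity can be pictured as nested metadata attached to a local schema. The descriptor names and values below are invented; the abstract specifies only the four levels themselves.

    ```python
    # Hypothetical sketch: domain-specific metadata attached to a local schema
    # at the four granularity levels the paper names. All descriptors invented.

    enriched = {
        "database":  {"domain": "payroll", "currency": "GBP"},
        "schema":    {"emp": {"semantics": "current employees only"}},
        "attribute": {"emp.salary": {"unit": "GBP/month", "basis": "gross"}},
        "instance":  {"emp.grade": {"A": "senior", "B": "junior"}},
    }

    def assumption(level, key, descriptor):
        """Look up an implicit design assumption made explicit by enrichment."""
        return enriched[level][key][descriptor]

    # An integrator can now discover that salaries are monthly gross GBP figures,
    # which the host server's DDL alone could not express.
    print(assumption("attribute", "emp.salary", "unit"))
    ```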

    Investigating and utilising patient information needs to focus internet searching for cancer patients

    No full text
    Effective educational patient information systems need to utilise the patient’s individual information needs. This study investigates and implements a patient Internet search facility in order to focus the search information topic and level of quality of information to a patient’s needs. In this study, a patient’s medical data is employed to focus the search information topic, whereas a wide range of search tools representing various quality initiatives are incorporated to offer a variant level of quality for patients to utilise. These search requirements are investigated using information from patient medical records, interviews with patient information specialists and nurses, conferences, and relevant literature. The search interface demonstrates a simplified and flexible Internet search facility, yielding results focused to a patient’s individual condition and preferences.

    Semantically Rich Materialisation Rules for Integrating Heterogeneous Databases

    No full text
    The need for accessing independently developed database systems using a unified or multiple global view(s) has been well recognised. This paper addresses the problem of redundancy of object retrieval in a multidatabase setting. We present the materialisation rules we have used for supporting data integration in a heterogeneous database environment. The materialisation rules are capable of directing the global query processor to combine data from different databases. Also, these rules are able to reconcile database heterogeneity that may be found due to independent database design.
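
    A materialisation rule of the kind described — directing the global query processor to combine data from several databases while reconciling their heterogeneity and avoiding redundant retrieval — might be sketched as follows. The databases, attribute names and currency conversion are invented for illustration.

    ```python
    # Hypothetical sketch of a materialisation rule: per-database retrieval
    # mapped onto one global class, with attribute renaming and unit conversion
    # reconciling heterogeneity, and duplicate object ids suppressed.

    db1 = [{"id": 1, "salary_gbp": 100}]            # first local database
    db2 = [{"emp_no": 1, "pay_usd": 130}]           # second, differently designed

    rule = {
        "global_class": "Employee",
        "sources": [
            lambda: [{"id": r["id"], "pay": r["salary_gbp"]} for r in db1],
            # reconcile heterogeneity: rename attribute, convert currency
            lambda: [{"id": r["emp_no"], "pay": round(r["pay_usd"] * 0.8)}
                     for r in db2],
        ],
    }

    def materialise(rule):
        """Combine data from all sources, dropping redundant object ids."""
        seen, out = set(), []
        for source in rule["sources"]:
            for obj in source():
                if obj["id"] not in seen:       # avoid redundant retrieval
                    seen.add(obj["id"])
                    out.append(obj)
        return out

    print(materialise(rule))
    ```

    The same employee recorded in both databases materialises as a single global object, which is the redundancy-of-retrieval problem the abstract targets.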