
    Proceedings of Monterey Workshop 2001: Engineering Automation for Software Intensive System Integration

    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory board and sponsors for their vision of a principled engineering solution for software and for their tireless, multi-year effort in supporting the series of workshops that brings everyone together. This workshop is the eighth in a series of international workshops. It was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on topics including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software" and "Modeling Software System Structures in a fastly moving scenario". Approved for public release; distribution unlimited.

    LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997


    Clinical foundations and information architecture for the implementation of a federated health record service

    Clinical care increasingly requires healthcare professionals to access patient record information that may be distributed across multiple sites, held in a variety of paper and electronic formats, and represented as mixtures of narrative, structured, coded and multi-media entries. A longitudinal person-centred electronic health record (EHR) is a much-anticipated solution to this problem, but its realisation is proving to be a long and complex journey. This thesis explores the history and evolution of clinical information systems, and establishes a set of clinical and ethico-legal requirements for a generic EHR server. A federation approach to harmonising distributed heterogeneous electronic clinical databases is advocated as the basis for meeting these requirements. A set of information models and middleware services, needed to implement a federated health record (FHR) server, are then described, thereby supporting access by clinical applications to a distributed set of feeder systems holding patient record information. The overall information architecture thus defined provides a generic means of combining such feeder system data to create a virtual electronic health record. Active collaboration in a wide range of clinical contexts, across the whole of Europe, has been central to the evolution of the approach taken. A federated health record server based on this architecture has been implemented by the author and colleagues and deployed in a live clinical environment in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. This implementation experience has fed back into the conceptual development of the approach and has provided "proof-of-concept" verification of its completeness and practical utility. This research has benefited from collaboration with a wide range of healthcare sites, informatics organisations and industry across Europe through several EU Health Telematics projects: GEHR, Synapses, EHCR-SupA, SynEx, Medicate and 6WINIT. The information models published here have been placed in the public domain and have substantially contributed to two generations of CEN health informatics standards, including CEN TC/251 ENV 13606.
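
    The federation idea above lends itself to a brief illustration. The Python sketch below shows one plausible shape for such a layer, assuming adapter classes over heterogeneous feeder systems and a server that merges their entries into a single longitudinal view; every name and type here is invented for illustration and is not the thesis's actual model or the CEN standard.

```python
# A minimal sketch of a federation layer, under the assumptions stated
# above: feeder-system adapters normalise local records into a shared
# entry type, and the server merges them into a virtual patient record.
from dataclasses import dataclass
from datetime import date
from typing import Protocol

@dataclass
class RecordEntry:
    patient_id: str
    recorded_on: date
    source: str          # which feeder system the entry came from
    content: str         # narrative, coded or structured payload

class FeederSystem(Protocol):
    """Adapter over one local clinical system (labs, letters, coded notes)."""
    def fetch(self, patient_id: str) -> list[RecordEntry]: ...

class FederatedRecordServer:
    def __init__(self, feeders: list[FeederSystem]):
        self.feeders = feeders

    def virtual_record(self, patient_id: str) -> list[RecordEntry]:
        # Combine entries from every feeder into one longitudinal,
        # person-centred view, ordered by time of recording.
        entries = [e for f in self.feeders for e in f.fetch(patient_id)]
        return sorted(entries, key=lambda e: e.recorded_on)
```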

    Middleware support for non-repudiable business-to-business interactions

    The wide variety of services and resources available over the Internet presents new opportunities for organisations to collaborate to reach common goals. For example, business partners wish to access each other’s services and share information along the supply chain in order to compete more successfully in the delivery of goods or services to the ultimate customer. This can lead to the investment of significant resources by business partners in the resulting collaboration. In the context of such high-value business-to-business (B2B) interactions it is desirable to regulate (monitor and control) the behaviour of business partners to ensure that they comply with agreements that govern their interactions. Achieving this regulation is challenging because, while wishing to collaborate, organisations remain autonomous and may not unguardedly trust each other. Two aspects must be addressed: (i) the need for high-level mechanisms to encode agreements (contracts) between the interacting parties such that they can be used for run-time monitoring and enforcement, and (ii) systematic support to monitor a given interaction for conformance with a contract and to ensure accountability. This dissertation concerns the latter aspect: the definition, design and implementation of underlying middleware support for the regulation of B2B interactions. To this end, two non-repudiation services are identified: non-repudiable service invocation and non-repudiable information sharing. A flexible non-repudiation protocol execution framework supports the delivery of the identified services. It is shown how the services can be used to regulate B2B interactions. The non-repudiation services provide for the accountability of the actions of participants, including the acknowledgement of actions, their run-time validation with respect to application-level constraints and logging for audit. The framework is realised in the context of interactions with and between components of a J2EE application server platform. However, the design is sufficiently flexible to apply to other common middleware platforms.
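
    The two evidence types can be made concrete with a small sketch. Assuming HMAC as a stand-in for the asymmetric signatures (and trusted third parties) a real deployment would involve, the Python below signs the request (evidence of origin), returns a signed receipt (evidence of receipt), and logs both for audit; the names and structure are illustrative, not the dissertation's framework API.

```python
# A hedged sketch of non-repudiable service invocation: each party signs
# what it sends and both artefacts are logged, so neither side can later
# deny the exchange. HMAC is a stand-in for real digital signatures.
import hashlib
import hmac
import json
import time

def sign(key: bytes, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

audit_log = []  # append-only evidence store for later dispute resolution

def invoke(client_key: bytes, service_key: bytes, request: dict) -> dict:
    # Non-repudiation of origin: the client signs its request.
    evidence_of_origin = sign(client_key, request)
    audit_log.append(("request", request, evidence_of_origin, time.time()))

    # The service would validate the request against application-level
    # constraints here, then acknowledge with a signed receipt:
    # non-repudiation of receipt.
    receipt = {"received": evidence_of_origin, "status": "accepted"}
    evidence_of_receipt = sign(service_key, receipt)
    audit_log.append(("receipt", receipt, evidence_of_receipt, time.time()))
    return receipt

invoke(b"client-secret", b"service-secret", {"op": "placeOrder", "qty": 10})
```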

    Large Scale Distributed Testing for Fault Classification and Isolation

    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure the quality of modern software, but there are still several limitations to be overcome. Among the challenges faced by current QA tools are (1) the increased use of distributed software solutions, (2) limited test resources and constrained time schedules and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, new and novel approaches are needed to meet these challenges. This dissertation explores three strategies to do this. First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll resulting in a more robust, configurable, and user-friendly implementation for both the client and server components. Additionally, this dissertation details infrastructure development done to support the evaluation of DCQA processes using Skoll -- specifically the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the latter parts of this work leveraged the improvements to Skoll as their testbed. Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). In this work, I present an empirical study on JABA where we automatically classified execution data into passing and failing behaviors using adaptive association trees. Finally, I present a long-term case study of the highly-configurable MySQL open-source project. Real-world software systems can have configuration spaces that are too large to test exhaustively, but that nonetheless contain subtle interactions that lead to failure-inducing system faults. In the literature, covering arrays, in combination with classification techniques, have been used to effectively sample these large configuration spaces and to detect problematic configuration dependencies. Applying this approach in practice, however, is tricky because testing time and resource availability are unpredictable. Therefore we developed and evaluated an alternative approach that incrementally builds covering array schedules. This approach begins at a low strength, and then iteratively increases strength as resources allow, reusing previous test results to avoid duplicated effort. The results are test schedules that allow for successful classification with fewer test executions and that require less test-subject-specific information to develop.
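
    The incremental scheduling idea can be sketched briefly. The Python below starts at strength 1, greedily runs configurations until every t-way combination of option values is covered, then raises the strength, reusing already-executed configurations so their coverage is not paid for twice. The option names and the naive greedy generator are invented for illustration; real covering-array tools construct far smaller test sets.

```python
# A rough sketch of incrementally built covering-array schedules,
# under the assumptions stated above.
from itertools import combinations, product

options = {"engine": ["innodb", "myisam"], "ssl": ["on", "off"], "cache": ["on", "off"]}
names = list(options)

def uncovered_tuples(strength, executed):
    """All strength-way option/value combinations not yet seen in a run."""
    covered = {tc for cfg in executed
               for tc in combinations(sorted(cfg.items()), strength)}
    wanted = set()
    for opts in combinations(names, strength):
        for values in product(*(options[o] for o in opts)):
            wanted.add(tuple(sorted(zip(opts, values))))
    return wanted - covered

def schedule(max_strength, budget):
    executed = []
    for t in range(1, max_strength + 1):   # iteratively increase strength
        while len(executed) < budget:
            missing = uncovered_tuples(t, executed)
            if not missing:
                break                      # strength t fully covered
            # Greedily execute a configuration covering one missing tuple;
            # earlier runs are reused via the coverage check above.
            cfg = {n: options[n][0] for n in names}
            cfg.update(dict(next(iter(missing))))
            executed.append(cfg)
    return executed

for cfg in schedule(max_strength=2, budget=8):
    print(cfg)
```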

    An integrated product and process information modelling system for on-site construction

    The inadequate infrastructure that exists for seamless project team communications has its roots in the problems arising from fragmentation, and the lack of effective co-ordination between stages of the construction process. The use of disparate computer-aided engineering (CAE) systems by most disciplines is one of the enduring legacies of this problem and makes information exchange between construction team members difficult and, in some cases, impossible. The importance of integrating modelling techniques with a view to creating an integrated product and process model that is applicable to all stages of a construction project's life cycle is being recognised by the construction industry. However, improved methods are still needed to assist the developer in the definition of information model structures, and current modelling methods and standards are only able to provide limited assistance at various stages of the information modelling process. This research investigates the role of system integration by reviewing product and process information models, current modelling practices and modelling standards in the construction industry, and draws comparisons with similar practices from other industries, both in terms of product and process representation, and model content. It further reviews various application development tools and information system requirements to support a suitable integrated information structure, for developing an integrated product and process model for design and construction, based on concurrent engineering principles. The functional and information perspectives of the integrated model, which were represented using IDEF0 and the Unified Modelling Language (UML), provided the basis for developing a prototype hyper-integrated product and process information modelling system (HIPPY). Details of the integrated conceptual model's implementation, practical application of the prototype system, using house-building as an example, and evaluation by industry practitioners are also presented. It is concluded that the effective integration of product and process information models is a key component of the implementation of concurrent engineering in construction, and is a vital step towards providing richer information representation, better efficiency, and the flexibility to support life cycle information management during the construction stage of small to medium-sized building projects.
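
    One way to picture the product/process integration is as cross-linked product and process entities, so that a design change on a component can be traced to the construction tasks it affects. The Python sketch below illustrates that idea only; the classes are invented and do not reproduce the thesis's IDEF0/UML models.

```python
# An illustrative sketch of linked product ("what is built") and process
# ("how it is built") models; names are hypothetical, not HIPPY's schema.
from dataclasses import dataclass, field

@dataclass
class ProductComponent:
    name: str
    spec: str

@dataclass
class ProcessTask:
    name: str
    produces: list[ProductComponent] = field(default_factory=list)
    predecessors: list["ProcessTask"] = field(default_factory=list)

def tasks_affected_by(component, tasks):
    """Trace a component back to the construction tasks that produce it."""
    return [t for t in tasks if component in t.produces]

wall = ProductComponent("external wall", "brick, 215 mm cavity")
lay = ProcessTask("bricklaying", produces=[wall])
print([t.name for t in tasks_affected_by(wall, [lay])])
```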

    Software Engineering Laboratory Series: Proceedings of the Twenty-Second Annual Software Engineering Workshop

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Effecting Data Quality Through Data Governance: a Case Study in the Financial Services Industry

    One of the most significant challenges faced by senior management today is implementing a data governance program to ensure that data is an asset to an organization's mission. New legislation aligned with continual regulatory oversight, increasing data volume growth, and the desire to improve data quality for decision making are driving forces behind data governance initiatives. Data governance involves reshaping existing processes and the way people view data, along with the information technology required to create consistent, secure and defined processes for handling the quality of an organization's data. In examining attempts to move towards making data an asset in organizations, the term data governance helps to conceptualize the break with existing ad hoc, siloed and improper data management practices. This research considers a case study of a large financial services company to examine data governance policies and procedures. It seeks to bring some information to bear on the drivers of data governance, the processes to ensure data quality, the technologies and people involved to aid in the processes, as well as the use of data governance in decision making. This research also addresses some core questions surrounding data governance, such as the viability of a golden source record, ownership of and responsibilities for data, and the optimum placement of a data governance department. The findings will provide a model for financial services companies hoping to take the initial steps towards better data quality and ultimately a data governance capability.