
    Interoperable Systems: an introduction

    This short chapter introduces interoperable systems and attempts to distinguish the principal research strands in this area. It is not intended as a review. Significant review material is integrated with each of the succeeding chapters. It is rather intended to whet the appetite for what follows and to provide some initial conceptual orientation. This book concerns the architecture, modelling and management of interoperable computing systems. Our collective research agenda addresses all aspects of interoperable systems development, including the business and industry requirements and environments for distributed information services

    EDOC: meeting the challenges of enterprise computing

    An increasing demand for interoperable applications exists, driving the real-time exchange of data across borders, applications, and IT platforms. To perform these tasks, enterprise computing now encompasses a new class of groundbreaking technologies such as Web services and service-oriented architecture (SOA); business process integration and management; and middleware support, like that for utility, grid, peer-to-peer, and autonomic computing. Enterprise computing also influences the processes for business modeling, consulting, and service delivery; it affects the design, development, and deployment of software architecture, as well as the monitoring and management of such architecture. As enterprises demand increasing levels of networked information and services to carry out business processes, IT professionals need conferences like EDOC to discuss emerging technologies and issues in enterprise computing. For these reasons, what started out as the Enterprise Distributed Object Computing (EDOC) conference has come to encompass much more than just distributed objects. The event therefore now uses the name International EDOC Enterprise Computing Conference, recognizing this broader scope while retaining the initial conference's name recognition

    Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production

    For efficiency of the large production tasks distributed worldwide, it is essential to provide shared production management tools composed of integratable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.), the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation is invoked. To provide local-remote transparency during DC1 production, the VDC database server delivered in a controlled way both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 5 pages, 3 figures, pdf. PSN TUCP01
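    The abstract describes the VDC as a catalogue that pairs each transformation step with validated parameter settings and a templated production recipe. The Python sketch below illustrates that idea only in outline; the step names, parameters, and command template are hypothetical placeholders, not the actual ATLAS VDC schema.

```python
# Minimal sketch of a cookbook-style catalogue of data transformations.
# All step names, parameters, and the command template are illustrative
# placeholders, not the real ATLAS Virtual Data Cookbook contents.
from string import Template

COOKBOOK = {
    "detector_simulation": {
        # Validated parameters fixed before the transformation is invoked.
        "parameters": {"geometry_tag": "DC1", "events_per_job": 1000},
        # Templated recipe resolved into a concrete command at submission time.
        "recipe": Template(
            "simulate --geometry $geometry_tag --nevents $events_per_job "
            "--input $input --output $output"
        ),
    },
}

def build_job_command(step: str, input_file: str, output_file: str) -> str:
    """Combine validated parameters and the recipe template into one job command."""
    entry = COOKBOOK[step]
    return entry["recipe"].substitute(
        input=input_file, output=output_file, **entry["parameters"]
    )

print(build_job_command("detector_simulation", "evgen.0001.root", "simul.0001.root"))
```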

    KALwEN: a new practical and interoperable key management scheme for body sensor networks

    Key management is the pillar of a security architecture. Body sensor networks (BSNs) pose several challenges, some inherited from wireless sensor networks (WSNs) and some unique to BSNs, that require a tailor-made key management scheme. We take on this challenge, and the result is KALwEN, a new parameterized key management scheme that combines the best-suited cryptographic techniques in a seamless framework. KALwEN is user-friendly in the sense that it requires no expert knowledge from the user, who only has to follow a simple set of instructions when bootstrapping or extending a network. One of KALwEN's key features is that it allows sensor devices from different manufacturers, which cannot be expected to share any pre-shared secret, to establish secure communications with each other. KALwEN is decentralized, so it does not rely on the availability of a local processing unit (LPU). KALwEN supports secure global broadcast, local broadcast, and local (neighbor-to-neighbor) unicast, while preserving past key secrecy and future key secrecy (FKS). The cryptographic protocols of KALwEN have also been formally verified. With both formal verification and experimental evaluation, our results should appeal to theorists and practitioners alike
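    The notions of past and future key secrecy mentioned above can be illustrated with a simple one-way key ratchet. The sketch below is not KALwEN's actual protocol; it only shows why a node that learns the current chain key cannot recover earlier message keys, and notes that protecting future keys from an evicted node additionally requires re-keying with fresh randomness.

```python
# Illustrative key ratchet: NOT the KALwEN protocol, only a toy demonstration
# of past key secrecy (old keys cannot be derived from the current chain key).
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive the next chain key and a one-time message key from the current one."""
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    return next_chain, message_key

# A node admitted at epoch 2 receives only the epoch-2 chain key; because the
# ratchet is one-way, it cannot compute the epoch-0 or epoch-1 message keys.
# Future key secrecy against an evicted node requires the group to re-key with
# fresh randomness, which this toy chain alone does not provide.
chain = hashlib.sha256(b"bootstrap secret distributed at setup").digest()
for epoch in range(3):
    chain, message_key = ratchet(chain)
    print(f"epoch {epoch}: message key {message_key.hex()[:16]}...")
```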

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing paradigm and the Apache-Hadoop paradigm. We propose a basis of common terminology and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches in these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions. Comment: 8 pages, 2 figures
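    K-means clustering, the Ogre named in the abstract, is simple enough to sketch in full. The version below is a minimal single-node NumPy implementation for illustration only; it is not the benchmark harness or any of the distributed implementations compared in the paper.

```python
# Toy single-node K-means kernel; illustrative only, not the paper's benchmark.
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Cluster `points` (n x d) into k groups; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return centroids, labels

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in ((0, 0), (5, 5), (0, 5))])
centroids, labels = kmeans(data, k=3)
print(centroids)
```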

    Towards an interoperable healthcare information infrastructure - working from the bottom up

    Historically, the healthcare system has not made effective use of information technology. On the face of things, it would seem to provide a natural and richly varied domain in which to target benefit from IT solutions. But history shows that it is one of the most difficult domains in which to bring them to fruition. This paper provides an overview of the changing context and information requirements of healthcare that help to explain these characteristics.

    First and foremost, the disciplines and professions that healthcare encompasses have immense complexity and diversity to deal with, in structuring knowledge about what medicine and healthcare are, how they function, and what differentiates good practice and good performance. The need to maintain macro-economic stability of the health service, faced with this and many other uncertainties, means that management bottom lines predominate over choices and decisions that have to be made within everyday individual patient services. Individual practice and care, the bedrock of healthcare, is, for this and other reasons, more and more subject to professional and managerial control and regulation.

    One characteristic of organisations shown to be good at making effective use of IT is their capacity to devolve decisions within the organisation to where they can be best made, for the purpose of meeting their customers' needs. IT should, in this context, contribute as an enabler and not as an enforcer of good information services. The information infrastructure must work effectively, both top down and bottom up, to accommodate these countervailing pressures. This issue is explored in the context of infrastructure to support electronic health records.

    Because of the diverse and changing requirements of the huge healthcare sector, and the need to sustain health records over many decades, standardised systems must concentrate on doing the easier things well and as simply as possible, while accommodating immense diversity of requirements and practice. The manner in which the healthcare information infrastructure can be formulated and implemented to meet useful practical goals is explored in the context of two case studies of research in CHIME at UCL and their user communities.

    Healthcare has severe problems both as a provider of information and as a purchaser of information systems. This has an impact on both its customer and its supplier relationships. Healthcare needs to become a better purchaser, more aware and realistic about what technology can and cannot do and where research is needed. Industry needs a greater awareness of the complexity of the healthcare domain, and the subtle ways in which information is part of the basic contract between healthcare professionals and patients, and the trust and understanding that must exist between them. It is an ideal domain for deeper collaboration between academic institutions and industry

    Designing Web-enabled services to provide damage estimation maps caused by natural hazards

    The availability of building stock inventory data and demographic information is an important requirement for risk assessment studies when attempting to predict and estimate losses due to natural hazards such as earthquakes, storms, floods or tsunamis. The better this information is provided, the more accurate are predictions of damage to structures and lifelines, and the better expected impacts on the population can be estimated. When a disaster strikes, a map is often one of the first requirements for answering questions related to location, casualties and damage zones caused by the event. Maps of appropriate scale that represent relative and absolute damage distributions may be of great importance for rescuing lives and properties, and for providing relief. However, this type of map is often difficult to obtain during the first hours or even days after the occurrence of a natural disaster. The Open Geospatial Consortium Web Services (OWS) specifications enable access to datasets and services in shared, distributed and interoperable environments through web-enabled services. In view of these advantages, in this paper we propose the use of OWS as a possible solution to the problem of acquiring suitable datasets for risk assessment studies. The design of web-enabled services was carried out using the municipality of Managua (Nicaragua), and the development of earthquake damage and loss estimation maps, as a first case study. Four organizations located in different places are involved in this proposal and connected through web services, each one with a specific role
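    As a concrete illustration of the kind of interoperable access OWS provides, the sketch below issues a standard WMS GetMap request and saves the returned map image. The service URL, layer name and bounding box are hypothetical placeholders, not the services actually deployed for the Managua case study.

```python
# Sketch of a standard OGC WMS GetMap request for a damage-distribution layer.
# The endpoint URL, layer name and bounding box are illustrative placeholders.
import requests

WMS_ENDPOINT = "https://example.org/geoserver/wms"  # hypothetical service

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "managua:building_damage",  # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "-86.40,12.00,-86.10,12.25",  # rough extent around Managua, illustrative
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

with open("damage_map.png", "wb") as output:
    output.write(response.content)
print("Saved damage_map.png")
```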