
    Demo: An Interoperability Development and Performance Diagnosis Environment

    Interoperability is key to widespread adoption of sensor network technology, but interoperable systems have traditionally been difficult to develop and test. We demonstrate an interoperable system development and performance diagnosis environment in which different systems, different software, and different hardware can be simulated in a single network configuration. This allows development, verification, and performance diagnosis of interoperable systems. Estimating performance is important because, even when systems interoperate, their performance can be sub-optimal, as shown in our companion paper, which has been conditionally accepted for SenSys 2011.

    A Survey of Network Optimization Techniques for Traffic Engineering

    TCP/IP is the reference standard for implementing interoperable communication networks. Nevertheless, the layering principle underlying this interoperability severely limits the performance of data communication networks, which therefore require proper configuration and management to handle traffic flows effectively. This paper presents a brief survey of network optimization using Traffic Engineering algorithms, aiming to provide additional insight into the different alternatives available in the scientific literature.
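
    As an illustration of the kind of network optimization such a survey covers, the sketch below (not taken from the paper) casts a classic Traffic Engineering problem as a small linear program: splitting one demand over two candidate paths so that the maximum link utilization is minimized. The topology, capacities, demand value, and variable names are assumptions made for the example.

        # Minimal Traffic Engineering sketch: minimize the maximum link
        # utilization when splitting a single demand over two candidate paths.
        # Topology, capacities, and the demand are illustrative only.
        from scipy.optimize import linprog

        capacity = {("A", "B"): 10.0, ("B", "D"): 10.0,   # links on path 1
                    ("A", "C"): 5.0,  ("C", "D"): 5.0}    # links on path 2
        paths = [[("A", "B"), ("B", "D")],                 # path 1: A-B-D
                 [("A", "C"), ("C", "D")]]                 # path 2: A-C-D
        demand = 12.0                                      # traffic from A to D

        # Variables: [x1, x2, t] = flow on each path and the maximum utilization t.
        c = [0.0, 0.0, 1.0]                                # minimize t

        # For every link: (flow crossing it) / capacity - t <= 0
        A_ub, b_ub = [], []
        for link, cap in capacity.items():
            A_ub.append([1.0 / cap if link in p else 0.0 for p in paths] + [-1.0])
            b_ub.append(0.0)

        # Flow conservation: x1 + x2 = demand
        A_eq, b_eq = [[1.0, 1.0, 0.0]], [demand]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 3)
        x1, x2, t = res.x
        print(f"path splits: {x1:.1f} / {x2:.1f}, max utilization: {t:.2f}")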

    Representing simmodel in the web ontology language

    Many building energy performance (BEP) simulation tools, such as EnergyPlus and DOE-2, use custom schema definitions (IDD and BDL, respectively) rather than standardised schema definitions (defined in XSD, EXPRESS, and so forth). A Simulation Domain Model (SimModel) was therefore proposed earlier as a new interoperable XML-based data model for the building simulation domain. Its ontology aims at moving away from tool-specific, non-standard nomenclature by implementing an industry-validated terminology aligned with the Industry Foundation Classes (IFC). In this paper, we document our ongoing efforts to make building simulation data more interoperable with other building data. To better integrate SimModel information with other building information, we have aimed at representing it in the Resource Description Framework (RDF). A conversion service has been built that parses the SimModel ontology in the form of XSD schemas and outputs a SimModel ontology in OWL. In this article, we document this effort and indicate what the resulting SimModel ontology in OWL can be used for.
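
    The abstract does not detail the conversion service, but the core mapping step it describes (XSD constructs to OWL terms) can be sketched as below. This is a minimal illustration, not the authors' implementation; the namespace, file name, and the simple complexType-to-class rule are assumptions.

        # Hypothetical sketch of an XSD-to-OWL mapping step using rdflib:
        # each named complexType becomes an owl:Class, and each element
        # inside it becomes a property with that class as its domain.
        import xml.etree.ElementTree as ET
        from rdflib import Graph, Namespace, RDF, RDFS, Literal
        from rdflib.namespace import OWL

        XS = "{http://www.w3.org/2001/XMLSchema}"
        SIM = Namespace("http://example.org/simmodel#")  # assumed namespace

        def xsd_to_owl(xsd_path: str) -> Graph:
            g = Graph()
            g.bind("sim", SIM)
            g.bind("owl", OWL)
            for ctype in ET.parse(xsd_path).getroot().iter(f"{XS}complexType"):
                name = ctype.get("name")
                if not name:
                    continue
                cls = SIM[name]
                g.add((cls, RDF.type, OWL.Class))
                g.add((cls, RDFS.label, Literal(name)))
                for elem in ctype.iter(f"{XS}element"):
                    pname = elem.get("name")
                    if not pname:
                        continue
                    is_simple = elem.get("type", "").startswith("xs:")
                    ptype = OWL.DatatypeProperty if is_simple else OWL.ObjectProperty
                    g.add((SIM[pname], RDF.type, ptype))
                    g.add((SIM[pname], RDFS.domain, cls))
            return g

        # Example use (path is hypothetical):
        # xsd_to_owl("SimModel.xsd").serialize("SimModel.owl", format="xml")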

    Interoperable Systems: an introduction

    This short chapter introduces interoperable systems and attempts to distinguish the principal research strands in this area. It is not intended as a review; significant review material is integrated with each of the succeeding chapters. Rather, it is intended to whet the appetite for what follows and to provide some initial conceptual orientation. This book concerns the architecture, modelling and management of interoperable computing systems. Our collective research agenda addresses all aspects of interoperable systems development, including the business and industry requirements and environments for distributed information services.

    KALwEN: a new practical and interoperable key management scheme for body sensor networks

    Key management is the pillar of a security architecture. Body sensor networks (BSNs) pose several challenges, some inherited from wireless sensor networks (WSNs) and some unique to BSNs, that require a tailor-made key management scheme. This challenge is taken on, and the result is KALwEN, a new parameterized key management scheme that combines the best-suited cryptographic techniques in a seamless framework. KALwEN is user-friendly in the sense that it requires no expert knowledge from the user, only that the user follow a simple set of instructions when bootstrapping or extending a network. One of KALwEN's key features is that it allows sensor devices from different manufacturers, which cannot be expected to share a pre-established secret, to establish secure communications with each other. KALwEN is decentralized and does not rely on the availability of a local processing unit (LPU). KALwEN supports secure global broadcast, local broadcast, and local (neighbor-to-neighbor) unicast, while preserving past key secrecy and future key secrecy (FKS). The cryptographic protocols of KALwEN have also been formally verified. With both formal verification and experimental evaluation, our results should appeal to theorists and practitioners alike.
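
    The abstract does not spell out KALwEN's protocols, so the sketch below is not KALwEN itself; it only illustrates the generic building block behind the scheme's key feature: two devices with no pre-shared secret deriving a common link key. Here this is shown with an X25519 exchange plus HKDF from the Python cryptography library; the key length and info label are arbitrary choices for the example.

        # Generic illustration (not the KALwEN protocol): two sensor devices
        # without a pre-shared secret derive a common symmetric link key.
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF
        from cryptography.hazmat.primitives import hashes

        # Each device generates an ephemeral key pair during bootstrapping.
        device_a = X25519PrivateKey.generate()
        device_b = X25519PrivateKey.generate()

        # Public keys are exchanged over the radio; in a real scheme this
        # exchange must be authenticated, e.g. via the user-guided
        # bootstrapping instructions the abstract mentions.
        shared_a = device_a.exchange(device_b.public_key())
        shared_b = device_b.exchange(device_a.public_key())
        assert shared_a == shared_b

        # Derive a symmetric key for neighbor-to-neighbor unicast.
        link_key = HKDF(algorithm=hashes.SHA256(), length=16,
                        salt=None, info=b"bsn-link-key").derive(shared_a)
        print(link_key.hex())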

    Impact Evaluation of Interoperability Decision Variables on P2P Collaboration Performances

    This article deals with evaluating the impact of interoperability decision variables on the performance indicators of business processes. The case of partner companies is studied to show the benefit of an Interoperability Service Utility (ISU) for business processes in a peer-to-peer (P2P) collaboration. Information described in the format and ontology of a broadcasting entity is transformed by the ISU into information in the format and ontology of the receiving entity, depending on the available interoperation resources. These resources can be human operators with defined skill levels or software transformation modules for predefined languages. A design methodology for a global simulation model estimating the impact of interoperability decision variables on the performance indicators of business processes is proposed. Its implementation in an industrial collaboration case demonstrates its efficiency and its value in motivating investment in enterprise interoperability technologies.
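
    As a purely hypothetical illustration of the transformation the ISU performs, the sketch below re-expresses a message from a sending partner's format and ontology into the receiving partner's. The field names, mapping table, and unit conversion are invented for the example and do not come from the paper.

        # Hypothetical ISU-style transformation: sender ontology -> receiver ontology.
        from typing import Any, Callable, Dict, Tuple

        # Mapping: sender term -> (receiver term, value converter)
        MAPPING: Dict[str, Tuple[str, Callable[[Any], Any]]] = {
            "orderRef":   ("purchase_order_id", str),
            "qty":        ("quantity",          int),
            "weight_lbs": ("weight_kg",         lambda lbs: round(lbs * 0.4536, 2)),
        }

        def isu_transform(message: Dict[str, Any]) -> Dict[str, Any]:
            """Translate a sender-ontology message into the receiver's ontology."""
            return {dst: conv(message[src])
                    for src, (dst, conv) in MAPPING.items() if src in message}

        print(isu_transform({"orderRef": "PO-17", "qty": "3", "weight_lbs": 10}))
        # -> {'purchase_order_id': 'PO-17', 'quantity': 3, 'weight_kg': 4.54}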

    IAMS framework: a new framework for acceptable user experiences for integrating physical and virtual identity access management systems

    The modern world is populated with so many virtual and physical Identity Access Management Systems (IAMSs) that individuals are required to maintain numerous passwords and login credentials. The tedious task of remembering multiple login credentials can be minimised through innovative single sign-in mechanisms. In recent years, several systems have been developed to provide physical and virtual identity management systems; however, most have not been very successful. Many of the available systems do not provide virtual access on mobile devices via the internet, which limits their usage. Physical spaces, such as offices and government entities, are also favourable places for deploying interoperable physical and virtual identity management systems, although this area has been explored only minimally. Alongside raising awareness of the need to deploy interoperable physical and virtual identity management systems, this paper addresses the immediate need to establish clear standards and guidelines for successful integration of the two media.

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and Apache-Hadoop paradigms. We propose a basis, common terminology, and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
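
    The Ogre used in the comparison is K-means clustering. The paradigm-specific implementations benchmarked in the paper (for example MPI-based HPC codes or Hadoop/Spark jobs) are not reproduced here; the sketch below is only a minimal NumPy version of the kernel such benchmarks repeatedly execute, with illustrative data sizes.

        # Minimal K-means kernel (illustrative only, not a benchmarked implementation).
        import numpy as np

        def kmeans(points: np.ndarray, k: int, iters: int = 50, seed: int = 0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # Assignment step: nearest center for every point.
                dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # Update step: each center becomes the mean of its assigned points.
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = points[labels == j].mean(axis=0)
            return centers, labels

        # Tiny synthetic workload (sizes chosen for the example, not from the paper).
        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal(loc=m, size=(200, 3)) for m in (0.0, 5.0)])
        centers, labels = kmeans(data, k=2)
        print(centers)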

    Towards an interoperable healthcare information infrastructure - working from the bottom up

    Historically, the healthcare system has not made effective use of information technology. On the face of things, it would seem to provide a natural and richly varied domain in which to target benefit from IT solutions. But history shows that it is one of the most difficult domains in which to bring them to fruition. This paper provides an overview of the changing context and information requirements of healthcare that help to explain these characteristics. First and foremost, the disciplines and professions that healthcare encompasses have immense complexity and diversity to deal with, in structuring knowledge about what medicine and healthcare are, how they function, and what differentiates good practice and good performance. The need to maintain macro-economic stability of the health service, faced with this and many other uncertainties, means that management bottom lines predominate over choices and decisions that have to be made within everyday individual patient services. Individual practice and care, the bedrock of healthcare, is, for this and other reasons, more and more subject to professional and managerial control and regulation. One characteristic of organisations shown to be good at making effective use of IT is their capacity to devolve decisions within the organisation to where they can be best made, for the purpose of meeting their customers' needs. IT should, in this context, contribute as an enabler and not as an enforcer of good information services. The information infrastructure must work effectively, both top down and bottom up, to accommodate these countervailing pressures. This issue is explored in the context of infrastructure to support electronic health records. Because of the diverse and changing requirements of the huge healthcare sector, and the need to sustain health records over many decades, standardised systems must concentrate on doing the easier things well and as simply as possible, while accommodating immense diversity of requirements and practice. The manner in which the healthcare information infrastructure can be formulated and implemented to meet useful practical goals is explored, in the context of two case studies of research in CHIME at UCL and their user communities. Healthcare has severe problems both as a provider of information and as a purchaser of information systems. This has an impact on both its customer and its supplier relationships. Healthcare needs to become a better purchaser, more aware and realistic about what technology can and cannot do and where research is needed. Industry needs a greater awareness of the complexity of the healthcare domain, and the subtle ways in which information is part of the basic contract between healthcare professionals and patients, and the trust and understanding that must exist between them. It is an ideal domain for deeper collaboration between academic institutions and industry.