
    A design view of capability

    In order to optimise resource deployment in a rapidly changing operational environment, capability has received increasing attention as a means of maximising the utilisation of resources. Across this extant research, different domains have been seen to endow capability with different meanings, indicating a lack of common understanding of its true nature. This paper presents a design view of capability from a design artefact knowledge perspective. Capability is defined as an intrinsic quality of an entity, closely related to artefact behavioural and structural knowledge. Design artefact knowledge is categorised across expected, instantiated, and interpreted artefact knowledge spaces (ES, IsS, and ItS). Accordingly, the paper suggests that three types of capability exist in these three spaces, which can be used when employing resources. Moreover, Network Enabled Capability (NEC), the capability of a set of linked resources within a specific environment, is discussed, with an example of how network resources are deployed in a Virtual Integration Platform (VIP).

    Integrated Model-Centric Decision Support System for Process Industries

    To bring the advances in modeling, simulation and optimization environments (MSOEs), open-software architectures, and information technology closer to process industries, novel mechanisms and advanced software tools must be devised to simplify the definition of complex model-based problems. Synergistic interactions between complementary model-based software tools must be refined to unlock the potential of model-centric technologies in industries. This dissertation presents the conceptual definition of a single and consistent framework for integrated process decision support (IMCPSS) to facilitate the realistic formulation of related model-based engineering problems. Through the integration of data management, simulation, parameter estimation, data reconciliation, and optimization methods, this framework seeks to extend the viability of model-centric technologies within the industrial workplace. The main contribution is the conceptual definition and implementation of mechanisms to ease the formulation of large-scale data-driven/model-based problems: data model definitions (DMDs), problem formulation objects (PFOs) and process data objects (PDOs). These mechanisms make it possible to define problems in terms of physical variables, to embed plant data seamlessly into model-based problems, and to support data transfer, re-usability, and synergy among different activities. A second contribution is the design and implementation of the problem definition environment (PDE). The PDE is a robust object-oriented software component that coordinates the problem formulation and the interaction between activities by means of a user-friendly interface. The PDE administers information contained in DMDs and coordinates the creation of PFOs and PIFs. Last, this dissertation contributes a systematic integration of data pre-processing and conditioning techniques with MSOEs. The proposed process data management system (pDMS) implements such methodologies. All required manipulations are supervised by the PDE, which represents an important advantage when dealing with high volumes of data. The IMCPSS responds to the need for software tools centered on process engineers, for whom the complexity of using current modeling environments is a barrier to broader application of model-based activities. Consequently, the IMCPSS represents a valuable tool for process industries, as the facilitation of problem formulation translates into incorporation of plant data in a less error-prone manner, maximization of the time dedicated to the analysis of processes, and exploitation of synergy among activities based on process models.
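    The dissertation's actual classes and interfaces are not given in the abstract above. The following Python sketch is purely illustrative of the DMD/PFO/PDO idea (all class, field, and tag names are hypothetical assumptions, not the IMCPSS API): problems are expressed in terms of physical variables, plant data are bound to those variables, and the same objects can be reused across estimation, reconciliation, and optimization activities.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration of the DMD/PFO/PDO mechanisms described above.
# Names and fields are assumptions made for this sketch, not the dissertation's API.

@dataclass
class DataModelDefinition:
    """Maps model variables to plant tags and engineering units (a 'DMD')."""
    model_name: str
    tag_map: Dict[str, str]          # model variable -> plant historian tag
    units: Dict[str, str]            # model variable -> engineering unit

@dataclass
class ProcessDataObject:
    """Holds conditioned plant measurements for one time window (a 'PDO')."""
    dmd: DataModelDefinition
    measurements: Dict[str, List[float]] = field(default_factory=dict)

@dataclass
class ProblemFormulationObject:
    """Couples a model, its data, and an activity into one problem (a 'PFO')."""
    activity: str                    # e.g. "parameter_estimation", "optimization"
    dmd: DataModelDefinition
    data: ProcessDataObject
    decision_variables: List[str] = field(default_factory=list)
    objective: str = ""

# Example: the same DMD feeds both a data reconciliation and an optimization problem.
dmd = DataModelDefinition("flash_drum",
                          {"F_in": "FIC101.PV", "T_out": "TI205.PV"},
                          {"F_in": "kg/h", "T_out": "K"})
pdo = ProcessDataObject(dmd, {"F_in": [1200.0, 1195.5], "T_out": [352.1, 351.8]})
pfo = ProblemFormulationObject("data_reconciliation", dmd, pdo,
                               decision_variables=["F_in"],
                               objective="weighted_least_squares")
print(pfo.activity, pfo.dmd.tag_map["F_in"])
```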

    The Entity Registry System: Implementing 5-Star Linked Data Without the Web

    Linked Data applications often assume that connectivity to data repositories and entity resolution services is always available. This may not be a valid assumption in many cases. Indeed, there are about 4.5 billion people in the world who have no or limited Web access. Many data-driven applications may have a critical impact on the lives of those people, but are inaccessible to those populations due to the architecture of today's data registries. In this paper, we propose and evaluate a new open-source system that can be used as a general-purpose entity registry suitable for deployment in poorly-connected or ad-hoc environments.

    On Demand Quality of web services using Ranking by multi criteria

    In the Web database scenario, the records to match are highly query-dependent, since they can only be obtained through online queries. Moreover, they are only a partial and biased portion of all the data in the source Web databases. Consequently, hand-coding or offline-learning approaches are not appropriate for two reasons. First, the full data set is not available beforehand, and therefore, good representative data for training are hard to obtain. Second, and most importantly, even if good representative data are found and labeled for learning, the rules learned on the representatives of a full data set may not work well on a partial and biased part of that data set.
    Keywords: SOA, Web Services, Network

    Co-evolution of RDF Datasets

    Linked Data initiatives have fostered the publication of a large number of RDF datasets in the Linked Open Data (LOD) cloud, as well as the development of query processing infrastructures to access these data in a federated fashion. However, different experimental studies have shown that the availability of LOD datasets cannot always be ensured, making RDF data replication necessary for reliable federated query frameworks. Although it enhances data availability, RDF data replication requires synchronization and conflict resolution when replicas and source datasets are allowed to change over time, i.e., co-evolution management needs to be provided to ensure consistency. In this paper, we tackle the problem of RDF data co-evolution and devise an approach for conflict resolution during co-evolution of RDF datasets. Our proposed approach is property-oriented and allows for exploiting semantics about RDF properties during co-evolution management. The quality of our approach is empirically evaluated in different scenarios on the DBpedia-live dataset. Experimental results suggest that the proposed techniques have a positive impact on the quality of data in source datasets and replicas.
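    The abstract does not spell out the paper's concrete resolution policies. The following is a minimal Python sketch of property-oriented conflict resolution using rdflib, in which the per-property policies (e.g. "source wins" for a functional property, set union for a multi-valued one) are illustrative assumptions rather than the paper's actual strategy.

```python
from rdflib import Graph, URIRef, Literal, Namespace

# Minimal sketch of property-oriented conflict resolution between a source RDF
# graph and a replica. The policies below are illustrative assumptions only.

DBO = Namespace("http://dbpedia.org/ontology/")

# Per-property policies: how to resolve when source and replica disagree.
POLICIES = {
    DBO.populationTotal: "source_wins",   # functional property: keep one value
    DBO.language: "union",                # multi-valued property: keep all values
}

def co_evolve(source: Graph, replica: Graph) -> Graph:
    """Return a merged graph, applying a per-property conflict policy."""
    merged = Graph()
    pairs = {(s, p) for s, p, _ in source} | {(s, p) for s, p, _ in replica}
    for s, p in pairs:
        src_vals = set(source.objects(s, p))
        rep_vals = set(replica.objects(s, p))
        policy = POLICIES.get(p, "union")
        chosen = src_vals if (policy == "source_wins" and src_vals) else src_vals | rep_vals
        for o in chosen:
            merged.add((s, p, o))
    return merged

# Example: the replica kept an outdated population value; the source wins,
# while the multi-valued language property is merged by union.
city = URIRef("http://dbpedia.org/resource/ExampleCity")
source, replica = Graph(), Graph()
source.add((city, DBO.populationTotal, Literal(105000)))
replica.add((city, DBO.populationTotal, Literal(98000)))
replica.add((city, DBO.language, Literal("Spanish")))
print(co_evolve(source, replica).serialize(format="turtle"))
```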

    High-speed, in-band performance measurement instrumentation for next generation IP networks

    Facilitating always-on instrumentation of Internet traffic for the purposes of performance measurement is crucial in order to enable accountability of resource usage and automated network control, management and optimisation. This has proven infeasible to date due to the lack of native measurement mechanisms that can form an integral part of the network's main forwarding operation. However, the Internet Protocol version 6 (IPv6) specification enables the efficient encoding and processing of optional per-packet information as a native part of the network layer, and this constitutes a strong reason for IPv6 to be adopted as the ubiquitous next generation Internet transport. In this paper we present a very high-speed hardware implementation of in-line measurement, a truly native traffic instrumentation mechanism for the next generation Internet, which facilitates performance measurement of the actual data-carrying traffic at small timescales between two points in the network. This system is designed to operate as part of the routers' fast path and to incur an absolutely minimal impact on network operation, even while instrumenting traffic between the edges of very high capacity links. Our results show that the implementation can be easily accommodated by current FPGA technology, and real Internet traffic traces verify that the overhead incurred by instrumenting every packet over a 10 Gb/s operational backbone link carrying a typical workload is indeed negligible.
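    The paper describes a hardware (FPGA) implementation; the short Python/scapy sketch below only illustrates the in-line measurement idea at the packet level, piggybacking a sender timestamp inside an IPv6 destination options extension header so a downstream measurement point can compute per-packet one-way delay. The option type value and the 8-byte timestamp layout are assumptions made for this example, not the paper's encoding.

```python
import struct
import time
from scapy.all import IPv6, IPv6ExtHdrDestOpt, HBHOptUnknown, UDP, Raw

# Software-level sketch of in-line measurement: a source-side timestamp is
# carried natively in an IPv6 destination options header. The option type
# (0x1e, experimental range) and payload layout are assumptions for this sketch.

MEASUREMENT_OPT_TYPE = 0x1e

def instrument(payload: bytes, src: str, dst: str):
    """Sender side: attach a seconds/microseconds timestamp option to the packet."""
    ts = time.time()
    sec, usec = int(ts), int((ts % 1) * 1_000_000)
    opt = HBHOptUnknown(otype=MEASUREMENT_OPT_TYPE,
                        optdata=struct.pack("!II", sec, usec))
    return (IPv6(src=src, dst=dst)
            / IPv6ExtHdrDestOpt(options=[opt])
            / UDP(sport=40000, dport=40001)
            / Raw(load=payload))

def read_timestamp(pkt) -> float:
    """Measurement-point side: recover the sender timestamp from the option."""
    for opt in pkt[IPv6ExtHdrDestOpt].options:
        if getattr(opt, "otype", None) == MEASUREMENT_OPT_TYPE:
            sec, usec = struct.unpack("!II", bytes(opt.optdata))
            return sec + usec / 1_000_000
    raise ValueError("no measurement option present")

pkt = instrument(b"hello", "2001:db8::1", "2001:db8::2")
print("one-way delay estimate:", time.time() - read_timestamp(pkt), "s")
```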

    The role of Intellectual Capital Reporting (ICR) in organisational transformation: A discursive practice perspective

    Intellectual Capital Reporting (ICR) has garnered increasing attention as a new accounting technology that can engender significant organisational changes. However, when ICR was first recognised as a management fashion, the intended change it heralded in stable environments was criticised for having limited impact on the state of practice. Conceiving ICR through a lens predicated on the notion of discursive practice, we argue that ICR can enable substantive change in emergent conditions. We empirically demonstrate this process by following the implementation of ICR in one organisation through interviews, documents and observations over 30 months. The qualitative analysis of the data corpus shows how situated change, subtle but no less significant, can take place in the name of intellectual capital as actors appropriate ICR into their everyday work practices while improvising variations to accommodate different logics of action. The paper opens up a new avenue for examining the specific roles of ICR in relation to the types of change enacted. It thus demonstrates when and how ICR may transcend a mere management fashion and the intended change it sets in motion, by altering organisational actors’ ways of thinking and doing within the confines of their organisation.

    Integration of Linked Open Data Authorities with OpenRefine: A Methodology for Libraries

    The primary purpose of this paper is to explore the process of integrating linked open data authorities with OpenRefine, so that related metadata can be accessed easily and bibliographic data can be cleaned and updated in a modern integrated library system. The integration process and methods are based on the APIs of reconciliation repositories collected from web resources. The integrated framework is designed and developed on OpenRefine techniques and components based on RDF, CSV, SPARQL, and Turtle scripts. It runs OpenRefine on the Ubuntu platform using Java and the Apache web server. The framework has been used to explore data cleaning and the import of bibliographic metadata from multiple linked data authorities such as Open Library, ORCID, VIAF, VIAF BNF, Library of Congress Authorities data, and Wikidata. These are the essential findings of this study for creating a new interface for library professionals and advanced users. Library professionals benefit greatly from this system and its services for the easy import of, and access to, related linked resources from Wikidata. Aside from this, the paper also explores other facilities for data cleaning and for updating information from multiple scripts and URLs in the Web environment. It is possible to fetch related linked authorities to enhance advanced services in a modern library management system. Library carpentry and data carpentry are therefore essential concepts for building a dynamic integrated interface for library professionals.
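    The exact OpenRefine configuration used in the paper is not reproduced in the abstract. The following Python sketch shows the underlying reconciliation-service protocol that OpenRefine speaks when matching local headings against an authority such as Wikidata. The endpoint URL is an assumption (Wikidata reconciliation services have changed address over time); any service implementing the same protocol should accept an equivalent request.

```python
import json
import requests

# Minimal sketch of a reconciliation-protocol request, the same protocol
# OpenRefine uses when matching headings against linked-data authorities.
# The endpoint below is an assumed Wikidata reconciliation service URL.

RECON_ENDPOINT = "https://wikidata.reconci.link/en/api"  # assumption, may change

def reconcile(name: str, type_id: str = "Q5"):
    """Ask the service for candidate entities matching `name` (Q5 = human)."""
    queries = {"q0": {"query": name, "type": type_id, "limit": 3}}
    resp = requests.post(RECON_ENDPOINT,
                         data={"queries": json.dumps(queries)},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["q0"]["result"]

# Example: look up a personal-name heading and print candidate IDs and scores.
for candidate in reconcile("Ranganathan"):
    print(candidate["id"], candidate["name"], candidate.get("score"))
```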