    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed, a Delphi study is conducted to identify the aspects most critical to the interoperability of web archives and digital libraries, technical interoperability standards and protocols are assessed for their relevance to BlogForever, a simple approach for considering interoperability in specific usage scenarios is proposed, and a tangible approach is presented for developing a succession plan that would allow a reliable transfer of content from the current digital archive to other digital repositories.
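    As a rough illustration of the kind of technical interoperability protocol such a review typically covers, the sketch below issues an OAI-PMH ListRecords request against a hypothetical repository endpoint. The endpoint URL and metadata prefix are assumptions made for illustration, not details taken from the report.

```python
# Minimal sketch: harvesting repository metadata over OAI-PMH, a common
# interoperability protocol for digital libraries and web archives.
# The endpoint URL below is a placeholder, not one named in the report.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://repository.example.org/oai"  # hypothetical endpoint

def list_records(metadata_prefix: str = "oai_dc"):
    """Fetch one page of records exposed by the repository and yield their identifiers."""
    params = urllib.parse.urlencode({"verb": "ListRecords",
                                     "metadataPrefix": metadata_prefix})
    with urllib.request.urlopen(f"{OAI_ENDPOINT}?{params}") as response:
        tree = ET.parse(response)
    ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
    for record in tree.findall(".//oai:record", ns):
        yield record.findtext(".//oai:identifier", namespaces=ns)

if __name__ == "__main__":
    for oai_id in list_records():
        print(oai_id)
```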

    Digital curation and the cloud

    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. The associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. Both risks and benefits vary considerably according to the service and deployment models being adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative deployment models, e.g., private clouds, but typically at the expense of benefits such as resource elasticity and economies of scale. An Infrastructure as a Service model may provide a basis on which more specialised software services can be offered. Considerable work remains to be done in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies may be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments, the exploration of opportunities for certifying or accrediting cloud preservation providers, and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.
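    As a toy illustration of the kind of costing comparison an advisory or costing service might support, the sketch below weighs a pay-per-use cloud estimate against a fixed local-infrastructure cost for a resource-intensive curation task. All rates, durations and run counts are invented placeholders, not figures from the report.

```python
# Toy cost comparison between on-demand cloud provisioning and local
# infrastructure for an infrequent, resource-intensive curation task.
# All rates and workload sizes below are illustrative assumptions.

CLOUD_RATE_PER_CORE_HOUR = 0.05   # hypothetical on-demand price
LOCAL_ANNUAL_COST = 12_000.0      # hypothetical owned-hardware cost per year

def cloud_cost(core_hours_per_run: float, runs_per_year: int) -> float:
    """Pay-per-use cost: only the core-hours actually consumed are billed."""
    return CLOUD_RATE_PER_CORE_HOUR * core_hours_per_run * runs_per_year

def compare(core_hours_per_run: float, runs_per_year: int) -> str:
    cloud = cloud_cost(core_hours_per_run, runs_per_year)
    cheaper = "cloud" if cloud < LOCAL_ANNUAL_COST else "local"
    return (f"cloud: {cloud:,.2f}/year vs local: {LOCAL_ANNUAL_COST:,.2f}/year "
            f"-> {cheaper} is cheaper for this workload")

if __name__ == "__main__":
    # e.g. a quarterly format-migration job needing 4,000 core-hours per run
    print(compare(core_hours_per_run=4_000, runs_per_year=4))
```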

    Critical success factors for e-tendering implementation in construction collaborative environments: people and process issues

    The construction industry is increasingly engulfed by globalisation, with clients, business partners and customers found in virtually every corner of the world. Communicating with, reaching and supporting them are no longer optional but imperative for continued business growth and success. A key component of enterprise communication reach is collaborative environments (for the construction industry), which allow customers, suppliers, partners and other project team members secure access to the project information, products or services they need at any given moment. Addressing the stated critical success factors of the project is essential to ensure optimal performance and benefits from the system for all parties involved. This paper presents critical success factors for the implementation of e-tendering in collaborative environments, with particular consideration given to people issues and process factors.

    The imperfect hiding: some introductory concepts and preliminary issues on modularity

    In this work we present a critical assessment of some problems and open questions concerning the debated notion of modularity. Modularity is greatly in fashion nowadays, often being proposed as the new approach to complex artefact production that makes it possible to combine a fast pace of innovation, enhanced product variety and a reduced need for co-ordination. In line with recent critical assessments of the managerial literature on modularity, we argue that modularity is only one among several arrangements for coping with the complexity inherent in most high-technology artefact production, and by no means the best one. We first discuss the relations between modularity and the broader (and, within economics, much older) notion of division of labour. We then argue that a modular approach to the division of labour, aimed at eliminating technological interdependencies between components or phases of a complex production process, may create other types of interdependencies as a by-product, which may subsequently result in inefficiencies of various kinds. Hence, the choice of a modular design strategy implies the resolution of various trade-offs. Depending on how such trade-offs are resolved, different organisational arrangements may be created to cope with ‘residual’ interdependencies. There is therefore no need to postulate a perfect isomorphism, as some recent literature has proposed, between modularity at the product level and modularity at the organisational level.

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture as an integration and platform technology constitute a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
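    As a minimal illustration of what empirical performance evaluation of a black-box service can look like in practice, the sketch below times repeated calls to a service endpoint and reports simple latency statistics. The endpoint URL and sample size are placeholders, not details from the paper, and the paper's own model-level approach is not reproduced here.

```python
# Minimal sketch of empirical performance evaluation of a black-box service:
# repeatedly invoke the service and summarise observed response times.
# The endpoint URL and sample size are illustrative assumptions.
import statistics
import time
import urllib.request

SERVICE_URL = "https://services.example.org/quote"  # hypothetical endpoint
SAMPLES = 50

def measure_latency(url: str, samples: int) -> list[float]:
    """Return the observed response times (in seconds) for repeated calls."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    latencies = measure_latency(SERVICE_URL, SAMPLES)
    print(f"mean   {statistics.mean(latencies):.3f}s")
    print(f"median {statistics.median(latencies):.3f}s")
    print(f"p95    {sorted(latencies)[int(0.95 * len(latencies))]:.3f}s")
```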

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected. Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The WG aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.
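    As a small illustration of the automated capture-and-ingest tooling the report identifies as a research area, the sketch below computes a fixity checksum and records minimal normalized metadata for a digital object before ingest. The field names and directory layout are invented for illustration and are not taken from the report.

```python
# Illustrative sketch of one automated ingest step for a digital archive:
# compute a fixity checksum and write a minimal normalised metadata record.
# Field names and paths are invented placeholders, not from the report.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def ingest(path: Path, archive_dir: Path) -> dict:
    """Record fixity and descriptive metadata for one digital object."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    record = {
        "identifier": path.name,
        "size_bytes": path.stat().st_size,
        "checksum_sha256": digest,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    archive_dir.mkdir(parents=True, exist_ok=True)
    metadata_file = archive_dir / f"{path.name}.metadata.json"
    metadata_file.write_text(json.dumps(record, indent=2))
    return record

if __name__ == "__main__":
    print(ingest(Path("report.pdf"), Path("archive")))
```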

    Initiating organizational memories using ontology network analysis

    One of the important problems with organizational memories is their initial set-up. It is difficult to choose the right information to include in an organizational memory, yet the right information is a prerequisite for maximizing the uptake and relevance of the memory's content. To tackle this problem, most developers adopt heavyweight solutions and rely on faithful, continuous interaction with users to create and improve the memory's content. In this paper, we explore the use of an automatic, lightweight solution drawn from one of the underlying ingredients of an organizational memory: ontologies. We have developed an ontology-based network analysis method, which we applied to the problem of identifying communities of practice in an organization. We use ontology-based network analysis as a means to provide content automatically for the initial set-up of an organizational memory.
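    The abstract does not spell out the network-analysis step, but as a rough illustration of the general idea, the sketch below builds a graph linking people to the ontology concepts they are annotated with and extracts densely connected groups as candidate communities of practice. The sample data and the choice of networkx's greedy modularity communities are assumptions for illustration, not the authors' actual method.

```python
# Rough illustration (not the authors' method): build a person-concept graph
# from ontology annotations and extract densely connected groups as candidate
# communities of practice. The sample data and algorithm choice are assumptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical annotations: which ontology concepts each person is linked to.
annotations = {
    "alice": {"MachineLearning", "Ontologies"},
    "bob": {"Ontologies", "KnowledgeManagement"},
    "carol": {"MachineLearning", "Statistics"},
    "dave": {"KnowledgeManagement", "Ontologies"},
}

graph = nx.Graph()
for person, concepts in annotations.items():
    for concept in concepts:
        graph.add_edge(person, concept)

# Groups that are densely connected through shared concepts are treated as
# candidate communities of practice.
for community in greedy_modularity_communities(graph):
    people = sorted(p for p in community if p in annotations)
    if people:
        print(people)
```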