Exploiting tacit knowledge through knowledge management technologies
The purpose of this paper is to examine the contributions and suitability of available knowledge management (KM) technologies, including Web 2.0, for exploiting tacit knowledge. It proposes an integrated framework for extracting tacit knowledge in organisations, which includes Web 2.0 technologies, KM tools, organisational learning (OL) and Communities of Practice (CoP). It reviews a comprehensive literature covering KM theories, KM technologies and OL, and identifies the current state of knowledge relating to tacit knowledge exploitation. The outcomes of the paper indicate that Internet and Web 2.0 technologies hold considerable promise for creating learning communities in which tacit knowledge can be extracted from people. The author recommends that organisations design procedures, embedded in their Web 2.0 collaborative platforms, that encourage employees to record their ideas and share them with other members. It is also recommended that no idea be taken for granted in a learning community where tacit knowledge exploitation is pursued. It is envisaged that future research should adopt an empirical approach involving the Complex Adaptive Model for Tacit Knowledge Exploitation (CAMTaKE) and the Theory of Deferred Action to examine the effectiveness of KM technologies, including Web 2.0 tools, for tacit knowledge exploitation.
Establishing Incentives and Changing Cultures to Support Data Access
This project was developed as a key component of the workplan of the Expert Advisory Group on Data Access (EAGDA). EAGDA wished to understand the factors that help and hinder individual researchers in making their data (both published and unpublished) available to other researchers, and to examine the potential need for new types of incentives to enable data access and sharing. This is a critical challenge in achieving the shared policy commitment of the four EAGDA funders to maximise the benefit derived from data outputs and the considerable investment they have made over recent years in supporting data sharing. In addition to a review of previous reports and other initiatives in this area, the work involved in-depth interviews with key stakeholders; two focus group discussions; and a web survey to which 35 responses were received from a broad range of researchers and data managers. Although based on a relatively modest number of responses and interviews, the findings closely mirrored those of previous work in this area. In particular, there was a clear, overarching view that the research culture and environment is not perceived as providing sufficient support or adequate rewards for researchers who generate and share high-quality datasets.
Developing front-end Web 2.0 technologies to access services, content and things in the future Internet
The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user–service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
Practitioner requirements for integrated Knowledge-Based Engineering in Product Lifecycle Management
The effective management of knowledge as capital is considered essential to the success of engineering product/service systems. As Knowledge Management (KM) and Product Lifecycle Management (PLM) practice gain industrial adoption, the question of functional overlap between the two approaches becomes evident. This article explores the interoperability between PLM and Knowledge-Based Engineering (KBE) as a strategy for engineering KM. The opinions of key KBE/PLM practitioners are systematically captured and analysed, and a set of ranked business functionalities to be fulfilled by KBE/PLM systems integration is elicited. The article provides insights for researchers, and for practitioners in both user and development roles, on the future needs for knowledge systems based on PLM.
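As an illustration of how such elicited rankings might be analysed, here is a minimal sketch that aggregates practitioner rank positions by mean rank. The survey data and functionality names are hypothetical, invented for illustration, and are not taken from the article.

```python
from statistics import mean

# Hypothetical survey responses: each practitioner assigns a rank position
# (1 = most important) to candidate KBE/PLM integration functionalities.
# The functionality names below are illustrative, not from the article.
responses = [
    {"knowledge capture": 1, "design rule reuse": 2, "change propagation": 3},
    {"knowledge capture": 2, "design rule reuse": 1, "change propagation": 3},
    {"knowledge capture": 1, "design rule reuse": 3, "change propagation": 2},
]

def aggregate_ranks(responses):
    """Order functionalities by their mean rank across all practitioners."""
    names = responses[0].keys()
    mean_rank = {name: mean(r[name] for r in responses) for name in names}
    return sorted(mean_rank.items(), key=lambda item: item[1])

for name, rank in aggregate_ranks(responses):
    print(f"{name}: mean rank {rank:.2f}")
```

Mean rank is only one possible aggregation; a Borda count or pairwise-comparison method would fit the same interface.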
Investigating Decision Support Techniques for Automating Cloud Service Selection
The range of Cloud infrastructure services grows steadily, leaving users spoilt for choice. To select the best mix of service offerings from an abundance of possibilities, users must consider complex dependencies and heterogeneous sets of criteria. We therefore present a PhD thesis proposal on investigating an intelligent decision support system for selecting Cloud-based infrastructure services (e.g. storage, network, CPU).
Comment: Accepted by IEEE Cloudcom 2012 - PhD consortium track
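A decision support system of this kind typically rests on a multi-criteria decision-making method. The sketch below uses simple additive weighting over min-max-normalised criteria; the providers, criterion values and weights are illustrative assumptions, not data from the proposal.

```python
# Hypothetical offerings and criteria for Cloud service selection.
offerings = {
    "provider_a": {"cost": 0.08, "latency_ms": 120, "availability": 0.999},
    "provider_b": {"cost": 0.12, "latency_ms": 60,  "availability": 0.9995},
}

# Weights sum to 1; cost and latency are "lower is better" criteria.
weights = {"cost": 0.5, "latency_ms": 0.3, "availability": 0.2}
lower_is_better = {"cost", "latency_ms"}

def normalise(values, invert):
    """Min-max normalise to [0, 1], flipping direction for cost-like criteria."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero when all values are equal
    return [(hi - v) / span if invert else (v - lo) / span for v in values]

def rank_offerings(offerings, weights):
    """Score each offering as a weighted sum of its normalised criteria."""
    names = list(offerings)
    scores = {n: 0.0 for n in names}
    for criterion, w in weights.items():
        column = [offerings[n][criterion] for n in names]
        for n, v in zip(names, normalise(column, criterion in lower_is_better)):
            scores[n] += w * v
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_offerings(offerings, weights))
```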
Mining Knowledge in Astrophysical Massive Data Sets
Modern scientific data mainly consist of huge datasets gathered by a very large number of techniques and stored in very diversified and often incompatible data repositories. More generally, in the e-science environment, it is considered a critical and urgent requirement to integrate services across distributed, heterogeneous, dynamic "virtual organizations" formed by different resources within a single enterprise. In the last decade, Astronomy has become an immensely data-rich field due to the evolution of detectors (plates to digital to mosaics), telescopes and space instruments. The Virtual Observatory approach consists in federating, under common standards, all astronomical archives available worldwide, together with data analysis, data mining and data exploration applications. The main driver behind this effort is that, once the infrastructure is completed, it will allow a new type of multi-wavelength, multi-epoch science that can as yet barely be imagined. Data Mining, or Knowledge Discovery in Databases, while being the main methodology for extracting the scientific information contained in such MDS (Massive Data Sets), poses crucial problems, since it has to orchestrate transparent access to different computing environments, scalability of algorithms, reusability of resources, etc. In the present paper we summarize the present status of MDS in the Virtual Observatory and what is currently being done and planned to bring advanced Data Mining methodologies to them, in the case of the DAME (DAta Mining & Exploration) project.
Comment: Pages 845-849, 1st International Conference on Frontiers in Diagnostics Technologies
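As a generic, local illustration of the kind of unsupervised mining such tools expose (DAME itself delivers its algorithms through a web application, so nothing here reflects its actual API), the following sketch clusters synthetic sources in a two-colour space with scikit-learn. All data and cluster interpretations are invented.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is installed

# Synthetic sources in a colour-colour space: two overlapping populations,
# standing in for the kind of photometric catalogue a VO archive serves.
rng = np.random.default_rng(0)
colours = np.vstack([
    rng.normal(loc=(0.3, 0.1), scale=0.05, size=(500, 2)),  # e.g. stars
    rng.normal(loc=(0.8, 0.6), scale=0.08, size=(200, 2)),  # e.g. QSO candidates
])

# Partition the sources into two clusters and report their sizes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(colours)
for k in range(2):
    print(f"cluster {k}: {np.sum(labels == k)} sources")
```

Scaling such algorithms to genuinely massive datasets is exactly the orchestration problem the abstract raises: the computation must move to distributed infrastructure rather than run in a single process as here.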
Past, present and future of information and knowledge sharing in the construction industry: Towards semantic service-based e-construction
The paper reviews product data technology initiatives in the construction sector and provides a synthesis of related ICT industry needs. A comparison is given between (a) the data-centric characteristics of Product Data Technology (PDT) and (b) ontologies, with their focus on semantics, highlighting the pros and cons of each approach. The paper advocates the migration from data-centric application integration to ontology-based business process support, and proposes inter-enterprise collaboration architectures and frameworks based on semantic services, underpinned by ontology-based knowledge structures. The paper discusses the main reasons behind the low industry take-up of product data technology, and proposes a preliminary roadmap for the wide industry diffusion of the proposed approach. In this respect, the paper stresses the value of adopting alliance-based modes of operation.
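To make the contrast with data-centric records concrete, here is a minimal sketch of an ontology-based knowledge structure using the Python rdflib library (version 6 or later assumed). The namespace and terms are invented for illustration and do not come from any construction-industry standard.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Illustrative namespace for an e-construction vocabulary (not a standard).
EC = Namespace("http://example.org/e-construction#")

g = Graph()
g.bind("ec", EC)

# Classes and a semantic relation, rather than bare product-data records.
g.add((EC.Wall, RDF.type, RDFS.Class))
g.add((EC.Material, RDF.type, RDFS.Class))
g.add((EC.hasMaterial, RDFS.domain, EC.Wall))
g.add((EC.hasMaterial, RDFS.range, EC.Material))

# An instance that a semantic service could query or reason over.
g.add((EC.wall_01, RDF.type, EC.Wall))
g.add((EC.wall_01, EC.hasMaterial, EC.concrete))
g.add((EC.concrete, RDFS.label, Literal("ready-mix concrete")))

print(g.serialize(format="turtle"))
```

The point of the ontology-based approach is that the domain/range axioms carry meaning a data-centric schema would leave implicit, so independently built services can interpret the same triples.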
Designing Traceability into Big Data Systems
Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support usage of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and description and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage.
Comment: 10 pages; 6 figures. In Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015. arXiv admin note: text overlap with arXiv:1402.5764, arXiv:1402.575
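The core idea, an Item whose meta-data description and provenance travel with its payload, can be sketched in a few lines. The field names and methods below are illustrative assumptions, not the design used at CERN.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Item:
    """An application-neutral element: payload plus describing meta-data."""
    item_id: str
    payload: Any
    description: Dict[str, str]  # e.g. semantic type, schema version
    provenance: List[str] = field(default_factory=list)  # trace of derivations

    def derive(self, new_id: str, payload: Any, step: str) -> "Item":
        """Create a derived Item, extending the trace rather than losing it."""
        return Item(new_id, payload,
                    dict(self.description),
                    self.provenance + [f"{self.item_id}:{step}"])

# Usage: each processing step yields a new Item whose provenance records
# where it came from, which is what makes the element traceable.
raw = Item("run-42", payload=[1, 2, 3], description={"type": "sensor-series"})
clean = raw.derive("run-42-clean", [1, 2, 3], step="outlier-filter")
print(clean.provenance)  # ['run-42:outlier-filter']
```

Because consumers dispatch on the description rather than on a hard-coded class, the same Item can serve many applications, which is the loose typing the abstract credits for design simplicity.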
- …