
    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: “Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.” (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
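    The abstract's discussion of ontology merging and mapping can be made concrete with a small sketch. The example below is illustrative only and assumes the Python rdflib library; the namespaces and terms are invented and are not part of AKT's actual tooling. It merges two toy ontologies that name the same concepts differently, then records the equivalences that a mapping step would produce.

```python
# A minimal sketch of naive ontology merging, assuming rdflib is installed.
# The namespaces and terms below are invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

EX1 = Namespace("http://example.org/onto1#")
EX2 = Namespace("http://example.org/onto2#")

# Two hypothetical source ontologies that use different terms for the
# same notions ("Paper"/"Publication", "hasAuthor"/"writtenBy").
g1, g2 = Graph(), Graph()
g1.add((EX1.Paper, RDF.type, OWL.Class))
g1.add((EX1.hasAuthor, RDF.type, OWL.ObjectProperty))
g2.add((EX2.Publication, RDF.type, OWL.Class))
g2.add((EX2.writtenBy, RDF.type, OWL.ObjectProperty))

# Naive merge: the union of the two triple sets, plus explicit mapping
# axioms resolving the overlap (in practice the mapping step is the
# hard, often semi-automatic, part).
merged = g1 + g2
merged.add((EX1.Paper, OWL.equivalentClass, EX2.Publication))
merged.add((EX1.hasAuthor, OWL.equivalentProperty, EX2.writtenBy))

# Queries over the merged graph can now follow either vocabulary.
q = "SELECT ?a ?b WHERE { ?a owl:equivalentClass ?b . }"
for row in merged.query(q, initNs={"owl": OWL}):
    print(row.a, "maps to", row.b)
```

    Real SW content is of course far messier than two clean toy graphs, and that heterogeneity is exactly what the brokering meta-services described above would have to absorb.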

    Web-based support for managing large collections of software artefacts

    There has been a long history of CASE tool development, with an underlying software repository at the heart of most systems. Usually such tools, even the more recent web-based systems, are focused on supporting individual projects within an enterprise or across a number of distributed sites. Little support has been developed for maintaining large heterogeneous collections of software artefacts across a number of projects. Within the GENESIS project, this has been a key consideration in the development of the Open Source Component Artefact Repository (OSCAR). Its most recent extensions explicitly address the provision of cross-project global views of large software collections, as well as historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR is widely adopted, and the various steps taken to facilitate this are described.
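    To make "cross-project global views" and "historical views of individual artefacts" concrete, here is a minimal sketch of the kind of versioned store such support implies. The class and field names are hypothetical, not OSCAR's actual API.

```python
# A sketch of a versioned artefact store; names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ArtefactVersion:
    content_hash: str
    author: str
    timestamp: datetime

@dataclass
class Artefact:
    artefact_id: str
    project: str
    versions: list = field(default_factory=list)

    def add_version(self, content_hash: str, author: str) -> None:
        self.versions.append(
            ArtefactVersion(content_hash, author, datetime.now(timezone.utc)))

    def history(self) -> list:
        # Historical view of one artefact: its versions, oldest first.
        return list(self.versions)

class Repository:
    """Holds artefacts from many projects, not just one."""
    def __init__(self) -> None:
        self._artefacts = {}

    def store(self, artefact: Artefact) -> None:
        self._artefacts[artefact.artefact_id] = artefact

    def global_view(self) -> dict:
        # Cross-project global view: artefact ids grouped by project.
        by_project = {}
        for a in self._artefacts.values():
            by_project.setdefault(a.project, []).append(a.artefact_id)
        return by_project
```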

    Practitioner requirements for integrated Knowledge-Based Engineering in Product Lifecycle Management.

    The effective management of knowledge as capital is considered essential to the success of engineering product/service systems. As Knowledge Management (KM) and Product Lifecycle Management (PLM) practice gain industrial adoption, the question of functional overlap between the two approaches becomes evident. This article explores the interoperability between PLM and Knowledge-Based Engineering (KBE) as a strategy for engineering KM. The opinions of key KBE/PLM practitioners are systematically captured and analysed, and a set of ranked business functionalities to be fulfilled by KBE/PLM systems integration is elicited. The article provides insights, for researchers and for practitioners in both user and development roles, into the future needs for knowledge systems based on PLM.
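    A "set of ranked business functionalities" implies some aggregation of individual practitioners' rankings. The article does not state its elicitation procedure, so the sketch below uses a simple Borda count over invented sample data, purely to illustrate how such a ranking can be derived.

```python
# Illustrative rank aggregation via Borda count; the functionality names
# and rankings are invented, not the article's elicited data.
from collections import defaultdict

rankings = [
    ["knowledge capture", "traceability", "rule reuse"],
    ["traceability", "knowledge capture", "rule reuse"],
    ["knowledge capture", "rule reuse", "traceability"],
]

def borda(rankings):
    """Top-ranked item earns the most points; ties are possible."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(borda(rankings))
# [('knowledge capture', 8), ('traceability', 6), ('rule reuse', 4)]
```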

    Data Envelopment Analysis (DEA) approach in the efficiency of the transport manufacturing industry in Malaysia

    The objective of this study was to measure the technical efficiency scores of the transport manufacturing industry in Malaysia using data envelopment analysis (DEA) for the period 2005 to 2010. The efficiency analysis used only two inputs, capital and labor, and one output, total sales. The results show that the average efficiency score under the Banker, Charnes, Cooper variable-returns-to-scale (BCC-VRS) model is higher than under the Charnes, Cooper, Rhodes constant-returns-to-scale (CCR-CRS) model. Based on the BCC-VRS model, the average efficiency score was at a moderate level, and only four sub-industries recorded an average efficiency score above 0.50 during the study period. The implication is that the transport manufacturing industry needs to increase investment, especially in human capital such as employee training, increase expenditure on communications such as ICT, and carry out joint ventures as well as research and development activities to enhance industry efficiency.
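    For readers unfamiliar with DEA, the sketch below solves an input-oriented CCR (constant-returns-to-scale) model as a linear programme with SciPy; the BCC variable-returns-to-scale variant would additionally constrain the peer weights to sum to one. The input and output figures are invented sample data, not the study's Malaysian industry figures.

```python
# Input-oriented CCR DEA sketch with two inputs (capital, labor) and one
# output (total sales); the figures are invented sample data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0, 50.0],    # rows = decision-making units (DMUs),
              [120.0, 40.0],    # columns = inputs: capital, labor
              [80.0, 70.0],
              [90.0, 60.0]])
Y = np.array([[200.0], [210.0], [150.0], [190.0]])  # output: total sales
n, m = X.shape                  # number of DMUs, number of inputs
s = Y.shape[1]                  # number of outputs

def ccr_efficiency(o: int) -> float:
    """min theta s.t. a weighted 'peer' uses <= theta * inputs of DMU o
    and produces >= outputs of DMU o; weights lambda >= 0 (CRS)."""
    c = np.zeros(n + 1)         # variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                  # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    for i in range(m):          # sum_j lam_j * x_ji - theta * x_oi <= 0
        A_ub[i, 0] = -X[o, i]
        A_ub[i, 1:] = X[:, i]
    for r in range(s):          # -sum_j lam_j * y_jr <= -y_or
        A_ub[m + r, 1:] = -Y[:, r]
        b_ub[m + r] = -Y[o, r]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: technical efficiency = {ccr_efficiency(o):.3f}")
```

    Efficient units score 1.0; a score of, say, 0.7 means the unit could in principle produce its output with 70% of its current inputs.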

    Support for collaborative component-based software engineering

    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and has almost exclusively focused on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focussed, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, which a recent book, Software Evolution with UML and XML, started, where expert insights are presented on the subject. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. In this book, the focus is on the practical side of linking them, that is, how UML and XML and their related methods and tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, an apparent feature of software evolution is that it happens over all stages and all aspects, so all possible techniques should be explored. This book explores techniques based on UML/XML and combinations of them with other techniques, covering the range from theory to tools. Software evolution happens at all stages: chapters in this book describe software evolution issues arising during software architecting, modelling/specifying, assessing, coding, validating, design recovery, program understanding, and reuse. Software evolution happens in all aspects: chapters illustrate that software evolution issues are involved in web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modelling. Software evolution needs to be facilitated with all possible techniques: chapters demonstrate techniques such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation to control system changes to meet organisational and business objectives in a cost-effective way. On the journey through the grand challenge posed by software evolution, the contributing authors of this book have already made further advances.
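    Given the book's UML/XML theme, a small sketch may help show how a versioned component artefact could be exchanged between repositories such as OSCAR. The element and attribute names below are invented for illustration; this is not a schema defined by the GENESIS or CoDEEDS projects.

```python
# Serialise a hypothetical versioned component artefact to XML using the
# Python standard library; element names are illustrative only.
import xml.etree.ElementTree as ET

def artefact_to_xml(artefact_id: str, project: str, versions) -> str:
    root = ET.Element("artefact", id=artefact_id, project=project)
    for number, author, note in versions:
        v = ET.SubElement(root, "version", number=str(number), author=author)
        v.text = note
    return ET.tostring(root, encoding="unicode")

print(artefact_to_xml(
    "component-0042", "demo-project",
    [(1, "alice", "initial design model"),
     (2, "bob", "refactored interfaces after review")]))
```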

    Why and How Your Traceability Should Evolve: Insights from an Automotive Supplier

    Traceability is a key enabler of various activities in automotive software and systems engineering and is required by several standards. However, most existing traceability management approaches do not consider that traceability is situated in constantly changing development contexts involving multiple stakeholders. Together with an automotive supplier, we analyzed how technology, business, and organizational factors raise the need for flexible traceability. We present how traceability can be evolved in the development lifecycle, from early elicitation of traceability needs to the implementation of mature traceability strategies. Moreover, we shed light on how traceability can be managed flexibly within an agile team and more formally when crossing team and organizational borders. Based on these insights, we present requirements for flexible tool solutions, supporting varying levels of data quality, change propagation, versioning, and organizational traceability.
    Comment: 9 pages, 3 figures, accepted in IEEE Software
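    One concrete ingredient of the "flexible tool solutions" the abstract calls for is change propagation across trace links. The sketch below shows a common traceability-maintenance policy, marking links as "suspect" when their source artifact changes; it is an illustration of the general idea, not the paper's actual mechanism.

```python
# A versioned trace link with a simple suspect-marking propagation policy.
from dataclasses import dataclass

@dataclass
class TraceLink:
    source: str          # e.g. a requirement identifier
    target: str          # e.g. a test case or code module
    link_type: str       # e.g. "verifies", "implements"
    source_rev: int      # source revision the link was validated against
    suspect: bool = False

def propagate_change(links, changed_artifact: str, new_rev: int):
    """Mark links whose source changed so engineers can re-validate them."""
    for link in links:
        if link.source == changed_artifact and new_rev > link.source_rev:
            link.suspect = True
    return [l for l in links if l.suspect]

links = [TraceLink("REQ-12", "TC-7", "verifies", source_rev=3),
         TraceLink("REQ-15", "TC-9", "verifies", source_rev=3)]
print(propagate_change(links, "REQ-12", new_rev=4))  # only the REQ-12 link
```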

    Service-oriented coordination platform for technology-enhanced learning

    It is currently difficult to coordinate learning processes, not only because multiple stakeholders are involved (such as students, teachers, administrative staff, and technical staff), but also because these processes are driven by sophisticated rules (such as rules on how to provide learning material, how to assess students’ progress, and how to share educational responsibilities). This is one of the reasons for the slow progress in technology-enhanced learning, so there is a clear demand for technological facilitation of the coordination of learning processes. In this work, we suggest some solution directions based on SOA (Service-Oriented Architecture). In particular, we propose a coordination service pattern consistent with SOA and based on requirements that follow from an analysis of both learning processes and potentially useful support technologies. We present the service pattern considering both functional and non-functional issues, and we address policy enforcement as well. Finally, we complement our proposed architecture-level solution directions with an example that illustrates our ideas and identifies (i) a short list of educational IT services and (ii) related non-functional concerns, both of which will be considered in future work.
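    The proposed coordination service pattern can be sketched in a few lines: a service that routes learning-process events to registered educational IT services, enforcing policies before each call. All names below are invented for illustration and are not the paper's API.

```python
# A sketch of a coordination service with policy enforcement.
from typing import Callable, Dict, List

Policy = Callable[[str, dict], bool]    # (event, payload) -> allowed?
Handler = Callable[[dict], None]        # an educational IT service endpoint

class CoordinationService:
    def __init__(self, policies: List[Policy]) -> None:
        self._policies = policies
        self._handlers: Dict[str, List[Handler]] = {}

    def register(self, event: str, handler: Handler) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def publish(self, event: str, payload: dict) -> None:
        # Enforce every policy before coordinating the underlying services.
        if not all(p(event, payload) for p in self._policies):
            raise PermissionError(f"policy rejected event {event!r}")
        for handler in self._handlers.get(event, []):
            handler(payload)

# Example policy: only teachers may release assessment results.
def teacher_only(event: str, payload: dict) -> bool:
    return event != "release_results" or payload.get("role") == "teacher"

svc = CoordinationService([teacher_only])
svc.register("release_results", lambda p: print("notify students:", p))
svc.publish("release_results", {"role": "teacher", "course": "TEL101"})
```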

    Will e-Science Be Open Science?

    This contribution examines various aspects of “openness” in research, and seeks to gauge the degree to which contemporary “e-science” practices are congruent with “open science.” Norms and practices of openness are vital for the work of modern scientific communities, but concerns about the growth of stronger technical and institutional restraints on access to research tools, data, and information have recently attracted notice—in part because of their implications for the effective utilization of advanced digital infrastructures and information technologies in research collaborations. Our discussion clarifies the conceptual differences between e-science and open science, and reports findings from a preliminary look at practices in U.K. e-science projects. Both parts serve to emphasize that it is unwarranted to presume that the development of e-science necessarily promotes global open science collaboration. Since there is an evident need for further empirical research to establish where, when, and to what extent “openness” and “e-ness” in scientific and engineering research may be expected to advance hand in hand, we outline a framework within which such a program of studies might be undertaken.
    Keywords: e-Science, Open Science, Engineering Research

    Summary of the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1)

    Challenges related to the development, deployment, and maintenance of reusable software for science are becoming a growing concern. Many scientists’ research increasingly depends on the quality and availability of the software upon which their work is built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts that were grouped into three themes and presented via panels. A set of collaborative notes on the presentations and discussion was taken during the workshop. Unique perspectives were captured on issues such as comprehensive documentation, development and deployment practices, software licenses, and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed, including mechanisms such as Digital Object Identifiers, the publication of “software papers”, and the use of online systems, for example source code repositories like GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science, and how it is affected by the reward systems of science. These incentives can determine the extent to which developers are motivated to build software for the long term and for the use of others, and whether they work collaboratively or separately. The paper also explores community building, leadership, and dynamics in relation to successful scientific software.