Highly focused document retrieval in aerospace engineering: user interaction design and evaluation
Purpose – This paper describes the preliminary studies (on both users and data), and the design and evaluation, of the K-Search system for searching legacy documents in aerospace engineering. Real-world reports of jet engine maintenance challenge current indexing practice, while real users' tasks require retrieving information in its proper context. K-Search is currently in use at Rolls-Royce plc and has evolved to include other tools for knowledge capture and management.
Design/methodology/approach – Semantic Web techniques have been used to automatically extract information from the reports while maintaining the original context, allowing more focused retrieval than with traditional techniques. The paper combines semantic search with classical information retrieval to increase search effectiveness. An innovative user interface has been designed to take advantage of this hybrid search technique and to allow a flexible and personal approach to searching legacy data.
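The combination of classical keyword retrieval with semantic (metadata-based) filtering can be sketched as below. The corpus, field names and scoring are illustrative assumptions for this listing, not the actual K-Search implementation.

```python
# Minimal sketch of hybrid search: classical keyword scoring restricted
# by a semantic metadata filter. All documents and fields are invented.

DOCS = [
    {"id": 1, "text": "oil leak observed in bearing chamber", "component": "bearing"},
    {"id": 2, "text": "vibration reported during climb", "component": "fan"},
    {"id": 3, "text": "oil pressure drop traced to bearing seal", "component": "bearing"},
]

def keyword_score(text, query_terms):
    """Classical IR part: fraction of query terms present in the text."""
    words = set(text.lower().split())
    return sum(t in words for t in query_terms) / len(query_terms)

def hybrid_search(query_terms, metadata_filter=None):
    """Rank by keyword score, restricted to documents whose extracted
    metadata satisfies the semantic filter (if one is given)."""
    results = []
    for doc in DOCS:
        if metadata_filter and any(doc.get(k) != v for k, v in metadata_filter.items()):
            continue  # semantic condition failed: skip regardless of keywords
        score = keyword_score(doc["text"], query_terms)
        if score > 0:
            results.append((score, doc["id"]))
    return [doc_id for _, doc_id in sorted(results, reverse=True)]

print(hybrid_search(["oil", "bearing"], {"component": "bearing"}))
```

Here the semantic filter excludes document 2 even though a pure keyword match might surface it for other queries, which is the focusing effect the abstract describes.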
Findings – The user evaluation showed that the system is effective and well received by users. It also showed that different people look at the same data in different ways and make different use of the same system depending on their individual needs, influenced by their job profile and personal attitude.
Research limitations/implications – This study focuses on a specific case of an enterprise working in aerospace engineering. Although the findings are likely to apply to other engineering domains (e.g. mechanical, electronic), the study does not extend the evaluation to different settings.
Originality/value – The study shows how a real context of use can pose new and unexpected challenges to researchers, and how effective solutions can then be adopted and used in organizations.
A Semantic-based framework for discovering business process patterns
Patterns currently play an important role in modern information systems (IS) development, and their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modeling in IS development, patterns have the potential to provide a viable means of promoting the reuse of recurrent generalized models in the very early stages of development. This paper focuses on business process patterns and proposes an initial framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework synthesizes ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.
Designing a consulting services architecture model
During my years of experience in the technology industry, it has become obvious that standard processes and methodologies within the engineering discipline are at a mature state. The realization, though, is that software engineering specifically lags behind. Most software engineering methodologies that I have studied focus on the mission of software development. It is this realization, and the need for structure, that led me to review existing methodologies used within my company's software services organization. The definition of what a successful software services methodology entails is rather limited. This report will provide a history of existing software engineering methodologies that I have studied, describe an initial services method that was being developed within my organization, develop a new model that addresses previous shortcomings, and identify additional components required to further define a strong software services-oriented delivery methodology.
Metadata
Metadata, or data about data, play a crucial role in the social sciences in ensuring that high-quality documentation and community knowledge are properly captured and surround the data across their entire life cycle, from the early stages of production to secondary analysis by researchers or use by policy makers and other key stakeholders. The paper provides an overview of the social sciences metadata landscape, best practices and related information technologies. It particularly focuses on two specifications – the Data Documentation Initiative (DDI) and the Statistical Data and Metadata Exchange (SDMX) standard – seen as central to a global metadata management framework for social data and official statistics. It also highlights current directions, outlines typical integration challenges, and provides a set of high-level recommendations for producers, archives, researchers and sponsors in order to foster the adoption of metadata standards and best practices in the years to come.
Keywords: social sciences, metadata, data, statistics, documentation, data quality, XML, DDI, SDMX, archive, preservation, production, access, dissemination, analysis
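As a toy illustration of the kind of structured, machine-readable study documentation such specifications enable, the sketch below builds a simplified XML study description. The element names here are invented for illustration and are not the actual DDI or SDMX schema.

```python
# Build a simplified XML metadata record for a study, loosely inspired
# by DDI-style documentation. Element and attribute names are assumptions.
import xml.etree.ElementTree as ET

def build_study_metadata(title, variables):
    """Return an XML tree describing a study and its variables."""
    study = ET.Element("study")
    ET.SubElement(study, "title").text = title
    var_list = ET.SubElement(study, "variables")
    for name, label in variables:
        var = ET.SubElement(var_list, "variable", name=name)
        ET.SubElement(var, "label").text = label
    return study

root = build_study_metadata("Household Survey 2020",
                            [("age", "Age in years"), ("inc", "Monthly income")])
print(ET.tostring(root, encoding="unicode"))
```

The point of such a record is that documentation travels with the data in a form both archives and analysis tools can parse, rather than living in a detached codebook.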
Towards customization: Evaluation of integrated sales, product, and production configuration
Acknowledgement – We are grateful to the anonymous reviewers for their constructive comments, which helped us improve both the quality and presentation of the paper.
An ontological approach for recovering legacy business content
Legacy Information Systems (LIS) pose a challenge for many organizations. On one hand, LIS are viewed as aging systems needing replacement; on the other hand, years of accumulated business knowledge have made these systems mission-critical. Current approaches, however, are often criticized for being overly dependent on technology and for ignoring the business knowledge that resides within LIS. In this light, this paper proposes a means of capturing the business knowledge in a technology-agnostic manner and transforming it in a way that reaps the benefits of clear semantic expression; this transformation is achieved via the careful use of ontology. The approach, called Content Sophistication (CS), aims to provide a model of the business that more closely adheres to the semantics and relationships of objects existing in the real world. The approach is illustrated via an example taken from a case study concerning the renovation of a large financial system, and its outcome is a set of technology-agnostic models that show improvements along several dimensions.
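A minimal sketch of the general idea of lifting legacy schema elements to ontology concepts is given below. The mapping table and concept names are invented for illustration; Content Sophistication itself is a richer modelling process, not this simple renaming.

```python
# Sketch: lift legacy field names to technology-agnostic ontology terms,
# flagging unmapped fields for an analyst. All names are invented.

LEGACY_TO_CONCEPT = {
    "CUST_NO": "Customer.identifier",
    "ACCT_BAL": "Account.balance",
    "TXN_DT": "Transaction.date",
}

def lift_record(legacy_record):
    """Rename legacy fields to ontology terms; collect unmapped fields
    so the mapping can be extended rather than silently losing knowledge."""
    lifted, unmapped = {}, []
    for field, value in legacy_record.items():
        concept = LEGACY_TO_CONCEPT.get(field)
        if concept:
            lifted[concept] = value
        else:
            unmapped.append(field)
    return lifted, unmapped

rec = {"CUST_NO": "C-17", "ACCT_BAL": 120.5, "LEGACY_FLAG": "Y"}
print(lift_record(rec))
```

Keeping the unmapped fields visible, rather than dropping them, reflects the paper's concern that business knowledge embedded in legacy systems should not be discarded during migration.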
Design for manufacture and sustainability in new product development
Design for manufacture is well recognised by industry and is about optimising design to aid production. Today there is a significant and growing trend of recognising what happens to a product once its use phase has finished. Post-consumer processes are now an important consideration during the ab-initio stages of design. Rather than a focus limited to design for manufacture or (more recently) design for assembly, the pressure is now on post-consumer design. Companies need to do this because legislative pressures are increasing and consumers are becoming ever more aware of, and concerned about, environmental issues. End-of-life processing and design for the environment are therefore areas of growing interest. This conference paper investigates, with industry practitioners, their experiences of both the environmental and economic advantages of product life-cycle planning. Legislative pressures and consumer awareness are driving businesses to develop sustainable product design strategies (Jones et al., 2001, p. 27). Changes in the law to protect the environment command companies' attention as they begin to affect profitability. The first British Standard to address design for end-of-life processing, and therefore support industry, is BS 8887-1. Over 60 UK manufacturing and design companies that had bought BS 8887-1 contributed to this research by being interviewed or providing a written response. The research investigated multiple aspects of sustainable design in practice; in this conference paper the focus is its application within the design process.
Usability and open source software.
Open source communities have successfully developed many pieces of software, although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence on the usability of open source software and discuss how the characteristics of open-source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities of developers and users to address issues of usability.
Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web
The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the authors' and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
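The ontology-merging problems raised above, such as conflicts of reference when combining pre-existing ontologies, can be illustrated with a deliberately simplified sketch. The term-to-parent representation and the example names are assumptions for illustration only; real ontology mapping involves far more than comparing labels.

```python
# Toy sketch: merge two ontologies (each a term -> parent mapping) and
# surface conflicts where the same term is placed differently.

def merge_ontologies(a, b):
    """Union of two term->parent maps, collecting terms whose placement
    disagrees between the inputs instead of silently overwriting them."""
    merged, conflicts = dict(a), []
    for term, parent in b.items():
        if term in merged and merged[term] != parent:
            conflicts.append(term)  # same term, different parent: a conflict
        else:
            merged[term] = parent
    return merged, conflicts

ont_a = {"JetEngine": "Engine", "Fan": "JetEngine"}
ont_b = {"Fan": "Component", "Compressor": "JetEngine"}
print(merge_ontologies(ont_a, ont_b))
```

Even in this trivial setting the merge cannot be fully automatic: "Fan" is classified differently by the two sources, which is exactly the kind of conflict of reference that the abstract argues must be detected and resolved.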