    Software reuse issues affecting AdaNET

    The AdaNet program is reviewing its long-term goals and strategies. A significant concern is whether current AdaNet plans adequately address the major strategic issues of software reuse technology. The major reuse issues involved in providing AdaNet services, which should be addressed as part of future AdaNet development, are identified and reviewed. Before significant development proceeds, a plan should be developed to resolve these issues. This plan should also specify a detailed approach to developing AdaNet. A three-phase strategy is recommended. The first phase would consist of requirements analysis and produce an AdaNet system requirements specification. It would consider the requirements of AdaNet in terms of mission needs, commercial realities, administrative policies affecting development, and the experience of AdaNet and other projects promoting the transfer of software engineering technology. Specifically, requirements analysis would be performed to better understand the requirements for AdaNet functions. The second phase would provide a detailed design of the system. AdaNet should be designed with emphasis on the use of existing technology readily available to the AdaNet program. A number of reuse products are available upon which AdaNet could be based; this would significantly reduce the risk and cost of providing an AdaNet system. Once a design was developed, implementation would proceed in the third phase.

    Ontology technology for the development and deployment of learning technology systems - a survey

    The World-Wide Web is undergoing dramatic changes at the moment. The Semantic Web is an initiative to bring meaning to the Web, and it has ontology technology, a knowledge representation framework, at its core. We illustrate the importance of this evolutionary development. We survey five scenarios demonstrating different applications of ontology technologies in the development and deployment of learning technology systems. Ontology technologies are highly useful to organise, personalise, and publish learning content and to discover, generate, and compose learning objects.
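    As one hedged illustration of the kind of ontology-based annotation and discovery the survey discusses, the Python sketch below models a few learning objects as subject-predicate-object triples and retrieves those that cover a given concept or one of its narrower terms; all URIs, predicates, and example objects here are invented for the illustration and are not taken from the surveyed systems.

        # Minimal sketch: ontology-style triples describing learning objects,
        # and a query that discovers objects covering a given concept.
        # All names (URIs, predicates, the example objects) are illustrative only.

        triples = {
            ("lo:IntroToRDF",    "rdf:type",       "lom:LearningObject"),
            ("lo:IntroToRDF",    "dc:subject",     "concept:SemanticWeb"),
            ("lo:IntroToRDF",    "lom:difficulty", "easy"),
            ("lo:OWLModelling",  "rdf:type",       "lom:LearningObject"),
            ("lo:OWLModelling",  "dc:subject",     "concept:Ontology"),
            ("concept:Ontology", "skos:broader",   "concept:SemanticWeb"),
        }

        def objects_covering(concept):
            """Return learning objects annotated with the concept or a narrower one."""
            narrower = {s for (s, p, o) in triples if p == "skos:broader" and o == concept}
            topics = {concept} | narrower
            return {s for (s, p, o) in triples if p == "dc:subject" and o in topics}

        print(objects_covering("concept:SemanticWeb"))
        # -> both example objects, since concept:Ontology is narrower than concept:SemanticWeb

    The same traversal idea is what lets an ontology-aware system personalise or compose content: the query follows relations in the vocabulary rather than matching keywords literally.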

    Stabilizing knowledge through standards - A perspective for the humanities

    Standards are usually thought to generate mixed feelings among scientists. They are often seen as failing to reflect the state of the art in a given domain and as a hindrance to scientific creativity. Still, scientists should in theory be best placed to bring their expertise into standards development, and they are more neutral on issues that typically relate to competing industrial interests. Even if developing standards in the humanities might seem more complex still, we show how it can be made feasible through the experience gained both within the Text Encoding Initiative consortium and the International Organisation for Standardisation. Taking the specific case of lexical resources, we show how this work brings about new ideas for designing future research infrastructures in the human and social sciences.

    Knowledge formalization in experience feedback processes : an ontology-based approach

    Because of the current trend towards integration and interoperability of industrial systems, their size and complexity continue to grow, making it increasingly difficult to analyze, understand and solve the problems that arise in these organizations. Continuous improvement methodologies are powerful tools for understanding and solving problems, controlling the effects of changes and, finally, capitalizing knowledge about changes and improvements. These tools require suitably representing the knowledge relating to the system concerned. Consequently, knowledge management (KM) is an increasingly important source of competitive advantage for organizations. In particular, the capitalization and sharing of knowledge resulting from experience feedback play an essential role in the continuous improvement of industrial activities. In this paper, the contribution deals with semantic interoperability and relates to the structuring and formalization of an experience feedback (EF) process aiming at transforming information or understanding gained by experience into explicit knowledge. The reuse of such knowledge has proved to have a significant impact on achieving the missions of companies. However, the means of describing the knowledge objects of an experience generally remain informal. Based on an experience feedback process model and conceptual graphs, this paper takes domain ontology as a framework for the clarification of explicit knowledge and know-how, the aim of which is to obtain lessons learned descriptions that are significant, correct and applicable.
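    To make the idea of explicit, ontology-grounded experience records a little more concrete, here is a minimal Python sketch; it is not the paper's actual model. Each lesson learned is tied to domain concepts, and reuse is a simple retrieval over those concepts; the field names and concept labels are assumptions made for the example.

        # Illustrative sketch only: one possible way to make an experience-feedback
        # record explicit by linking its fields to domain-ontology concepts.
        from dataclasses import dataclass, field

        @dataclass
        class LessonLearned:
            event: str                                  # observed problem
            context: set = field(default_factory=set)   # ontology concepts describing the situation
            root_cause: str = ""                        # outcome of the analysis
            solution: str = ""                          # validated corrective action

        knowledge_base = [
            LessonLearned(event="Late delivery",
                          context={"Supplier", "Logistics"},
                          root_cause="Missing capacity check",
                          solution="Add capacity review to order release"),
        ]

        def reuse(current_context):
            """Retrieve lessons whose context overlaps the current situation."""
            return [l for l in knowledge_base if l.context & current_context]

        for lesson in reuse({"Logistics", "Production"}):
            print(lesson.event, "->", lesson.solution)

    Anchoring the context field in a shared vocabulary is what allows lessons captured in one project to be found and applied in another, which is the reuse step the abstract describes.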

    Describing Scholarly Works with Dublin Core: A Functional Approach

    This article describes the development of the Scholarly Works Application Profile (SWAP), a Dublin Core application profile for describing scholarly texts. This work provides an active illustration of the Dublin Core Metadata Initiative (DCMI) “Singapore Framework” for Application Profiles, presented at the DCMI Conference in 2007, by incorporating the various elements of Application Profile building as defined by this framework: functional requirements, domain model, description set profile, usage guidelines, and data format. These elements build on the foundations laid down by the Dublin Core Abstract Model and utilize a preexisting domain model (FRBR, Functional Requirements for Bibliographic Records) in order to support the representation of complex data describing multiple entities and their relationships. The challenges of engaging community acceptance and implementation will be covered, along with other related initiatives to support the growing corpus of scholarly resource types, such as data objects, geographic data, multimedia, and images, whose structure and metadata requirements introduce the need for new application profiles. Finally, looking to other initiatives, the article will comment on how Dublin Core relates to the broader scholarly information world, where projects like Object Re-use and Exchange are attempting to better equip repositories to exchange resources.
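    The short Python sketch below suggests, in a hedged way, what an FRBR-style, Dublin Core-based description of a scholarly work might look like. The nesting of Work, Expression, and Manifestation follows the general SWAP idea, but the concrete property names, values, and the example URI are illustrative placeholders rather than the profile's normative serialization.

        # Minimal sketch of an FRBR-style description of a scholarly work using
        # Dublin Core terms. Entity split and all values are illustrative only.

        scholarly_work = {
            "type": "ScholarlyWork",
            "dcterms:title": "Describing Scholarly Works with Dublin Core",
            "dcterms:creator": "Example Author",                               # placeholder value
            "isExpressedAs": [{
                "type": "Expression",
                "dcterms:language": "en",
                "dcterms:status": "peer reviewed",
                "isManifestedAs": [{
                    "type": "Manifestation",
                    "dcterms:format": "application/pdf",
                    "isAvailableAs": "http://repository.example/item/1234",    # placeholder URI
                }],
            }],
        }

        def available_copies(work):
            """Walk the nested entities and list every retrievable copy."""
            for expression in work.get("isExpressedAs", []):
                for manifestation in expression.get("isManifestedAs", []):
                    yield manifestation["isAvailableAs"]

        print(list(available_copies(scholarly_work)))

    Separating the abstract work from its expressions and manifestations is what lets a single record describe, for example, a preprint and a published version without duplicating the bibliographic core.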

    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. The management of experience feedback in this framework is designed to build, analyze and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve processes and products. During problem-solving processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework, which allows a flexible exploitation of expert knowledge during problem-solving processes and the reuse of the collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graph formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) of conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed approach to the formalization of experience feedback processes in the transport industry sector.
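    As a worked illustration of the information-fusion step, the sketch below applies the unnormalised conjunctive combination rule used in the Transferable Belief Model to two sources' basic belief assignments over candidate root causes. The frame of causes and the mass values are invented for the example; only the combination rule itself is standard.

        # Worked sketch of the TBM conjunctive rule: fuse two sources' basic
        # belief assignments over possible root causes. Values are invented.
        from itertools import product

        CAUSES = {"wear", "misuse", "design"}

        # Each source assigns mass to subsets of the frame (frozensets).
        m1 = {frozenset({"wear"}): 0.6, frozenset(CAUSES): 0.4}
        m2 = {frozenset({"wear", "misuse"}): 0.7, frozenset({"design"}): 0.3}

        def conjunctive(m_a, m_b):
            """Unnormalised conjunctive rule: mass of A∩B, conflict kept on the empty set."""
            out = {}
            for (a, wa), (b, wb) in product(m_a.items(), m_b.items()):
                inter = a & b
                out[inter] = out.get(inter, 0.0) + wa * wb
            return out

        fused = conjunctive(m1, m2)
        for focal, mass in fused.items():
            print(set(focal) or "conflict", round(mass, 2))

    Keeping the conflicting mass on the empty set, instead of normalising it away, is the characteristic choice of the Transferable Belief Model: it preserves a measure of disagreement between the sources for later interpretation.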

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (Semantic Grid to Grid is meant to connote a similar relationship to the one that exists between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
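    A rough Python sketch of this service-oriented view follows, under the assumption that stakeholders are modelled as agents advertising capabilities with simple service-level agreements in a marketplace. The class names, fields, and negotiation rule are illustrative and are not part of the paper's architecture.

        # Illustrative only: agents publish service offers with an SLA, and a
        # requester picks the cheapest offer that meets its latency budget.
        from dataclasses import dataclass

        @dataclass
        class ServiceOffer:
            provider: str          # agent offering the service
            capability: str        # e.g. "sequence-alignment", "visualisation"
            max_latency_s: float   # service-level agreement: promised response time
            cost: float            # price per invocation in the marketplace

        marketplace = [
            ServiceOffer("agent-A", "sequence-alignment", max_latency_s=30.0, cost=2.0),
            ServiceOffer("agent-B", "sequence-alignment", max_latency_s=5.0,  cost=9.0),
        ]

        def negotiate(capability, latency_budget_s):
            """Pick the cheapest offer that satisfies the requested SLA."""
            candidates = [o for o in marketplace
                          if o.capability == capability and o.max_latency_s <= latency_budget_s]
            return min(candidates, key=lambda o: o.cost, default=None)

        print(negotiate("sequence-alignment", latency_budget_s=10.0))
        # -> agent-B's offer, the only one meeting the latency budget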