
    Construction of a taxonomy for requirements engineering commercial-off-the-shelf components

    This article presents a procedure for constructing a taxonomy of COTS products in the field of Requirements Engineering (RE). The taxonomy and the information obtained from it bring significant benefits to the selection of systems and tools, helping RE-related actors to simplify and facilitate their work. The taxonomy is built by means of a goal-oriented methodology inspired by GBRAM (Goal-Based Requirements Analysis Method), called GBTCM (Goal-Based Taxonomy Construction Method), which provides a guide for analyzing information sources, modeling requirements and domains, and gathering and organizing knowledge in any segment of the COTS market. GBTCM aims to promote the use of standards and the reuse of requirements in order to support different processes of component selection and integration.
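
    As an illustration of what such a taxonomy might look like in machine-readable form, the sketch below models a goal-annotated tree of COTS categories. This is a minimal, hypothetical representation, not GBTCM itself; the class name `TaxonomyNode`, the sample categories and the placeholder product names are all assumptions for illustration.

```python
# Minimal sketch (not GBTCM itself): one plausible way to represent a
# goal-driven COTS taxonomy as a tree of categories, each annotated with
# the goal it addresses. All names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    category: str                                  # market segment or RE activity
    goal: str                                      # goal the segment addresses (GBRAM-style)
    products: list = field(default_factory=list)   # COTS tools classified here
    children: list = field(default_factory=list)   # sub-categories

    def add_child(self, node):
        self.children.append(node)
        return node

    def find(self, category):
        """Depth-first lookup of a category by name."""
        if self.category == category:
            return self
        for child in self.children:
            hit = child.find(category)
            if hit is not None:
                return hit
        return None

# Illustrative fragment of an RE tool taxonomy.
root = TaxonomyNode("Requirements Engineering tools", "Support the RE process")
mgmt = root.add_child(TaxonomyNode("Requirements management",
                                   "Trace and maintain requirements"))
mgmt.products += ["Tool A", "Tool B"]              # placeholder product names
elic = root.add_child(TaxonomyNode("Elicitation support", "Capture stakeholder needs"))
elic.products.append("Tool C")

print(root.find("Requirements management").products)   # -> ['Tool A', 'Tool B']
```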

    A survey of agent-oriented methodologies

    This article introduces the current agent-oriented methodologies. It discusses the approaches that have been followed (mainly extending existing object-oriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey.

    XML for Domain Viewpoints

    Within research institutions like CERN (European Organization for Nuclear Research) there are often disparate databases (different in format, type and structure) that users need to access in a domain-specific manner. Users may want to access a simple unit of information without having to understand the details of the underlying schema, or they may want to access the same information from several different sources. It is neither desirable nor feasible to require users to have knowledge of these schemas. Instead it would be advantageous if a user could query these sources using his or her own domain models and abstractions of the data. This paper describes the basis of an XML (eXtensible Markup Language) framework that provides this functionality and is currently being developed at CERN. The goal of the first prototype was to explore the possibilities of XML for data integration and model management. It shows how XML can be used to integrate data sources. The framework is not only applicable to CERN data sources but to other environments too.
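
    To make the viewpoint idea concrete, the sketch below maps one domain-level term onto source-specific XPath expressions over two differently structured XML documents. This is a minimal illustration of the general approach, not CERN's actual framework; the element names, the `viewpoint` mapping and the `query` helper are assumptions.

```python
# Minimal sketch of the viewpoint idea (not CERN's framework): a domain
# term maps onto source-specific XPath expressions, so the same query can
# be answered from differently structured XML exports.
import xml.etree.ElementTree as ET

source_a = ET.fromstring(
    "<detectors><detector><id>D1</id><voltage>1500</voltage></detector></detectors>")
source_b = ET.fromstring(
    "<equipment><item name='D2'><hv>1450</hv></item></equipment>")

# Viewpoint: one domain-level term, several source-level access paths.
viewpoint = {
    "detector_voltage": [
        (source_a, ".//detector/voltage"),
        (source_b, ".//item/hv"),
    ]
}

def query(term):
    """Resolve a domain-level term against every mapped source."""
    results = []
    for doc, xpath in viewpoint.get(term, []):
        results += [el.text for el in doc.findall(xpath)]
    return results

print(query("detector_voltage"))   # -> ['1500', '1450']
```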

    Modelling the Developing Mind: From Structure to Change

    This paper presents a theory of cognitive change. The theory assumes that the fundamental causes of cognitive change reside in the architecture of mind. Thus, the architecture of mind as specified by the theory is described first. It is assumed that the mind is a three-level universe involving (1) a processing system that constrains processing potentials, (2) a set of specialized capacity systems that guide understanding of different reality and knowledge domains, and (3) a hypercognitive system that monitors and controls the functioning of all other systems. The paper then specifies the types of change that may occur in cognitive development (changes within the levels of mind, changes in the relations between structures across levels, changes in the efficiency of a structure) and a series of general (e.g., metarepresentation) and more specific mechanisms (e.g., bridging, interweaving, and fusion) that bring the changes about. It is argued that different types of change require different mechanisms. Finally, a general model of the nature of cognitive development is offered. The relations between the theory proposed in the paper and other theories and research in cognitive development and cognitive neuroscience are discussed throughout the paper.

    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend.
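
    The sketch below illustrates the three-layer idea: the same data mining algorithm is described as a specification, an implementation of that specification, and a concrete application run. It is a plain-Python analogy to the layering, not the OntoDM-core ontology itself; the class and field names are assumptions.

```python
# Minimal sketch of a specification / implementation / application layering
# for data mining entities; an analogy only, not OntoDM-core's actual classes.
from dataclasses import dataclass

@dataclass
class AlgorithmSpecification:          # specification layer
    name: str
    task: str                          # e.g. "classification"
    input_datatype: str                # type of structured data it accepts

@dataclass
class AlgorithmImplementation:         # implementation layer
    implements: AlgorithmSpecification
    software: str
    version: str

@dataclass
class AlgorithmApplication:            # application layer
    executes: AlgorithmImplementation
    dataset: str
    parameters: dict

spec = AlgorithmSpecification("decision tree induction", "classification",
                              "attribute-value table")
impl = AlgorithmImplementation(spec, "example-dm-toolkit", "1.0")   # placeholder software
run = AlgorithmApplication(impl, "QSAR-screening-set", {"max_depth": 5})

print(run.executes.implements.task)    # -> classification
```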

    Ontology-based patterns for the integration of business processes and enterprise application architectures

    Increasingly, enterprises are using Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI). SOA has the potential to bridge the gap between business and technology and to improve the reuse of existing applications and the interoperability with new ones. In addition to service architecture descriptions, architecture abstractions like patterns and styles capture design knowledge and allow the reuse of successfully applied designs, thus improving the quality of software. Knowledge gained from integration projects can be captured to build a repository of semantically enriched, experience-based solutions. Business patterns identify the interaction and structure between users, business processes, and data. Specific integration and composition patterns at a more technical level address enterprise application integration and capture reliable architecture solutions. We use an ontology-based approach to capture architecture and process patterns. Ontology techniques for pattern definition, extension and composition are developed, and their applicability in business process-driven application integration is demonstrated.
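
    As a rough illustration of capturing patterns and their relations (definition, extension, composition) in machine-readable form, the sketch below stores a few integration patterns in a small in-memory structure and follows their extension links. It is an assumption-laden toy, not the authors' ontology or notation; the pattern names are drawn from common EAI vocabulary for illustration only.

```python
# Minimal sketch: integration patterns with 'extends' and 'composed_of'
# relations held in a small in-memory structure (not the paper's ontology).
patterns = {
    "Broker": {
        "intent": "decouple service consumers from providers",
        "extends": [],
        "composed_of": [],
    },
    "MessageTranslator": {
        "intent": "convert between message formats",
        "extends": [],
        "composed_of": [],
    },
    "MediatedTransfer": {
        "intent": "route and transform messages between applications",
        "extends": ["Broker"],                        # refines the Broker pattern
        "composed_of": ["Broker", "MessageTranslator"],
    },
}

def transitive_extensions(name):
    """Follow 'extends' links to collect all ancestor patterns."""
    seen, stack = set(), list(patterns[name]["extends"])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack += patterns[parent]["extends"]
    return seen

print(transitive_extensions("MediatedTransfer"))   # -> {'Broker'}
```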

    A group learning management method for intelligent tutoring systems

    In this paper we propose a group management specification and execution method that seeks a compromise between simple course design and complex adaptive group interaction. This is achieved through an authoring method that proposes predefined scenarios to the author. These scenarios already include complex learning interaction protocols in which the use and updating of student and group models are automatically included. The method adopts ontologies to represent domain and student models, and object Petri nets to specify the group interaction protocols. During execution, the method is supported by a multi-agent architecture.
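
    To show what executing an interaction protocol as a Petri net can look like, the sketch below runs a two-step protocol through a generic place/transition net. It is a plain Petri net toy, not the paper's object Petri net formalism or its authoring tool; the protocol steps are invented for illustration.

```python
# Minimal sketch: a two-step group interaction protocol executed as a
# generic place/transition Petri net (not the paper's object Petri nets).
places = {"task_posted": 1, "answers_collected": 0, "discussion_done": 0}

# transition name -> (tokens consumed, tokens produced)
transitions = {
    "students_answer": ({"task_posted": 1}, {"answers_collected": 1}),
    "group_discusses": ({"answers_collected": 1}, {"discussion_done": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(places[p] >= n for p, n in pre.items())

def fire(name):
    """Consume input tokens and produce output tokens if the transition is enabled."""
    if not enabled(name):
        raise ValueError(f"transition {name!r} is not enabled")
    pre, post = transitions[name]
    for p, n in pre.items():
        places[p] -= n
    for p, n in post.items():
        places[p] += n

fire("students_answer")
fire("group_discusses")
print(places)   # -> {'task_posted': 0, 'answers_collected': 0, 'discussion_done': 1}
```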

    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research.
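
    As an illustration of how such a taxonomy can be applied, the sketch below encodes a few dimensions as named facets with allowed values and classifies a hypothetical workflow system against them. The dimension names loosely echo categories discussed in the survey, but the exact value sets and the example system are assumptions.

```python
# Minimal sketch: a faceted taxonomy with allowed values per dimension,
# used to classify a hypothetical Grid workflow system.
TAXONOMY = {
    "workflow_design": {"DAG", "non-DAG"},
    "scheduling": {"centralized", "hierarchical", "decentralized"},
    "fault_tolerance": {"task-level", "workflow-level", "none"},
    "data_movement": {"centralized", "mediated", "peer-to-peer"},
}

def classify(system_name, **facets):
    """Validate a system's facet values against the taxonomy."""
    for dimension, value in facets.items():
        allowed = TAXONOMY.get(dimension)
        if allowed is None:
            raise KeyError(f"unknown dimension: {dimension}")
        if value not in allowed:
            raise ValueError(f"{value!r} is not a valid {dimension} value")
    return {"system": system_name, **facets}

# Hypothetical classification of an example system.
print(classify("ExampleGridWMS",
               workflow_design="DAG",
               scheduling="hierarchical",
               fault_tolerance="task-level",
               data_movement="mediated"))
```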