3,507 research outputs found

    A Methodology for Operationalizing Enterprise Architecture and Evaluating Enterprise IT Flexibility

    We propose a network-based methodology for analyzing a firm’s enterprise architecture. Our methodology uses “Design Structure Matrices” (DSMs) to capture the coupling between components in the architecture, including both business and technology-related elements. It addresses the limitations of prior work, in that it i) is based upon the actual architecture “in-use” as opposed to planned or “idealized” versions; ii) identifies discrete layers in a firm’s architecture associated with different technologies (e.g., applications, servers and databases); iii) reveals the main “flow of control” within an architecture (i.e., the set of inter-connected components); and iv) generates measures of architecture that can be used to predict performance. We demonstrate the application of our methodology using a novel dataset developed with the division of a large pharmaceutical firm. The dataset consists of all components in the enterprise architecture, the observed dependencies between them, and estimated costs of change for software applications within this architecture. We show that measures of the architecture derived from a DSM predict the cost of change for software applications. In particular, applications that are tightly coupled to other components in the architecture cost more to change. The analysis also shows that the measure of coupling that best predicts the cost of change is one that captures all direct and indirect connections between components (i.e., it captures the potential for changes to propagate via all possible paths between components). Our work represents an important step in making the concept of enterprise architecture more operational, thereby improving a firm’s ability to understand and improve its architecture over time.
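    The best-performing coupling measure above, all direct and indirect connections, is the transitive closure of the dependency matrix (sometimes called the visibility matrix). Below is a minimal sketch of how it can be computed from a binary DSM, assuming an invented four-component toy architecture rather than the paper's dataset:

```python
import numpy as np

def visibility_matrix(dsm: np.ndarray) -> np.ndarray:
    """Transitive closure of a binary DSM: entry (i, j) is 1 if component i
    depends on component j directly or through any chain of intermediaries."""
    v = dsm.astype(bool)
    for k in range(v.shape[0]):
        # Warshall step: i reaches j if i reaches k and k reaches j.
        v = v | (v[:, [k]] & v[[k], :])
    return v.astype(int)

# Toy architecture (names invented): row i depends on column j.
components = ["app", "app2", "server", "database"]
dsm = np.array([
    [0, 1, 1, 0],   # app      -> app2, server
    [0, 0, 1, 0],   # app2     -> server
    [0, 0, 0, 1],   # server   -> database
    [0, 0, 0, 0],   # database -> nothing
])

v = visibility_matrix(dsm)
fan_out = v.sum(axis=1)   # components a change here could reach
fan_in = v.sum(axis=0)    # components whose changes could reach here
for name, out, inn in zip(components, fan_out, fan_in):
    print(f"{name:8s} reaches {out}, reachable from {inn}")
```

    Components with high visibility fan-in or fan-out are those through which changes can propagate most widely, which is the intuition behind the cost-of-change result.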

    Construction of a taxonomy for requirements engineering commercial-off-the-shelf components

    This article presents a procedure for constructing a taxonomy of COTS products in the field of Requirements Engineering (RE). The taxonomy, and the information obtained while building it, benefits the selection of systems and tools that help RE practitioners simplify and facilitate their work. The taxonomy is built by means of a goal-oriented methodology inspired by GBRAM (Goal-Based Requirements Analysis Method), called GBTCM (Goal-Based Taxonomy Construction Method), which provides a guide for analyzing sources of information, modeling requirements and domains, and gathering and organizing knowledge in any segment of the COTS market. GBTCM aims to promote the use of standards and the reuse of requirements in order to support different processes of component selection and integration.
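    Purely as an illustration of what a goal-annotated COTS taxonomy can look like as a data structure, here is a hypothetical sketch; the category names, goal strings and `categories_for` helper are invented, not part of GBTCM:

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One category of COTS products, annotated with the RE goals that
    products in the category help satisfy (all names are invented)."""
    name: str
    goals: list[str] = field(default_factory=list)
    children: list["TaxonomyNode"] = field(default_factory=list)

    def categories_for(self, goal: str) -> list[str]:
        """Collect every category in this subtree that addresses `goal`."""
        hits = [self.name] if goal in self.goals else []
        for child in self.children:
            hits.extend(child.categories_for(goal))
        return hits

re_tools = TaxonomyNode("RE tools", children=[
    TaxonomyNode("Elicitation", goals=["capture stakeholder needs"]),
    TaxonomyNode("Management", goals=["trace requirements",
                                      "reuse requirements"]),
])
print(re_tools.categories_for("reuse requirements"))  # ['Management']
```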

    A goal-oriented requirements modelling language for enterprise architecture

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterprise architecture focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. However, little support is available for modelling the underlying motivation of enterprise architectures in terms of stakeholder concerns and the high-level goals that address these concerns. This paper describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the paper illustrates how enterprise architecture can benefit from analysis techniques in the requirements domain.
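    As a rough illustration of the motivation concepts such a language covers, the sketch below encodes one chain from a stakeholder concern to the goals that address and refine it; the class and field names are invented, not the language's (or ArchiMate's) own constructs:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A high-level goal; sub-goals refine it."""
    statement: str
    refined_by: list["Goal"] = field(default_factory=list)

@dataclass
class Concern:
    """Something a stakeholder cares about, addressed by goals."""
    stakeholder: str
    description: str
    addressed_by: list[Goal] = field(default_factory=list)

# One traceability chain from a stakeholder concern down to sub-goals.
cost_concern = Concern(
    stakeholder="CIO",
    description="IT maintenance costs are too high",
    addressed_by=[Goal(
        "Reduce maintenance cost",
        refined_by=[Goal("Consolidate redundant applications"),
                    Goal("Standardize the application platform")],
    )],
)
```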

    Articulating design-time uncertainty with DRUIDE

    Modellers often encounter uncertainty about how to design a particular software model. Existing research has shown how modellers can work in the presence of this type of “design-time uncertainty”. However, the process by which developers come to elicit and express their uncertainties remains unclear. In this thesis, we take steps to address this gap by proposing an uncertainty modelling language and an approach for articulating design-time uncertainty. We illustrate our proposal on a worked example and evaluate it not only on two software engineering scenarios, but also on a real case study based on uncertainties caused by the COVID-19 pandemic. We also conduct a post-study questionnaire with the researchers who participated in the case study. To demonstrate the feasibility of our approach, we provide and discuss two supporting tools. Finally, we highlight the benefits and discuss the limitations of our current work.
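    As a rough illustration of what articulating design-time uncertainty can mean in practice, the sketch below records one open design question together with its candidate alternatives; the class and field names are invented and do not reflect DRUIDE's actual metamodel:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Alternative:
    """One candidate resolution of an open design question."""
    description: str

@dataclass
class UncertaintyPoint:
    """An articulated design-time uncertainty: the open question, the
    alternatives under consideration, and (eventually) the resolution."""
    question: str
    alternatives: list[Alternative] = field(default_factory=list)
    resolution: Optional[Alternative] = None

sessions = UncertaintyPoint(
    question="Store sessions client-side or server-side?",
    alternatives=[Alternative("signed token in a cookie"),
                  Alternative("server-side session store")],
)
# Later, once the uncertainty is resolved:
sessions.resolution = sessions.alternatives[1]
```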

    A comparison of languages which operationalise and formalise KADS models of expertise

    In the field of Knowledge Engineering, dissatisfaction with the rapid-prototyping approach has led to a number of more principled methodologies for the construction of knowledge-based systems. Instead of immediately implementing the gathered and interpreted knowledge in a given implementation formalism, as in the rapid-prototyping approach, many such methodologies centre around the notion of a conceptual model: an abstract, implementation-independent description of the relevant problem-solving expertise. A conceptual model should describe the task which is solved by the system and the knowledge which is required by it. Although such conceptual models have often been formulated in an informal way, recent years have seen the advent of formal and operational languages to describe such conceptual models more precisely, and operationally as a means for model evaluation. In this paper, we study a number of such formal and operational languages for specifying conceptual models. In order to enable a meaningful comparison of such languages, we focus on languages which are all aimed at the same underlying conceptual model, namely that of the KADS method for building KBS. We describe eight formal languages for KADS models of expertise, and compare these languages with respect to their modelling primitives, their semantics, their implementations and their applications. Future research issues in the area of formal and operational specification languages for KBS are identified as a result of studying these languages. The paper also contains an extensive bibliography of research in this area.
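    A KADS model of expertise separates, among other layers, domain knowledge, inference knowledge and task knowledge. Purely as an informal illustration of the structure these languages formalize, a toy schema follows; real specification languages give each layer a precise semantics that this sketch omits:

```python
from dataclasses import dataclass, field

@dataclass
class DomainKnowledge:
    """Static facts and relations about the application domain."""
    concepts: list[str]
    relations: list[tuple[str, str, str]]   # (subject, relation, object)

@dataclass
class InferenceStep:
    """A primitive inference connecting knowledge roles."""
    name: str
    inputs: list[str]
    outputs: list[str]

@dataclass
class TaskKnowledge:
    """Control knowledge: which inferences to apply, and in what order."""
    goal: str
    steps: list[InferenceStep] = field(default_factory=list)

diagnosis = TaskKnowledge(
    goal="diagnose fault",
    steps=[InferenceStep("generate", ["complaint"], ["hypotheses"]),
           InferenceStep("test", ["hypotheses", "observations"],
                         ["diagnosis"])],
)
```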

    An Open Platform for Modeling Method Conceptualization: The OMiLAB Digital Ecosystem

    This paper motivates, describes, demonstrates in use, and evaluates the Open Models Laboratory (OMiLAB): an open digital ecosystem designed to help one conceptualize and operationalize conceptual modeling methods. The OMiLAB ecosystem, motivated by a generalized understanding of “model value”, targets research and education stakeholders who fulfill various roles in a modeling method's lifecycle. While we have many reports on novel modeling methods and tools for various domains, we lack knowledge on conceptualizing such methods via a full-fledged, dedicated open ecosystem and a methodology that offers entry points for novices and an open innovation space for experienced stakeholders. This gap persists due to the lack of an open process and platform for 1) conducting research in the field of modeling method design, 2) developing agile modeling tools and model-driven digital products, and 3) experimenting with and disseminating such methods and related prototypes. OMiLAB incorporates the principles, practices, procedures, tools, and services required to address these issues, since it serves as the operational deployment of a conceptualization and operationalization process built on several pillars: 1) a granularly defined “modeling method” concept whose building blocks one can customize for the domain of choice, 2) an “agile modeling method engineering” framework that helps one quickly prototype modeling tools, 3) a model-aware “digital product design lab”, and 4) dissemination channels for reaching a global community. In this paper, we demonstrate and evaluate OMiLAB in research with two selected application cases for domain- and case-specific requirements. Beyond these exemplary cases, OMiLAB has proven to effectively satisfy requirements raised by almost 50 modeling methods and thus to support researchers in designing novel modeling methods, developing tools, and disseminating outcomes. We also measured OMiLAB's educational impact.
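    One common decomposition of a modeling method, which the building blocks mentioned above follow, distinguishes a modeling language, a modeling procedure, and mechanisms and algorithms that operate on models. Below is a hedged sketch of that decomposition as plain data; the field names and the example method are illustrative, not OMiLAB's actual metamodel:

```python
from dataclasses import dataclass

@dataclass
class ModelingLanguage:
    """Syntax (element types), notation (their symbols) and semantics
    (their meaning), reduced to strings for illustration."""
    element_types: list[str]
    notation: dict[str, str]    # element type -> symbol identifier
    semantics: dict[str, str]   # element type -> informal meaning

@dataclass
class ModelingMethod:
    """The customizable building blocks a method designer assembles."""
    language: ModelingLanguage
    procedure: list[str]        # the steps for applying the language
    mechanisms: list[str]       # analyses/transformations over models

bpm = ModelingMethod(
    language=ModelingLanguage(
        element_types=["Task", "Gateway"],
        notation={"Task": "rounded-box", "Gateway": "diamond"},
        semantics={"Task": "unit of work", "Gateway": "branching point"},
    ),
    procedure=["model the happy path", "add exception flows"],
    mechanisms=["cycle detection", "export to execution engine"],
)
```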

    Entropy and Energy in Characterizing the Organization of Concept Maps in Learning Science

    The coherence and connectivity of knowledge representations such as concept maps are known to be closely related to knowledge production, acquisition and processing. In this study we use network theory to make the clustering and cohesion of concept maps measurable, and show how the distribution of these properties can be interpreted through the Maximum Entropy (MaxEnt) method. This approach allows us to introduce the new concepts of the “energy of cognitive load” and the “entropy of knowledge organization” to describe the organization of knowledge in concept maps.
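    As a concrete illustration of entropy as a measure of knowledge organization, the sketch below computes the Shannon entropy of a concept map's degree distribution; this generic network measure stands in for, and does not reproduce, the paper's MaxEnt treatment:

```python
import math
from collections import Counter

def degree_entropy(edges: list[tuple[str, str]]) -> float:
    """Shannon entropy (in bits) of the degree distribution of an
    undirected concept map given as (concept, concept) links."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    counts = Counter(degree.values())       # degree value -> node count
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A hub-and-spoke map concentrates links on one concept, while a chain
# splits its nodes evenly between degree 1 and degree 2, so the chain's
# degree distribution has the higher entropy here.
star = [("energy", x) for x in ("heat", "work", "entropy", "system")]
chain = [("heat", "work"), ("work", "entropy"), ("entropy", "system")]
print(degree_entropy(star))   # ~0.72 bits
print(degree_entropy(chain))  # 1.0 bit
```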