11 research outputs found
Multi-Media and Web-based Evaluation of Design Artifacts - Syntactic, Semantic and Pragmatic Quality of Process Models
Evaluation of design artifacts is of crucial importance in design science research (DSR). A plethora of evaluation approaches and methods can be found in the literature; nevertheless, little work has been done so far to investigate the relation between evaluation strategies, methods and techniques in DSR evaluations. Prototype implementations, together with case studies, seem to be the dominant techniques of choice for evaluating often complex artifacts. This paper goes beyond the common approach in DSR and presents a multi-media and web-based DSR evaluation approach focussing on syntactic, semantic and pragmatic quality. We present the definition of evaluation criteria, the selection of evaluation methods, and the findings and experiences gained. The results of this paper can support other design science research approaches concerned with the evaluation of concepts or process models.
Engineering transparency requirements: A modelling and analysis framework
Transparency is a requirement that denotes the communication of information that should help an audience make informed decisions. Existing research on transparency in information systems usually focuses on the party who provides transparency and its interrelation with other requirements such as privacy, security and regulatory requirements. Engineering transparency, however, also requires analysing the information receivers' situation and their transparency requirements, as well as the medium used to communicate and present the information. A holistic consideration of transparency will enhance its management and increase its usefulness. In this paper, we provide a novel engineering framework, consisting of a modelling language and nine analytical reasonings, which is meant to represent transparency requirements and detect a set of possible side-effects. Examples of such detections include information overload, information starvation, and transparency leading to biased decisions. We then evaluate the modelling language through a case study and report the results.
A framework for evaluating the quality of modelling languages in MDE environments
This thesis presents the Multiple Modelling Quality Evaluation Framework method (hereinafter MMQEF), which is a conceptual, methodological, and technological framework for evaluating quality issues in modelling languages and modelling elements by the application of a taxonomic analysis. It derives some analytic procedures that support the detection of quality issues in model-driven projects, such as the suitability of modelling languages, traces between abstraction levels, specification for model transformations, and integration between modelling proposals. MMQEF also suggests metrics to perform analytic procedures based on the classification obtained for the modelling languages and artifacts under evaluation.
MMQEF uses a taxonomy that is extracted from the Zachman framework for Information Systems (Zachman, 1987; Sowa and Zachman, 1992), which proposed a visual language to classify elements that are part of an Information System (IS). These elements range from organizational to technical artifacts. The visual language contains a bi-dimensional matrix for classifying IS elements (generally expressed as models) and a set of seven rules to perform the classification. As an evaluation method, MMQEF defines activities in order to derive quality analytics based on the classification applied to modelling languages and elements. The Zachman framework was chosen because it was one of the first and most precise proposals for a reference architecture for IS, a fact recognized by important standards such as ISO 42010 (612, 2011).
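The classification mechanism can be sketched as follows. The perspective and interrogative names below follow the Zachman framework; the example elements and the coverage analytic are illustrative assumptions for this sketch, not part of MMQEF itself.

```python
# Hypothetical sketch of a Zachman-style bi-dimensional classification:
# rows are stakeholder perspectives, columns are interrogatives.
# The example elements and the coverage analytic are illustrative only.

PERSPECTIVES = ["Scope", "Business", "System", "Technology", "Detail"]
INTERROGATIVES = ["What", "How", "Where", "Who", "When", "Why"]

def classify(elements):
    """Place each modelling element into its (perspective, interrogative) cell."""
    matrix = {(p, i): [] for p in PERSPECTIVES for i in INTERROGATIVES}
    for name, perspective, interrogative in elements:
        matrix[(perspective, interrogative)].append(name)
    return matrix

def coverage(matrix):
    """Fraction of taxonomy cells covered by at least one element --
    one simple analytic such a classification can support."""
    filled = sum(1 for cell in matrix.values() if cell)
    return filled / len(matrix)

elements = [
    ("UML class diagram", "System", "What"),
    ("BPMN process model", "Business", "How"),
    ("Deployment diagram", "Technology", "Where"),
]
m = classify(elements)
print(coverage(m))  # 3 of 30 cells covered -> 0.1
```

A gap analysis over such a matrix (empty cells, overloaded cells) is one way quality analytics of the kind MMQEF describes could be derived.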
This thesis presents the conceptual foundation of the evaluation framework, which is based on the definition of quality for model-driven engineering (MDE). The methodological and technological support of MMQEF is also described. Finally, some validations for MMQEF are reported.
Giraldo Velásquez, FD. (2017). A framework for evaluating the quality of modelling languages in MDE environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90628
Factors Affecting the Quality of Enterprise Architecture Models
We start our research by introducing the subject of Enterprise Architecture (EA), its content and
purpose, as well as discussing what we mean by a ‘model’, and ‘quality’, building on concepts from
semiotics and in particular on conceptual model quality.
We set out to answer three questions. The first deals with how we measure the quality of a set of
Enterprise Architecture models, and to answer this we produce a mathematical framework and then
test it using a case study. This extends the conceptual model quality work done by Lindland and
Krogstie into the realm of Enterprise Architecture, adding new aspects related to completeness of sets
of models, modelling maturity, as well as conditions for increasing quality. This incorporates
mathematical concepts, including set theory and calculus, and proposes three specific metrics for the
quality of sets of models (related to truthfulness, syntax and completeness). This uses a simple case
study, based upon purely quantitative data, sampling the contents of an existing Enterprise
Architecture repository.
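As an illustration of what such set-level metrics can look like, the sketch below defines three simple ratio metrics in the spirit of the truthfulness, syntax and completeness metrics described above. The `Model` record and the formulas are hypothetical stand-ins, not the thesis's actual definitions.

```python
# Illustrative sketch (not the thesis's actual formulas): three ratio
# metrics over a set of models, in the spirit of truthfulness, syntactic
# correctness and completeness. All fields and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Model:
    statements: int           # statements the model makes
    true_statements: int      # statements verified against the domain
    syntax_errors: int        # violations of the modelling language
    required_statements: int  # statements the domain requires

def truthfulness(models):
    made = sum(m.statements for m in models)
    true = sum(m.true_statements for m in models)
    return true / made if made else 1.0

def syntactic_quality(models):
    made = sum(m.statements for m in models)
    errors = sum(m.syntax_errors for m in models)
    return 1 - errors / made if made else 1.0

def completeness(models):
    required = sum(m.required_statements for m in models)
    covered = sum(min(m.true_statements, m.required_statements) for m in models)
    return covered / required if required else 1.0

repo = [Model(100, 90, 5, 120), Model(50, 45, 0, 60)]
print(truthfulness(repo), syntactic_quality(repo), completeness(repo))
```

The point of defining the metrics over the whole set, rather than per model, is that completeness is inherently a property of the repository: no single model is expected to cover every required statement.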
The second deals with how we measure the effectiveness of the language used in Enterprise
Architecture models. We again use mathematical techniques to construct metrics, this time related to
comprehension and utility: the former incorporating a triangulation technique based upon Kvanvig’s
concept of moderate factivity of objectual understanding, and the latter being a more subjective
measure (i.e. self-assessment). From these two metrics we provide a new conceptual visualisation of
the effectiveness of language concepts. We then test this framework using a mixed-mode case study,
carrying out 68 interviews, based mostly upon quantitative data again but with additional elements of
qualitative data. Although the conceptual framework is independent of any particular language, in
order to test it we actually need to select an Enterprise Architecture framework, or more specifically,
the modelling language within such a framework; the framework we choose for this purpose is
ArchiMate. Through the use of alternative modelling notations in the survey process, we gain insights not only into the understanding and utility of various ArchiMate concepts, as perceived by respondents, but also into the effect of ArchiMate's specific notation on understanding and utility, through differential analysis of the result sets thus obtained.
The final question we address is more practically focused and deals with how we can specify and
automate various kinds of changes to Enterprise Architecture models based upon the previous
research. We construct a conceptual framework illustrating the kinds of transformations that may be
required, given what we have learnt in the previous chapters, demonstrate that these can be
deterministic and finally demonstrate, by use of a specific Enterprise Architecture modelling tool
(BiZZdesign), that they can be implemented in software, and thus automated.
In the course of our research, we deliver reusable methodologies and frameworks that will assist
future researchers in Enterprise Architecture and related frameworks, as well as Enterprise
Architecture practitioners.
A Graphical Approach to Security Risk Analysis
The CORAS language is a graphical modeling language used to support the security analysis process with its customized diagrams. The language has been developed within the research project "SECURIS" (SINTEF ICT/University of Oslo), where it has been applied and evaluated in seven major industrial field trials.
Experiences from the field trials show that the CORAS language has contributed to more active involvement of the participants, and it has eased communication within the analysis group. The language has been found easy to understand and suitable for presentation purposes.
With time we have become more and more dependent on various kinds of computerized systems. As the complexity of the systems increases, the number of security risks is likely to increase. Security analyses are often considered complicated and time consuming. A well-developed security analysis method should support the analysis process by simplifying communication, interaction and understanding between the participants in the analysis.
This thesis describes the development of the CORAS language that is particularly suited for security analyses where "structured brainstorming" is part of the process. Important design decisions are based on empirical investigations. The thesis has resulted in the following artifacts:
- A modeling guideline that explains how to draw the different kinds of diagrams for each step of the analysis.
- Translation rules that enable consistent translation from graphical diagrams to text.
- Concept definitions that contribute to a consistent use of security analysis terms.
- An evaluation framework for evaluating and comparing the quality of security analysis modeling languages.
Engineering of transparency requirements in business information systems
Transparency is defined as the open flow of high-quality information in a meaningful and useful manner amongst stakeholders in a business information system. Transparency is therefore a requirement of businesses and their information systems. It is typically linked to positive ethical and economic attributes, such as trust and accountability. Despite its importance, transparency is often studied as a secondary concept and viewed through the lenses of adjacent concepts such as security, privacy and regulatory requirements. This has led to a reduced ability to manage transparency and deal with its peculiarities as a first-class requirement. Ad-hoc introduction of transparency may have adverse effects, such as information overload and reduced collaboration. The thesis contributes to the knowledge on transparency requirements by proposing the following. First, this thesis proposes four reference models for transparency. These reference models are based on an extensive literature study in multiple disciplines and provide a foundation for the engineering of transparency requirements in a business information system. Second, this thesis proposes a modelling language for modelling and analysing transparency requirements amongst stakeholders in a business information system. This modelling language is based on the proposed four reference models for transparency. Third, this thesis proposes a method for the elicitation and adaptation of transparency requirements in a business information system. It covers the entire life cycle of transparency requirements and utilises the transparency modelling language for the modelling and analysis of transparency requirements. It benefits from three concepts, crowdsourcing, structured feedback acquisition and social adaptation, for the elicitation and adaptation of transparency requirements. The thesis also evaluates the transparency modelling language in terms of its usefulness and quality using two different case studies.
Then, the feedback acquisition section of the transparency elicitation and adaptation method is evaluated using a third case study. The results of these case studies illustrate the potential and applicability of both the modelling language and the method in the engineering of transparency requirements in business information systems.
Designing Digital Work
Combining theory, methodology and tools, this open access book illustrates how to guide innovation in today's digitized business environment. Highlighting the importance of human knowledge and experience in implementing business processes, the authors take a conceptual perspective to explore the challenges and issues currently facing organizations. Subsequent chapters put these concepts into practice, discussing instruments that can be used to support the articulation and alignment of knowledge within work processes. A timely and comprehensive set of tools and case studies, this book is essential reading for those researching innovation and digitization, organization and business strategy.
Developing a data quality scorecard that measures data quality in a data warehouse
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The main purpose of this thesis is to develop a data quality scorecard (DQS) that aligns the data quality needs of the data warehouse (DW) stakeholder group with selected data quality dimensions. To comprehend the research domain, a general and systematic literature review (SLR) was carried out, after which the research scope was established. Using Design Science Research (DSR) as the methodology to structure the research, three iterations were carried out to achieve the research aim highlighted in this thesis. In the first iteration, following the DSR paradigm, the artefact was built from the results of the general and systematic literature review conducted: a data quality scorecard (DQS) was conceptualised. The results of the SLR and the recommendations for designing an effective scorecard provided the input for the development of the DQS. Using a System Usability Scale (SUS) to validate the usability of the DQS, the results of the first iteration suggest that the DW stakeholders found the DQS useful. The second iteration further evaluated the DQS through a run-through in the FMCG domain followed by semi-structured interviews. The thematic analysis of the semi-structured interviews demonstrated that the stakeholder participants found the DQS to be transparent, an additional reporting tool, well integrated, easy to use, and consistent, and that it increases confidence in the data. However, the timeliness data dimension was found to be redundant, necessitating a modification to the DQS. The third iteration followed similar steps to the second iteration but with the modified DQS in the oil and gas domain. The results from the third iteration suggest that the DQS is a useful tool that is easy to use on a daily basis.
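The SUS instrument used in the validation has a standard scoring rule that is easy to reproduce: ten items answered on a 1-5 scale, odd items positively worded, even items negatively worded, with the raw sum scaled to 0-100. The sketch below implements that standard formula; the example responses are invented.

```python
# Standard System Usability Scale (SUS) scoring. Ten items answered on a
# 1-5 scale; odd items are positively worded (contribute score - 1),
# even items negatively worded (contribute 5 - score). The raw 0-40 sum
# is scaled to 0-100. Example responses below are made up.

def sus_score(responses):
    """responses: list of 10 integers in 1..5 (items 1..10 in order)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible -> 100.0
print(sus_score([4, 2, 4, 2, 3, 2, 4, 1, 4, 2]))  # -> 75.0
```

Note that an SUS score is not a percentage of usability; it is conventionally interpreted against benchmarks, with scores around 68 often taken as average.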
The research contributes to theory by demonstrating a novel approach to DQS design. This was achieved by ensuring the design of the DQS aligns with the data quality concern areas of the DW stakeholders and the data quality dimensions. Further, this research lays a good foundation for the future by establishing a DQS model that can be used as a base for further development.