
    Using empirical studies to mitigate symbol overload in iStar extensions

    Modelling languages are frequently extended with new constructs to be used alongside the original syntax. New constructs may be introduced by adding textual information, such as UML stereotypes, or by creating new graphical representations. These new symbols need to be expressive and proposed carefully to increase the extension's adoption. A method for creating symbols for the original constructs of a modelling language has been proposed and is used to create symbols when a new modelling language is designed. We argue this method can also be used to recommend new symbols for an extension's constructs. However, some adjustments are necessary, since the new symbols will be used together with the existing constructs of the modelling language's original syntax. In this paper, we analyse the use of this adapted method to propose symbols that mitigate the occurrence of overloaded symbols in existing iStar extensions. We analysed the existing iStar extensions in a systematic literature review (SLR) and identified the occurrence of symbol overload among the existing constructs, yielding a set of fifteen overloaded symbols. We used these concepts with symbol overload in a multi-stage experiment that involved users in the visual notation design process. The study involved 262 participants, and its results revealed that most of the new graphical representations were better than those proposed by the extensions with regard to semantic transparency. Thus, the new representations can be used to mitigate this kind of conflict in iStar extensions. Our results suggest that future extension efforts should consider user-generated notation design techniques in order to increase semantic transparency. (Funded under grant UID/CEC/04516/2019.)
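Symbol overload, as studied above, occurs when one graphical symbol is reused for several distinct constructs across extensions. A minimal sketch of detecting such overloads in a notation mapping (construct and symbol names here are invented for illustration, not taken from the paper):

```python
from collections import defaultdict

def find_overloaded_symbols(notation):
    """Given a mapping of construct name -> symbol, return the symbols
    that represent more than one construct (i.e. symbol overload)."""
    by_symbol = defaultdict(list)
    for construct, symbol in notation.items():
        by_symbol[symbol].append(construct)
    return {s: cs for s, cs in by_symbol.items() if len(cs) > 1}

# Hypothetical example: two extension constructs reuse the hexagon shape
notation = {
    "Goal": "oval",
    "SecurityGoal": "hexagon",
    "Vulnerability": "hexagon",
    "Task": "pointed-hexagon",
}
print(find_overloaded_symbols(notation))  # {'hexagon': ['SecurityGoal', 'Vulnerability']}
```

An SLR-derived table of extensions and their symbols could be fed through the same check to enumerate conflicts before proposing replacements.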

    An extension of iStar for Machine Learning requirements by following the PRISE methodology

    The rise of Artificial Intelligence (AI) and Deep Learning has made Machine Learning (ML) a common practice in academia and enterprise. However, a successful ML project requires deep domain knowledge as well as expertise in a plethora of algorithms and data processing techniques. This creates a stronger dependency on, and need for, communication between developers and stakeholders, where numerous requirements come into play. More specifically, in addition to functional requirements such as the output of the model (e.g. classification, clustering or regression), ML projects need to pay special attention to a number of non-functional and quality aspects particular to ML, including explainability, noise robustness and equity, among others. Failure to identify and consider these aspects leads to inadequate algorithm selection and the failure of the project. In this sense, capturing ML requirements becomes critical. Unfortunately, ML requirements modeling approaches are currently absent. Therefore, in this paper we present the first i* extension for capturing ML requirements and apply it to two real-world projects. Our extension covers two main objectives for ML requirements: (i) allowing domain experts to specify objectives and quality aspects to be met by the ML solution, and (ii) facilitating the selection and justification of the most adequate ML approaches. Our case studies show that our work enables better ML algorithm selection and preprocessing implementation tailored to each algorithm, and aids in identifying missing data. In addition, they demonstrate the flexibility of our approach to adapt to different domains. This work has been co-funded by the AETHER-UA project (PID2020-112540RB-C43), a smart data holistic approach for context-aware data analytics: smarter machine learning for business modeling and analytics, funded by the Spanish Ministry of Science and Innovation, and the BALLADEER (PROMETEO/2021/088) project, a Big Data analytical platform for the diagnosis and treatment of Attention Deficit Hyperactivity Disorder (ADHD) featuring extended reality, funded by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana). A. Reina-Reina (I-PI 13/20) holds an Industrial PhD Grant co-funded by the University of Alicante and the Lucentia Lab spin-off company.
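The abstract's two objectives, specifying ML quality aspects and using them to justify algorithm selection, can be illustrated with a small sketch. All names, levels, and candidates below are hypothetical; the paper defines its extension graphically in i*, not in code:

```python
# Hypothetical sketch: ML-specific quality requirements (the abstract
# names explainability, noise robustness, and equity) used to filter
# candidate ML approaches on an ordinal 1-5 scale.
from dataclasses import dataclass

@dataclass
class MLRequirement:
    quality: str        # e.g. "explainability"
    min_level: int      # minimum acceptable level, 1-5

@dataclass
class Candidate:
    name: str
    levels: dict        # quality -> level this algorithm offers

def suitable(candidates, requirements):
    """Return names of candidates meeting every quality requirement."""
    return [c.name for c in candidates
            if all(c.levels.get(r.quality, 0) >= r.min_level
                   for r in requirements)]

reqs = [MLRequirement("explainability", 4), MLRequirement("noise_robustness", 3)]
cands = [
    Candidate("decision_tree", {"explainability": 5, "noise_robustness": 3}),
    Candidate("deep_net", {"explainability": 2, "noise_robustness": 4}),
]
print(suitable(cands, reqs))  # ['decision_tree']
```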

    Formal verification of the extension of iStar to support Big data projects

    Identifying all the right requirements is indispensable for the success of any system. These requirements need to be engineered with precision in the early phases: late corrections are estimated to cost more than 200 times as much as corrections made during requirements engineering (RE). This is especially crucial in the Big Data area, owing to its importance and characteristics. After analysing the literature, we note that current RE methods do not support the elicitation of requirements for Big Data projects. In this study, we propose BiStar, a novel method that extends iStar to address Big Data characteristics such as volume and variety. As a first step, we identify some missing concepts that current requirements engineering methods do not support. Next, BiStar, an extension of iStar, is developed to take the specific characteristics of Big Data into account while dealing with requirements. To ensure the integrity property of BiStar, formal proofs were made: we perform a bigraph-based description of both iStar and BiStar. Finally, both iStar and BiStar are applied to the same illustrative scenario. BiStar shows important results, proving more suitable for eliciting the requirements of Big Data projects.

    iStar 2.0 language guide

    The i* modeling language was introduced to fill a gap in the spectrum of conceptual modeling languages, focusing on the intentional (why?), social (who?), and strategic (how? how else?) dimensions. i* has been applied in many areas, e.g., healthcare, security analysis, and eCommerce. Although i* has seen much academic application, the diversity of extensions and variations can make it difficult for novices to learn and use it in a consistent way. This document introduces the iStar 2.0 core language, evolving the basic concepts of i* into a consistent and clear set of core concepts upon which to build future work and to base goal-oriented teaching materials. This document was built from a set of discussions and input from various members of the i* community. It is our intention to revisit, update and expand the document after collecting examples and concrete experiences with iStar 2.0.
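The core concepts the guide standardises, actors, intentional elements (goals, qualities, tasks, resources), and social dependencies, can be sketched as a minimal object model. This is an illustrative reading of the guide's concepts, not its normative metamodel:

```python
# Illustrative sketch of iStar 2.0 core concepts: actors, intentional
# elements, and the depender -> dependum -> dependee dependency pattern.
from dataclasses import dataclass
from enum import Enum

class ElementKind(Enum):
    GOAL = "goal"
    QUALITY = "quality"
    TASK = "task"
    RESOURCE = "resource"

@dataclass(frozen=True)
class Actor:
    name: str

@dataclass(frozen=True)
class IntentionalElement:
    name: str
    kind: ElementKind

@dataclass(frozen=True)
class Dependency:
    """The depender relies on the dependee for the dependum."""
    depender: Actor
    dependum: IntentionalElement
    dependee: Actor

# Example in the spirit of the guide's travel scenario
traveller = Actor("Traveller")
agency = Actor("Travel Agency")
trip = IntentionalElement("Trip booked", ElementKind.GOAL)
dep = Dependency(traveller, trip, agency)
print(f"{dep.depender.name} --[{dep.dependum.name}]--> {dep.dependee.name}")
```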

    Single object detection to support requirements modeling using faster R-CNN

    Requirements engineering (RE) is one of the most important phases of a software engineering project: it lays the foundation of a software product, and objectives, assumptions, and functional and non-functional needs are analyzed and consolidated. Many modeling notations and tools have been developed to model the information gathered in the RE process; one popular framework is iStar 2.0. Despite the frameworks and notations that have been introduced, many engineers still find it easier to draw the diagrams by hand. Problems arise when the corresponding diagram needs to be updated as requirements evolve. This research aims to kickstart the development of a modeling tool that uses a Faster Region-based Convolutional Neural Network (Faster R-CNN) for single-object detection and recognition of hand-drawn iStar 2.0 objects, with Gleam grayscale conversion and Salt-and-Pepper noise, to digitalize hand-drawn diagrams. The single-object detection and recognition tool was evaluated and displays promising results: an overall accuracy and precision of 95%, a recall of 100%, and an F1 score of 97.2%.
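The reported figures are the standard detection metrics, which relate to each other in a fixed way. A short sketch of how they are computed from true/false positive and false negative counts (the counts below are invented for illustration, and a per-class averaging scheme may explain small differences from the paper's exact F1):

```python
# Standard precision/recall/F1 computed from detection counts.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    return 2 * p * r / (p + r)

tp, fp, fn = 95, 5, 0                      # hypothetical counts
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 3), round(r, 3), round(f1(p, r), 3))  # 0.95 1.0 0.974
```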

    Ontological analysis of means-end links

    The i* community has given rise to several main dialects and dozens of variations in the definition of the i* language. Differences may be found not just in the representation of new concepts but in the very core of the i* language. In previous work we have tackled this issue mainly from a syntactic point of view, using metamodels and syntax-based model interoperability frameworks. In this paper, we go one step further and consider the use of foundational ontologies in general, and UFO in particular, as a way to clarify the meaning of core i* constructs and as the basis for a normative definition. We focus here on one of the most characteristic i* constructs, namely means-end links.

    Quality Evaluation of Requirements Models: The Case of Goal Models and Scenarios

    Context: Requirements Engineering approaches provide expressive modeling techniques for requirements elicitation and analysis. Yet, these approaches struggle to manage the quality of their models, causing difficulties in understanding requirements and increasing development costs. The models' quality should be a permanent concern. Objectives: We propose a mixed-method process for the quantitative evaluation of the quality of requirements models and their modelling activities. We applied the process to goal-oriented (i* 1.0 and iStar 2.0) and scenario-based (ARNE and ALCO use case templates) models, to evaluate their usability in terms of appropriateness recognisability and learnability. Using the GQM approach, we defined (bio)metrics about the models and the way stakeholders interact with them. Methods: The (bio)metrics were evaluated through a family of 16 quasi-experiments with a total of 660 participants, who performed creation, modification, understanding, and review tasks on the models. We measured their accuracy, speed, and ease using metrics of task success, time, and effort, collected with eye-tracking, electroencephalography, electro-dermal activity, and participants' opinions gathered through NASA-TLX. We characterised the participants with GenderMag, a method for evaluating usability with a focus on gender-inclusiveness. Results: For i*, participants had better performance and lower effort when using iStar 2.0, and produced models with lower accidental complexity. For use cases, participants had better performance and lower effort when using ALCO. Participants using a textual representation of requirements had higher performance and lower effort. The results were best for ALCO, followed by ARNE, iStar 2.0, and i* 1.0. Participants with a comprehensive information-processing style and a conservative attitude towards risk (characteristics that are frequently seen in females) took longer to start the tasks but had higher accuracy. The visual and mental effort was also higher for these participants. Conclusions: A mixed-method process, with (bio)metric measurements, can provide reliable quantitative information about the success and effort of a stakeholder while working on tasks over different requirements models.
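The accuracy/speed/ease measures described above reduce, per notation, to aggregates over task trials. A minimal sketch of that aggregation (the trial data and column choices are invented for illustration; the study's actual instruments include eye-tracking, EEG, EDA, and NASA-TLX):

```python
# Hypothetical per-trial records: (notation, task success 0-1,
# completion time in seconds, self-reported effort 0-100).
from statistics import mean

trials = [
    ("iStar 2.0", 1, 210, 42),
    ("iStar 2.0", 1, 185, 38),
    ("i* 1.0",    0, 340, 61),
    ("i* 1.0",    1, 300, 55),
]

def summarize(trials, notation):
    """Aggregate success rate, mean time, and mean effort per notation."""
    rows = [t for t in trials if t[0] == notation]
    return {
        "success_rate": mean(r[1] for r in rows),
        "mean_time_s": mean(r[2] for r in rows),
        "mean_effort": mean(r[3] for r in rows),
    }

print(summarize(trials, "iStar 2.0"))  # {'success_rate': 1, 'mean_time_s': 197.5, 'mean_effort': 40}
```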