
    Early aspects: aspect-oriented requirements engineering and architecture design

    This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on the challenges of defining methodical software development processes for aspects from early on in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.

    Goal-Oriented Requirements Engineering: State of the Art and Research Trend

    Goal-Oriented Requirements Engineering (GORE) is an approach widely used for the early stages of software development, and it has continued to develop over the last three decades. In this paper, a literature study is conducted to determine the state of the art of GORE. The study began with a Systematic Literature Review (SLR) to determine the research trend over the last five years, reviewing 126 papers published from 2016 to 2020. The research continued with a search for scientific articles about GORE by author. Twenty-six authors actively publish GORE research results; they were grouped into seven groups based on their co-authorship of scientific articles. An in-depth study of each group resulted in a holistic mapping of GORE research. Based on the analysis, most research focuses on improving GORE for an automated and reliable RE process, developing new models/frameworks/methods originating from GORE, and implementing GORE for the RE process. This paper contributes a holistic mapping of the GORE approach, identifying the various studies being carried out and research opportunities to increase automation across the entire RE process.
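    The grouping step described above can be illustrated programmatically: treating co-authorship as a graph and taking its connected components yields author groups. The sketch below is illustrative only; the author names and co-author pairs are hypothetical, not the 26 authors from the study.

        # Illustrative sketch: grouping authors into clusters by co-authorship,
        # as in the paper's step of partitioning active GORE authors into
        # groups. Author names and pairs below are hypothetical.
        from collections import defaultdict

        def coauthor_groups(pairs):
            """Return connected components of the co-authorship graph."""
            graph = defaultdict(set)
            for a, b in pairs:
                graph[a].add(b)
                graph[b].add(a)
            seen, groups = set(), []
            for author in graph:
                if author in seen:
                    continue
                stack, component = [author], set()
                while stack:
                    node = stack.pop()
                    if node in component:
                        continue
                    component.add(node)
                    stack.extend(graph[node] - component)
                seen |= component
                groups.append(sorted(component))
            return groups

        pairs = [("Author A", "Author B"), ("Author B", "Author C"),
                 ("Author D", "Author E")]
        print(coauthor_groups(pairs))
        # [['Author A', 'Author B', 'Author C'], ['Author D', 'Author E']]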

    ARCHITECTURE FOR A CBM+ AND PHM CENTRIC DIGITAL TWIN FOR WARFARE SYSTEMS

    The Department of the Navy’s continued progression from time-based maintenance into condition-based maintenance plus (CBM+) shows the importance of increasing operational availability (Ao) across fleet weapon systems. This capstone uses the concept of digital efficiency from a digital twin (DT) combined with a three-dimensional (3D) direct metal laser melting printer as the physical host on board a surface vessel. The DT provides an agnostic conduit for combining model-based systems engineering with a digital analysis for real-time prognostic health monitoring while improving predictive maintenance. With the DT at the forefront of prioritized research and development, the 3D printer combines the value of additive manufacturing with complex systems in dynamic shipboard environments. To demonstrate that the DT possesses parallel abilities for improving both the physical host’s Ao and end-goal mission, this capstone develops a DT architecture and a high-level model. The model focuses on specific printer components (deionized [DI] water level, DI water conductivity, air filters, and laser motor drive system) to demonstrate the DT’s inherent effectiveness towards CBM+. To embody the system-of-systems analysis for printer suitability and performance, more components should be evaluated and combined with the ship’s environment data. Additionally, this capstone recommends the use of DTs as a nexus into more complex weapon systems while using a deeper level of design of experiments. Approved for public release. Distribution is unlimited.
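    As a hedged illustration of the kind of check such a model implies, the sketch below evaluates sensor readings for the four printer components named above against operating bands. All limit and reading values are hypothetical, chosen only to show CBM+-style alerting logic, not taken from the capstone.

        # Minimal sketch of the kind of condition check a digital twin might
        # run against sensor readings from the printer components named in
        # the abstract. All threshold values are hypothetical.
        OPERATING_LIMITS = {
            "di_water_level_pct":          (40.0, 100.0),  # (min, max) band
            "di_water_conductivity_us":    (0.0, 1.0),
            "air_filter_pressure_drop_pa": (0.0, 250.0),
            "laser_motor_drive_temp_c":    (10.0, 60.0),
        }

        def assess_condition(readings):
            """Flag any component whose reading falls outside its band."""
            alerts = []
            for name, value in readings.items():
                low, high = OPERATING_LIMITS[name]
                if not low <= value <= high:
                    alerts.append(f"{name}: {value} outside [{low}, {high}]")
            return alerts or ["all monitored components nominal"]

        print(assess_condition({
            "di_water_level_pct": 35.2,    # below band -> maintenance alert
            "di_water_conductivity_us": 0.4,
            "air_filter_pressure_drop_pa": 180.0,
            "laser_motor_drive_temp_c": 48.5,
        }))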

    Design and evaluation of the FAMILIAR tool

    Software Product Line Engineering (SPLE) aims to efficiently produce multiple software products, on a large scale, that share a common set of core development features. Feature Modeling is a popular SPLE technique used to describe variability in a product family. FAMILIAR (FeAture Model scrIpt Language for manipulation and Automatic Reasoning) is a Domain-Specific Modeling Language (DSML) for manipulating Feature Models (FMs). One of the strengths of the FAMILIAR language is that it provides rich semantics for FM composition operators (aggregate, merge, insert) as well as decomposition operators (slice). The main contribution of this thesis is an integrated graphical modeling environment that significantly improves upon the initial FAMILIAR framework, which was text-based and consisted of loosely coupled parts. As part of this thesis we designed and implemented a new FAMILIAR Tool that provides (1) a fast rendering framework for graphically representing feature models, (2) a configuration editor, and (3) persistence of feature models. Furthermore, we evaluated the usability of our new FAMILIAR Tool in a small experiment primarily focused on assessing quality aspects of newly authored FMs as well as user effectiveness and efficiency.
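    To give a flavor of what a composition operator like merge does, the sketch below union-merges two feature hierarchies. It is a deliberately simplified stand-in, not FAMILIAR's actual merge semantics, which also handle feature groups, cross-tree constraints, and different merge modes; the feature names are hypothetical.

        # Simplified sketch of a feature-model merge in the spirit of
        # FAMILIAR's composition operators: union-merge two models given
        # as {parent: set(children)} hierarchies.
        def merge_union(fm1, fm2):
            """Union-merge two feature models."""
            merged = {}
            for fm in (fm1, fm2):
                for parent, children in fm.items():
                    merged.setdefault(parent, set()).update(children)
            return merged

        phone_a = {"Phone": {"Screen", "Camera"}, "Camera": {"Flash"}}
        phone_b = {"Phone": {"Screen", "GPS"}}
        print(merge_union(phone_a, phone_b))
        # {'Phone': {'Screen', 'Camera', 'GPS'}, 'Camera': {'Flash'}}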

    Towards an Ontology-Based Approach for Reusing Non-Functional Requirements Knowledge

    Requirements Engineering plays a crucial role during the software development process. Many works have pointed out that Non-Functional Requirements (NFRs) are currently more important than Functional Requirements. NFRs can be very complicated to understand due to their diversity and subjective nature. The NDR Framework has been proposed to fill some of the existing gaps by facilitating NFR elicitation and modeling. In this thesis, we introduce a tool that plays a major role in the NDR Framework, allowing software engineers to store and reuse NFR knowledge. The NDR Tool converts the knowledge contained in Softgoal Interdependency Graphs (SIGs) into a machine-readable format that follows the NFR and Design Rationale (NDR) Ontology. It also provides mechanisms to query the knowledge base and produces graphical representations of the results obtained. To evaluate whether our approach aids in eliciting NFRs, we conducted an experiment based on a software development scenario.
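    The core idea of making SIG knowledge machine-readable and queryable can be sketched minimally as contribution triples, as below. The actual NDR Ontology vocabulary is far richer; the softgoals, operationalizations, and contribution labels here are hypothetical.

        # Minimal sketch of storing softgoal contributions as
        # subject-predicate-object triples and querying them. Names are
        # hypothetical, not the NDR Ontology's actual vocabulary.
        sig_triples = [
            ("UseEncryption", "helps", "Security"),
            ("UseEncryption", "hurts", "Performance"),
            ("IndexDatabase", "makes", "Performance"),
        ]

        def contributions_to(softgoal, triples):
            """All (operationalization, contribution) pairs for a softgoal."""
            return [(s, p) for s, p, o in triples if o == softgoal]

        print(contributions_to("Performance", sig_triples))
        # [('UseEncryption', 'hurts'), ('IndexDatabase', 'makes')]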

    Management of quality requirements in agile and rapid software development: A systematic mapping study

    Context: Quality requirements (QRs) describe the desired quality of software, and they play an important role in the success of software projects. In agile software development (ASD), QRs are often ill-defined and not well addressed due to the focus on quickly delivering functionality. Rapid software development (RSD) approaches (e.g., continuous delivery and continuous deployment), which shorten delivery times, are even more prone to neglecting QRs. Despite the significance of QRs in both ASD and RSD, there is limited synthesized knowledge on their management in those approaches. Objective: This study aims to synthesize state-of-the-art knowledge about QR management in ASD and RSD, focusing on three aspects: bibliometrics, strategies, and challenges. Research method: Using a systematic mapping study with a snowballing search strategy, we identified and structured the literature on QR management in ASD and RSD. Results: We found 156 primary studies: 106 empirical studies, 16 experience reports, and 34 theoretical studies. Security and performance were the most commonly reported QR types. We identified various QR management strategies: 74 practices, 43 methods, 13 models, 12 frameworks, 11 advice items, 10 tools, and 7 guidelines. Additionally, we identified 18 categories and 4 non-recurring challenges of managing QRs. The limited ability of ASD to handle QRs, time constraints due to short iteration cycles, limitations regarding the testing of QRs, and neglect of QRs were the top categories of challenges. Conclusion: Management of QRs is significant in ASD and is becoming important in RSD. This study identified research gaps, such as the need for more tools and guidelines, lightweight QR management strategies that fit short iteration cycles, investigation of the link between QR challenges and technical debt, and extension of the empirical validation of existing strategies to a wider context. It also synthesizes QR management strategies and challenges, which may be useful for practitioners.

    Designing A Standard-Based Approach for Security of Healthcare Systems

    Healthcare systems have had the highest cost of breaches in recent years. Data security is one of the biggest obstacles in healthcare systems, as breaches can compromise the integrity, availability, and confidentiality of medical data, and they are expected to increase in the future. It has therefore become necessary to develop systems that provide full protection for patients. Healthcare system security can be improved greatly by involving security requirements in the early phases of system implementation. Usually, security requirements are handled only from a technical viewpoint during the implementation phases; building security in at the implementation phase leads to weaknesses in system security and an increase in violations. This research paper therefore aims to improve the security of healthcare systems by focusing on security requirements in the early phases, making healthcare systems less vulnerable to hacking or any external threat by restricting access to them. It proposes a standard-based approach to healthcare system security that analyzes and combines the system and software security requirements needed to obtain a secure healthcare system architecture. Both types of security requirements are designed into the healthcare architecture based on the COSMIC ISO/IEC 19761 standard. A case study of the proposed standard-based approach uses system and software security requirements specifications to protect the pharmacy system in the healthcare system from ransomware.
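    COSMIC (ISO/IEC 19761) measures functional size by counting data movements, with each Entry, Exit, Read, or Write contributing one COSMIC Function Point (CFP). The sketch below shows that counting rule on a hypothetical pharmacy-system login process; the functional process and its movements are illustrative, not taken from the paper's case study.

        # Hedged sketch of COSMIC (ISO/IEC 19761) sizing: each data movement
        # (Entry, Exit, Read, Write) contributes 1 CFP. The functional
        # process below is hypothetical.
        COSMIC_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

        def cosmic_size(data_movements):
            """Functional size in CFP: one point per data movement."""
            assert all(m in COSMIC_MOVEMENTS for _, m in data_movements)
            return len(data_movements)

        authenticate_user = [
            ("credentials received",       "Entry"),
            ("stored credentials fetched", "Read"),
            ("failed attempt logged",      "Write"),
            ("access decision returned",   "Exit"),
        ]
        print(cosmic_size(authenticate_user), "CFP")  # 4 CFP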

    Meeting the challenges of decentralized embedded applications using multi-agent systems

    Today, embedded applications are becoming large-scale and strongly constrained. They require a decentralized embedded intelligence, generating challenges for embedded systems. A multi-agent approach is well suited to model and design decentralized embedded applications, and it is naturally able to take up some of these challenges. But some specific points have to be introduced, enforced, or improved in multi-agent approaches to reach all features and all requirements. In this article, we present a study of specific activities that can complement the multi-agent paradigm in the “embedded” context. We use our experience with the DIAMOND method to introduce and illustrate these features and activities.

    A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes

    IT infrastructures can be quantitatively described by attributes, like performance or energy efficiency. Ever-changing user demands and economic objectives require varying short-term and long-term decisions to align an IT infrastructure, and particularly its attributes, to this dynamic surrounding. Potentially conflicting attribute goals and the central role of IT infrastructures presuppose decision making based upon reasoning, the process of forming inferences from facts or premises. Existing reasoning approaches are disqualified for this purpose by their focus on specific IT infrastructure parts or on a fixed (small) attribute set: they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor do they address inter- and intra-attribute correlations sufficiently. This thesis presents a process model for the integrated reasoning about quantitative IT infrastructure attributes. The process model’s main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation is based upon model integration, to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, giving it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for What-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning. The process model provides five template classes that collectively formalize all phases, in order to foster reproducibility and reduce error-proneness. Validation of the process model is threefold. A controlled experiment reasons about a Raspberry Pi cluster’s performance and energy efficiency to illustrate feasibility. Besides this, a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, discloses the process model’s level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
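    The notion of a compiled reasoning function can be sketched as the composition of existing attribute models into one mapping from influencing factors to an attribute vector, followed by a What-if analysis. The two toy models below (linear performance and power estimates for a small cluster) are hypothetical placeholders, not the models integrated in the thesis.

        # Illustrative sketch: a reasoning function maps influencing factors
        # onto an attribute vector by composing attribute models. Both toy
        # models below are hypothetical.
        def performance_model(nodes, clock_ghz):
            return nodes * clock_ghz * 4.0          # toy GFLOPS estimate

        def power_model(nodes, clock_ghz):
            return nodes * (2.5 + 1.2 * clock_ghz)  # toy watts estimate

        def reasoning_function(nodes, clock_ghz):
            """Attribute vector: (performance, energy efficiency)."""
            perf = performance_model(nodes, clock_ghz)
            power = power_model(nodes, clock_ghz)
            return {"performance_gflops": perf,
                    "efficiency_gflops_per_w": perf / power}

        # What-if analysis: double the node count of a small cluster.
        print(reasoning_function(nodes=4, clock_ghz=1.5))
        print(reasoning_function(nodes=8, clock_ghz=1.5))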

    SecREP: A Framework for Automating the Extraction and Prioritization of Security Requirements Using Machine Learning and NLP Techniques

    Gathering and extracting security requirements adequately requires extensive effort, experience, and time, as large amounts of data need to be analyzed. While many manual and academic approaches have been developed to tackle the discipline of Security Requirements Engineering (SRE), a need still exists for automating the SRE process. This need stems mainly from the difficult, error-prone, and time-consuming nature of traditional and manual frameworks. Machine learning techniques have been widely used to facilitate and automate the extraction of useful information from software requirements documents and artifacts, and such approaches can yield beneficial results in automating the extraction and elicitation of security requirements. However, extraction alone leaves software engineers with yet another tedious task: prioritizing the most critical security requirements. The competitive and fast-paced nature of software development, in addition to resource constraints, makes the prioritization of security requirements crucial for software engineers to make educated decisions in risk analysis and trade-off analysis. To that end, this thesis presents an automated framework/pipeline for extracting and prioritizing security requirements. The proposed framework, called the Security Requirements Extraction and Prioritization Framework (SecREP), consists of two parts. SecREP Part 1 proposes a machine learning approach for identifying/extracting security requirements from natural language software requirements artifacts (e.g., the Software Requirement Specification, or SRS, document). SecREP Part 2 proposes a scheme for prioritizing the security requirements identified in the previous step. For the first part of the SecREP framework, three machine learning models (SVM, Naive Bayes, and Random Forest) were trained using an enhanced dataset, the “SecREP Dataset,” created as a result of this work. Each model was validated using resampling (80% for training and 20% for validation) and 5-fold cross-validation. For the second part, a prioritization scheme was established with the aid of NLP techniques. The proposed scheme analyzes each security requirement using part-of-speech (POS) tagging and Named Entity Recognition to extract assets, security attributes, and threats from the requirement. Additionally, using a text similarity method, each security requirement is compared to a super-sentence defined based on the STRIDE threat model. This prioritization scheme was applied to the list of security requirements extracted in the case study of part one, and the priority score for each requirement was calculated and showcased.
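    The classification step in Part 1 can be sketched with one of the named model families, an SVM over TF-IDF features, validated with cross-validation as the thesis reports. The sketch below is a hedged illustration: the six requirements and their labels are hypothetical stand-ins for the SecREP Dataset, and the fold count is reduced to fit the toy data.

        # Hedged sketch of SecREP-style security-requirement classification:
        # TF-IDF features + linear SVM, scored with cross-validation. The
        # tiny labeled dataset is hypothetical.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        requirements = [
            "The system shall encrypt all patient records at rest.",
            "The system shall display the daily schedule on the home screen.",
            "Users shall be locked out after five failed login attempts.",
            "The report module shall export results as PDF.",
            "All API calls shall be authenticated with OAuth 2.0 tokens.",
            "The UI shall support a dark color theme.",
        ]
        labels = [1, 0, 1, 0, 1, 0]  # 1 = security requirement

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        # The thesis reports 5-fold cross-validation; 3 folds here only
        # because the toy dataset has just three examples per class.
        scores = cross_val_score(model, requirements, labels, cv=3)
        print("accuracy per fold:", scores)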