
    A Framework for Specifying Business Rules Based on Logic with a Syntax Close to Natural Language

    The systematic interaction of software developers with business domain experts, who are usually not software developers themselves, is crucial to creating and maintaining software systems and has surfaced as a major challenge of modern software engineering. Existing frameworks promoting typical programming languages with artificial syntax are suitable for processing by computers but do not cater to domain experts, who are used to documents written in natural language as a means of interaction. Other frameworks that claim to be fully automated, such as those using natural language processing, are too imprecise to handle typical requirements documents written in heterogeneous flavours of natural language. In this thesis, a framework is proposed that supports the specification of business rules in a form that is, on the one hand, understandable to non-programmers and, on the other hand, semantically founded, which enables processing by computers. This is achieved by the novel Adaptive Business Process and Rule Integration Language (APRIL). Specifications in APRIL can be written in a style close to natural language and are thus suitable for humans, which was empirically evaluated with a representative group of test persons. A useful and uncommon feature of APRIL is the ability to define reusable abstract mixfix operators as sentence patterns that can mimic natural language. The semantic underpinning of the mixfix operators is achieved by customizable atomic formulas, which allow APRIL to be tailored to specific domains. Atomic formulas are given a denotational semantics based on Tempura (an executable subset of Interval Temporal Logic (ITL)) to describe behaviour and on the Object Constraint Language (OCL) to describe invariants and pre- and postconditions. APRIL statements can be used as the basis for automatically generating test code for software systems. An additional aspect of enhancing the quality of specification documents comes with a novel formal-method technique (ISEPI), applicable to behavioural business rules, semantically based on Propositional Interval Temporal Logic (PITL) and complying with the newly discovered 2-to-1 property. This work shows how the ISE subset of ISEPI can be used to express complex behavioural business rules in a more concise and understandable way. ISE is evaluated on an example specification taken from the car industry describing system behaviour, using the tools MONA and PITL2MONA. Finally, a methodology is presented that guides a continuous transformation from a purely natural-language business-rule specification to an APRIL specification, which can then be transformed into test code. The methodologies, language concepts, algorithms, tools and techniques devised in this work are part of the APRIL framework.
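
    APRIL's concrete syntax is not reproduced in this abstract; the following is only a minimal Python sketch of the mixfix-operator idea, in which a sentence pattern with argument holes is matched against a business-rule sentence and mapped to an atomic formula. The pattern syntax and all names here are hypothetical, not APRIL's.

    import re

    # Hypothetical sketch: a mixfix operator is a sentence pattern whose
    # underscores mark argument positions; matching a rule sentence yields
    # an atomic formula (operator name plus arguments) for later semantic
    # interpretation.
    class MixfixOperator:
        def __init__(self, name, pattern):
            self.name = name
            # "every _ must be approved by _" becomes a regex with capture groups
            self.regex = re.compile(
                "^" + re.escape(pattern).replace("_", "(.+?)") + "$",
                re.IGNORECASE,
            )

        def match(self, sentence):
            m = self.regex.match(sentence.strip().rstrip("."))
            return (self.name, m.groups()) if m else None

    approval = MixfixOperator(
        "ApprovedWithin",
        "every _ must be approved by _ within _ days",
    )

    print(approval.match("Every invoice must be approved by a manager within 5 days."))
    # -> ('ApprovedWithin', ('invoice', 'a manager', '5'))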

    Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language-processing and knowledge-representation aspects are developed together with directions for evaluation.
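
    The paper's induced correction patterns are not reproduced here; as a rough illustration of the detection side only, a checker for fuzzy lexical items and negation might look as follows. The word lists are invented stand-ins for LELIE's lexical resources.

    import re

    # Illustrative sketch only: tiny, invented lists standing in for the
    # CNL lexical resources used in a tool like LELIE.
    FUZZY_ITEMS = {"approximately", "sufficient", "adequate", "as soon as possible"}
    NEGATIONS = {"not", "never", "no"}

    def flag_issues(sentence):
        """Return (category, trigger) pairs for fuzzy terms and negations."""
        text = sentence.lower()
        tokens = re.findall(r"[a-z']+", text)
        issues = [("fuzzy", t) for t in tokens if t in FUZZY_ITEMS]
        issues += [("negation", t) for t in tokens if t in NEGATIONS]
        # multi-word fuzzy expressions are matched on the raw string
        issues += [("fuzzy", p) for p in FUZZY_ITEMS if " " in p and p in text]
        return issues

    print(flag_issues("Do not tighten the bolt; apply approximately 20 Nm."))
    # -> [('fuzzy', 'approximately'), ('negation', 'not')]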

    OpenUP/MDRE: A Model-Driven Requirements Engineering Approach for Health-Care Systems

    The domains and problems for which it is desirable to introduce information systems are currently very complex, and the software development process is of the same complexity. One of these domains is health-care. Model-Driven Development (MDD) and Service-Oriented Architecture (SOA) are software development approaches that arose to deal with this complexity, to reduce the time and cost of development, and to increase flexibility and interoperability. However, many techniques and approaches that have been introduced are of little use when not provided under a formalized and well-documented methodological umbrella. A methodology gives the process a well-defined structure that helps in fast and efficient analysis and design, trouble-free implementation, and ultimately improved quality of the software product. While MDD and SOA are gaining momentum toward adoption in the software industry, there is one critical issue yet to be addressed before their power is fully realized. It is beyond dispute that requirements engineering (RE) has become a critical task within the software development process. Errors made during this process may have negative effects on subsequent development steps and on the quality of the resulting software. For this reason, the MDD and SOA development approaches should be taken into consideration not only during design and implementation, as usually occurs, but also during the RE process. The contribution of this dissertation aims at improving the development process of health-care applications by proposing the OpenUP/MDRE methodology. The main goal of this methodology is to enrich the development process of SOA-based health-care systems by focusing on the requirements engineering process in the model-driven context. I believe that the integration of these two highly important areas of software engineering, gathered in one consistent process, will provide practitioners with many benefits. It is noteworthy that the approach presented here was designed for SOA-based health-care applications; however, it also provides means to adapt it to other architectural paradigms or domains. The OpenUP/MDRE approach is an extension of the lightweight OpenUP methodology for iterative, architecture-oriented and model-driven software development. The motivation for this research comes from the experience I gained as a computer science professional working on health-care systems. This thesis also presents a comprehensive study of: i) the requirements engineering methods and techniques being used in the context of model-driven development; ii) known generic but flexible and extensible methodologies, as well as approaches for service-oriented systems development; and iii) requirements engineering techniques used in the health-care industry. Finally, OpenUP/MDRE was applied to a concrete industrial health-care project in order to show the feasibility and accuracy of this methodological approach. Loniewski, G. (2010). OpenUP/MDRE: A Model-Driven Requirements Engineering Approach for Health-Care Systems. http://hdl.handle.net/10251/11652

    User Review Analysis for Requirement Elicitation: Thesis and the framework prototype's source code

    Online reviews are an important channel for requirement elicitation. However, requirements engineers face challenges when analysing online user reviews, such as data volume, technical support, existing techniques, and legal barriers. Juan Wang proposes a framework that solves the user review analysis problem for the purpose of requirement elicitation, setting up a channel from downloading user reviews to structured analysis data. The main contributions of her work are: (1) a framework that solves the user review analysis problem for requirement elicitation; (2) a prototype of this framework that proves its feasibility; (3) experiments that prove the effectiveness and efficiency of this framework. This resource is the latest version of Juan Wang's PhD thesis "User Review Analysis for Requirement Elicitation" and all the source code of the prototype for the framework as the results of her thesis.
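
    The framework's components are not detailed in this summary; the sketch below only illustrates the general shape of such a channel, from raw reviews to structured, requirement-relevant records. The keyword heuristic and all names are hypothetical, not Wang's implementation.

    from dataclasses import dataclass

    # Minimal sketch of the pipeline shape: downloaded reviews go in,
    # structured (category, text) records suitable for analysis come out.
    @dataclass
    class Review:
        text: str
        rating: int

    def classify(review):
        """Crude keyword heuristic standing in for a real classifier."""
        text = review.text.lower()
        if any(k in text for k in ("crash", "bug", "error")):
            return "bug report"
        if any(k in text for k in ("wish", "please add", "would be nice")):
            return "feature request"
        return "other"

    reviews = [Review("App crashes on login", 1),
               Review("Please add dark mode", 4)]
    print([(classify(r), r.text) for r in reviews])
    # -> [('bug report', 'App crashes on login'),
    #     ('feature request', 'Please add dark mode')]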

    Architectural Design Decision Documentation through Reuse of Design Patterns

    The ADMD3 approach presented in this book enhances the documentation of architectural design decisions via the reuse of design patterns. It combines support for evaluating pattern application with semi-automated documentation of decision rationale and trace links. The approach is based on a new kind of design pattern catalogue in which the usual pattern descriptions are captured together with question annotations to the patterns and information on the architectural structure of the patterns.
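
    The book's catalogue schema is not given in this summary; purely as an illustration, a catalogue entry carrying question annotations and the decision record it helps produce might be structured as follows (all field names are invented, not ADMD3's):

    from dataclasses import dataclass, field

    # Hypothetical sketch: a pattern entry annotated with evaluation
    # questions and structural roles, and a decision record that reuses it.
    @dataclass
    class PatternEntry:
        name: str
        intent: str
        questions: list = field(default_factory=list)  # evaluation questions
        roles: list = field(default_factory=list)      # architectural structure

    @dataclass
    class DesignDecision:
        pattern: str
        answers: dict       # question -> answer: the captured rationale
        trace_links: list   # artefacts affected by the decision

    layers = PatternEntry(
        name="Layers",
        intent="Separate concerns into ordered layers",
        questions=["Is strict layering required?",
                   "Does changeability outweigh latency here?"],
        roles=["Layer", "Layer interface"],
    )

    decision = DesignDecision(
        pattern=layers.name,
        answers={layers.questions[0]: "yes"},
        trace_links=["components/ui", "components/service"],
    )
    print(decision.answers)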

    Architectural Design Decision Documentation through Reuse of Design Patterns

    While design decisions on the application of architectural design patterns involve complex trade-offs between functionality and quality properties, such decisions are often made spontaneously, and the documentation of decisions and of trace links to related artefacts is usually insufficient. The approach proposed in this thesis provides support to overcome these problems. It combines support for evaluating design pattern application with semi-automated documentation of decisions and trace links.

    On the Development and Management of Adaptive Business Collaborations.

    Today’s business climate demands a high rate of change with which Information Technology (IT)-minded organizations are required to cope. Organizations face rapidly changing market conditions, new competitive pressures, new regulatory fiats that demand compliance, and new competitive threats. All of these situations and more drive the need for the IT infrastructure of an organization to respond quickly in support of new business models and requirements. This dissertation studies the adaptive development and management of such dynamic business models and requirements. A rule-based environment is developed in which the people who develop and manage business collaborations in organizations can do so in a way that is as independent of specific implementation technologies as possible, in which they can take business requirements into consideration, and in which they can respond to changes as effectively as possible.
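
    The dissertation's environment is not specified in this abstract; the following toy sketch only illustrates the underlying idea of a rule-based layer: business rules are kept as data (a condition and an action), so the policy can change without redeploying implementation code. The rules shown are invented examples.

    # Illustrative sketch: rules as data, evaluated against a collaboration
    # state; changing the business policy means editing the rule list only.
    rules = [
        {"when": lambda order: order["value"] > 10_000,
         "then": "require_second_approval"},
        {"when": lambda order: order["partner_status"] != "certified",
         "then": "run_compliance_check"},
    ]

    def evaluate(order):
        return [r["then"] for r in rules if r["when"](order)]

    print(evaluate({"value": 25_000, "partner_status": "new"}))
    # -> ['require_second_approval', 'run_compliance_check']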

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques for extracting knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time and costs in an organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as the Big Data pipeline. This thesis tackles the improvement of three stages of Big Data pipelines: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focusing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement concerns the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results; optimisation algorithms are a clear example, since data that are not sufficiently accurate and complete can severely affect the search space. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems in distributed environments (i.e., optimisation problems that guarantee finding an optimal solution by exploring the whole search space). Solving this type of problem in the Big Data context is computationally complex, and can be NP-complete, for two reasons. On the one hand, the search space can increase significantly as the amount of data to be processed by the optimisation algorithms grows; this challenge is addressed through a technique for generating and grouping problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets. A minimal sketch of the first two pipeline stages follows this abstract.
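
    Assuming the Apache Spark integration mentioned above, the following PySpark snippet is only a minimal sketch of the Data Preparation and Data Quality stages: it unnests a complex (array-of-struct) column and then discards rows violating an example quality rule. The data, schema and rule are invented for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

    # An order with nested order lines (an array of structs).
    orders = spark.createDataFrame(
        [("o1", [("a", 2), ("b", -1)])],
        "order_id string, lines array<struct<sku:string, qty:int>>",
    )

    # Data Preparation: unnest the array into one flat row per order line.
    flat = (orders
            .select("order_id", F.explode("lines").alias("line"))
            .select("order_id", "line.sku", "line.qty"))

    # Data Quality: example rule "quantities must be positive"; rows that
    # violate it are discarded before the analysis stage.
    clean = flat.where(F.col("qty") > 0)
    clean.show()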

    From Data Modeling to Knowledge Engineering in Space System Design

    The technologies currently employed for modeling complex systems, such as aircraft, spacecraft, or infrastructures, are sufficient for describing the systems but do not allow knowledge about them to be derived. This work provides the means to describe space systems in a way that allows the automation of activities such as deriving knowledge about critical parts of the system’s design, evaluating test success, and identifying single points of failure.
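
    One instance of the kind of automated analysis described, identifying single points of failure, can be phrased as finding the articulation points of a component connectivity graph. The sketch below assumes this graph formulation and uses networkx; the example model and the library choice are illustrative assumptions, not taken from the work itself.

    import networkx as nx

    # Toy spacecraft model: nodes are components, edges are dependencies.
    g = nx.Graph()
    g.add_edges_from([
        ("battery", "power_bus"),
        ("solar_array", "power_bus"),
        ("power_bus", "obc"),        # obc: on-board computer
        ("obc", "transmitter"),
    ])

    # A component whose removal disconnects the graph is a single point
    # of failure for every path that crossed it.
    print(sorted(nx.articulation_points(g)))
    # -> ['obc', 'power_bus']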

    Testing and test-driven development of conceptual schemas

    The traditional focus of Information Systems (IS) quality assurance is the evaluation of the implementation. However, the quality of an IS is largely determined in the first stages of its development. Several studies reveal that more than half of the errors that occur during systems development are requirements errors. A requirements error is a mismatch between the requirements specification and stakeholders' needs and expectations. Conceptual modeling is an essential activity in requirements engineering aimed at developing the conceptual schema of an IS. The conceptual schema is the general knowledge that an IS needs in order to perform its functions. A conceptual schema specification has semantic quality when it is valid and complete. Validity means that the schema is correct (the knowledge it defines is true for the domain) and relevant (the knowledge it defines is necessary for the system). Completeness means that the conceptual schema includes all relevant knowledge. The validation of a conceptual schema pursues the detection of requirements errors in order to improve its semantic quality, and it remains a critical challenge in requirements engineering. In this work we contribute to this challenge, taking into account that, since conceptual schemas of IS can be specified in executable artifacts, they can be tested. In this context, the main contributions of this thesis are (1) an approach to testing conceptual schemas of information systems, and (2) a novel method for the incremental development of conceptual schemas supported by continuous test-driven validation. As far as we know, this is the first work that proposes and implements an environment for automated testing of UML/OCL conceptual schemas, and the first that explores the use of test-driven approaches in conceptual modeling. Testing conceptual schemas may be an important and practical means for their validation: it allows checking correctness and completeness according to stakeholders' needs and expectations and, in conjunction with the automatic checking of basic test adequacy criteria, it also allows analysing the relevance of the elements defined in the schema. The proposed testing environment requires a specialized language for writing tests of conceptual schemas. We defined the Conceptual Schema Testing Language (CSTL), which may be used to specify automated tests of executable schemas specified in UML/OCL. We also describe a prototype implementation of a test processor that makes the approach feasible in practice. The conceptual schema testing approach supports test-last validation of conceptual schemas, but it also makes sense to test incomplete conceptual schemas while they are being developed. This fact lays the groundwork for Test-Driven Conceptual Modeling (TDCM), our second main contribution. TDCM is a novel conceptual modeling method based on the main principles of Test-Driven Development (TDD), an extreme programming method in which a software system is developed in short iterations driven by tests. We have applied the method in several case studies, in the context of Design Research, which is the general research framework we adopted.
    Finally, we also describe an approach for integrating TDCM into a broad set of software development methodologies, including the Unified Process, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods. An illustrative sketch of a conceptual-schema test follows this abstract.
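
    CSTL's concrete syntax is not shown above, so the following Python sketch only illustrates the idea of a conceptual-schema test: instantiate the schema, exercise it, and check an OCL-like invariant. The Employee/Department schema and its invariant are invented examples.

    # Illustrative sketch (plain Python, not actual CSTL syntax).
    class Employee:
        def __init__(self, name, salary):
            self.name, self.salary = name, salary

    class Department:
        def __init__(self, budget):
            self.budget = budget
            self.employees = []

        def hire(self, employee):
            self.employees.append(employee)

        def invariant_salaries_within_budget(self):
            # OCL analogue: self.employees->collect(salary)->sum() <= self.budget
            return sum(e.salary for e in self.employees) <= self.budget

    def test_hiring_respects_budget():
        d = Department(budget=100_000)
        d.hire(Employee("Ada", 60_000))
        assert d.invariant_salaries_within_budget()
        d.hire(Employee("Alan", 50_000))
        assert not d.invariant_salaries_within_budget()  # violation detected

    test_hiring_respects_budget()
    print("conceptual-schema test passed")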