
    Supporting process model validation through natural language generation

    The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to fall back on a discourse in natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
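    The paper defines a far more sophisticated generation pipeline; purely to illustrate the underlying idea, here is a minimal sketch of verbalizing a process model by walking its structure and emitting one sentence per activity. The Activity/XorSplit node types and the tiny sentence templates are assumptions made for this example, not the authors' technique.

```python
# Minimal sketch: turn a toy process model into text by traversal.
# Node types and grammar are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Activity:
    role: str      # who performs the step
    verb: str      # action
    obj: str       # business object

@dataclass
class XorSplit:
    condition: str
    then_branch: list
    else_branch: list

def verbalize(nodes, depth=0):
    indent = "  " * depth
    lines = []
    for node in nodes:
        if isinstance(node, Activity):
            lines.append(f"{indent}The {node.role} {node.verb}s the {node.obj}.")
        elif isinstance(node, XorSplit):
            lines.append(f"{indent}If {node.condition}:")
            lines += verbalize(node.then_branch, depth + 1)
            lines.append(f"{indent}Otherwise:")
            lines += verbalize(node.else_branch, depth + 1)
    return lines

process = [
    Activity("clerk", "receive", "order"),
    XorSplit("the order is complete",
             [Activity("clerk", "approve", "order")],
             [Activity("clerk", "reject", "order")]),
]
print("\n".join(verbalize(process)))
```

    A real verbalization technique would add referring expressions, sentence aggregation, and discourse structure on top of such a traversal.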

    The Role of a Microservice Architecture on cybersecurity and operational resilience in critical systems

    Critical systems are characterized by their high degree of intolerance to threats, in other words, by the high level of resilience they require: depending on the context in which the system is inserted, the slightest failure could imply significant damage, whether in economic terms or in loss of reputation, information, infrastructure, the environment, or human life. The security of such systems is traditionally associated with legacy infrastructures and data centers that are monolithic, which translates into increasingly high evolution and protection challenges. In the current context of rapid transformation, where the variety of threats to systems has been consistently increasing, this dissertation carries out a compatibility study of the microservice architecture, which is distinguished by characteristics such as resilience, scalability, modifiability, and technological heterogeneity; it is flexible under structural adaptation and in rapidly evolving, highly complex settings, making it suited to agile environments. The dissertation also explores what response artificial intelligence, more specifically machine learning, can provide in a context of security and monitorability when combined with a simple banking system that adopts the microservice architecture.
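    The dissertation pairs machine learning with a microservice-based banking system for security monitoring. As a loose, hypothetical illustration of that combination (not the dissertation's actual models, features, or thresholds), one could flag anomalous requests to a service with an unsupervised detector:

```python
# Illustrative sketch: flag anomalous banking-service requests with an
# unsupervised model. Feature choices and figures are assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per request: [amount, latency_ms, requests_last_minute]
normal = np.column_stack([
    rng.normal(120, 30, 500),    # typical transfer amounts
    rng.normal(80, 15, 500),     # typical latency
    rng.poisson(3, 500),         # typical client request rate
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[9000.0, 400.0, 60.0]])  # large, slow, bursty
print(model.predict(suspicious))  # -1 => anomaly, 1 => normal
```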

    Content diffusion in ALERT clinical applications

    Internship carried out at ALERT, supervised by Eng. Tiago Silva. Integrated Master's thesis in Informatics and Computing Engineering, Faculty of Engineering, University of Porto. 200

    CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK

    Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations. The large number of configuration options and their interactions make the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities, and performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly configurable software systems, we first perform a study of performance bug characteristics in three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported on the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges of replicating confirmed performance bugs and potential workarounds. We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that can help developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling, to profile the program execution and identify the configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application towards higher long-term performance gains.
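    As a rough sketch of the profiling idea described above (sample configurations, measure performance, learn which options and interactions matter), the following toy example fits a random-forest regressor over an invented four-option system; it is not CONFPROFITT's implementation:

```python
# Sketch: sample configurations, measure a performance metric, and
# learn which options influence it. The toy system is an assumption.

import itertools
import random
from sklearn.ensemble import RandomForestRegressor

OPTIONS = ["cache", "compress", "verify", "parallel"]

def run_system(config):
    """Stand-in workload whose cost depends on configuration."""
    cost = 1.0
    if config["compress"]:
        cost += 2.0                       # compression is expensive...
    if config["compress"] and config["cache"]:
        cost += 3.0                       # ...and interacts with caching
    return cost + random.gauss(0, 0.1)    # measurement noise

samples = [dict(zip(OPTIONS, bits))
           for bits in itertools.product([0, 1], repeat=len(OPTIONS))]
X = [[c[o] for o in OPTIONS] for c in samples]
y = [run_system(c) for c in samples]

model = RandomForestRegressor(random_state=0).fit(X, y)
for name, imp in sorted(zip(OPTIONS, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")
```

    In a real highly configurable system, the measurements would come from actual executions and sampling strategies would replace the exhaustive enumeration used here.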

    Role-based Data Management

    Database systems form an integral component of today’s software systems; as such, they are the central point for storing and sharing a software system’s data while ensuring global data consistency. Introducing the primitives of roles and their accompanying metatype distinction into modeling and programming languages results in a novel paradigm for designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapt its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity’s separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are gathered under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on various layers. For role-based software systems combined with relational database systems, this gap in semantics between applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles or the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system’s data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back to its rightful position as the single point of truth in a software system, this thesis introduces the novel, tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, a query language adapted to that database model, and a proper result representation. Precisely, RSQL’s logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows for dynamic structure adaptations at runtime without changing an entity’s overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model’s data structures complete the database model. The query language, as the external database system interface, features individual data definition, data manipulation, and data query languages. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned.
    As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionalities for navigating through query results. Finally, we evaluate all three RSQL components against a relational database system. This assessment clearly demonstrates the benefits of the full database integration of the roles concept.
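    RSQL integrates roles directly into the database model and query language; the following sketch only illustrates the role concept itself at the programming-language level, with hypothetical class names, to make the "rigid core plus dynamically acquired roles" idea concrete:

```python
# Toy illustration of the role concept: an entity keeps a rigid core
# but gains and loses roles (and their attributes) at runtime.
# Class and role names are hypothetical, not RSQL's constructs.

class Entity:
    def __init__(self, name):
        self.name = name          # rigid core
        self.roles = {}           # context-dependent parts

    def acquire(self, role_name, **attrs):
        self.roles[role_name] = attrs

    def drop(self, role_name):
        self.roles.pop(role_name, None)

    def __repr__(self):
        return f"{self.name} playing {list(self.roles)}"

p = Entity("Alice")
p.acquire("Employee", salary=50_000, department="R&D")
p.acquire("Customer", account="DE-123")
print(p)                  # Alice playing ['Employee', 'Customer']
p.drop("Employee")        # structure changes, type stays the same
print(p.roles)            # {'Customer': {'account': 'DE-123'}}
```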

    Virtual Platform-Based Design Space Exploration of Power-Efficient Distributed Embedded Applications

    Networked embedded systems are essential building blocks of a broad variety of distributed applications ranging from agriculture to industrial automation to healthcare and more. These often require specific energy optimizations to increase the battery lifetime or to operate using energy harvested from the environment. Since a dominant portion of power consumption is determined and managed by software, the software development process must have access to the sophisticated power management mechanisms provided by state-of-the-art hardware platforms to achieve the best tradeoff between system availability and reactivity. Furthermore, inter-node communications must be considered to properly assess the energy consumption. This article describes a design flow based on a SystemC virtual platform including both accurate power models of the hardware components and a fast abstract model of the wireless network. The platform allows both model-driven design of the application and the exploration of power and network management alternatives. These can be evaluated in different network scenarios, allowing one to exploit power optimization strategies without requiring expensive field trials. The effectiveness of the approach is demonstrated via experiments on a wireless body area network application.
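    The article's SystemC power models are far more detailed, but the kind of trade-off the platform explores can be illustrated with a back-of-the-envelope duty-cycle model; all current and capacity figures below are assumed for the example, not taken from the article:

```python
# Rough sketch of the availability-vs-lifetime trade-off a virtual
# platform lets designers explore. All figures are assumptions.

BATTERY_MAH = 1000.0
I_ACTIVE_MA = 20.0      # MCU + radio awake
I_SLEEP_MA = 0.005      # deep sleep

def lifetime_days(wake_period_s, awake_s=0.05):
    duty = awake_s / wake_period_s
    i_avg = duty * I_ACTIVE_MA + (1 - duty) * I_SLEEP_MA
    return BATTERY_MAH / i_avg / 24.0

for period in (0.1, 1.0, 10.0, 60.0):
    print(f"wake every {period:>5}s -> {lifetime_days(period):8.1f} days")
```

    Shorter wake periods improve reactivity at the cost of battery lifetime; the virtual platform allows such alternatives to be compared, together with network effects, without field trials.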

    Testing and test-driven development of conceptual schemas

    The traditional focus for Information Systems (IS) quality assurance relies on the evaluation of its implementation. However, the quality of an IS is largely determined in the first stages of its development. Several studies reveal that more than half the errors that occur during systems development are requirements errors. A requirements error is defined as a mismatch between the requirements specification and stakeholders' needs and expectations. Conceptual modeling is an essential activity in requirements engineering aimed at developing the conceptual schema of an IS. The conceptual schema is the general knowledge that an IS needs in order to perform its functions. A conceptual schema specification has semantic quality when it is valid and complete. Validity means that the schema is correct (the knowledge it defines is true for the domain) and relevant (the knowledge it defines is necessary for the system). Completeness means that the conceptual schema includes all relevant knowledge. The validation of a conceptual schema pursues the detection of requirements errors in order to improve its semantic quality. Conceptual schema validation is still a critical challenge in requirements engineering. In this work we contribute to this challenge, taking into account that, since conceptual schemas of IS can be specified in executable artifacts, they can be tested. In this context, the main contributions of this thesis are (1) an approach to test conceptual schemas of information systems, and (2) a novel method for the incremental development of conceptual schemas supported by continuous test-driven validation. As far as we know, this is the first work that proposes and implements an environment for automated testing of UML/OCL conceptual schemas, and the first work that explores the use of test-driven approaches in conceptual modeling. The testing of conceptual schemas may be an important and practical means for their validation. It allows checking correctness and completeness according to stakeholders' needs and expectations. Moreover, in conjunction with the automatic check of basic test adequacy criteria, we can also analyze the relevance of the elements defined in the schema. The testing environment we propose requires a specialized language for writing tests of conceptual schemas. We defined the Conceptual Schema Testing Language (CSTL), which may be used to specify automated tests of executable schemas specified in UML/OCL. We also describe a prototype implementation of a test processor that makes the approach feasible in practice. The conceptual schema testing approach supports test-last validation of conceptual schemas, but it also makes sense to test incomplete conceptual schemas while they are being developed. This fact lays the groundwork for Test-Driven Conceptual Modeling (TDCM), which is our second main contribution. TDCM is a novel conceptual modeling method based on the main principles of Test-Driven Development (TDD), an extreme programming method in which a software system is developed in short iterations driven by tests. We have applied the method in several case studies, in the context of Design Research, which is the general research framework we adopted. Finally, we also describe an integration approach of TDCM into a broad set of software development methodologies, including the Unified Process development methodology, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods.
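    CSTL is a dedicated language for testing UML/OCL schemas; the sketch below only mimics its spirit in plain Python, building instances of a hypothetical Order schema and checking both an OCL-like invariant (validity) and a derived value (completeness of the specified behavior):

```python
# Sketch in the spirit of conceptual schema testing, not CSTL syntax:
# instantiate the (hypothetical) schema and assert its invariants.

import unittest

class Order:
    def __init__(self):
        self.lines = []

    def add_line(self, qty, price):
        assert qty > 0, "invariant: quantity must be positive"
        self.lines.append((qty, price))

    def total(self):
        return sum(q * p for q, p in self.lines)

class OrderSchemaTest(unittest.TestCase):
    def test_total_is_sum_of_lines(self):
        o = Order()
        o.add_line(2, 10.0)
        o.add_line(1, 5.0)
        self.assertEqual(o.total(), 25.0)        # expected derivation

    def test_invariant_rejects_invalid_state(self):
        with self.assertRaises(AssertionError):  # validity check
            Order().add_line(0, 10.0)

if __name__ == "__main__":
    unittest.main()
```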

    Model-driven generative programming for BIS mobile applications

    The surge in availability of smartphones based on the Android platform calls for cost-effective techniques to generate mobile apps for general-purpose, distributed business information systems (BIS). To mitigate this problem, our research aims at applying model-driven techniques to automatically generate usable prototypes with a sound, maintainable architecture. Following three base principles (model-based generation, separation of concerns, and paradigm seamlessness), we try to answer the main guiding question: how can development time and cost be reduced by transforming a given domain model into an Android application? To answer this question, we propose to develop an application that follows a generative approach for mobile BIS apps and mitigates the identified problems. Its input is a platform-independent model (PIM), with business rules specified in OCL (Object Constraint Language). We adopted the Design Science Research methodology, which helps in gaining problem understanding, identifying systemically appropriate solutions, and effectively evaluating new and innovative solutions. To evaluate our solution, besides resorting to third-party tools to test the integration of specific components, we demonstrated its usage and evaluated how well it mitigates a subset of the identified problems in an observational study (we presented our generated apps to an outside audience in a controlled environment to study the understandability of our model-centered approach and of the generated apps in general) and communicated its effectiveness to researchers and practitioners.
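    To give a flavor of the generative step (a platform-independent domain model transformed into application code), here is a deliberately minimal sketch; the model format and the Java-like target are illustrative assumptions, not the dissertation's toolchain:

```python
# Minimal flavor of model-driven generation: read a domain model and
# emit application code. Model format and target are assumptions.

DOMAIN_MODEL = {
    "Customer": {"name": "String", "balance": "double"},
    "Account":  {"iban": "String", "owner": "Customer"},
}

def generate_class(name, attrs):
    fields = "\n".join(f"    private {t} {a};" for a, t in attrs.items())
    return f"public class {name} {{\n{fields}\n}}\n"

for name, attrs in DOMAIN_MODEL.items():
    print(generate_class(name, attrs))
```

    A full generator would additionally translate OCL business rules into validation code and emit the user interface and persistence layers.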

    HeTM: Transactional Memory for Heterogeneous Systems

    Modern heterogeneous computing architectures, which couple multi-core CPUs with discrete many-core GPUs (or other specialized hardware accelerators), enable unprecedented peak performance and energy efficiency levels. Unfortunately, developing applications that can take full advantage of the potential of heterogeneous systems is a notoriously hard task. This work takes a step towards reducing the complexity of programming heterogeneous systems by introducing the abstraction of Heterogeneous Transactional Memory (HeTM). HeTM provides programmers with the illusion of a single memory region, shared among the CPUs and the (discrete) GPU(s) of a heterogeneous system, with support for atomic transactions. Besides introducing the abstract semantics and programming model of HeTM, we present the design and evaluation of a concrete implementation of the proposed abstraction, which we named Speculative HeTM (SHeTM). SHeTM makes use of a novel design that leverages speculative techniques, aiming to hide the inherently large communication latency between CPUs and discrete GPUs and to minimize inter-device synchronization overhead. SHeTM is based on a modular and extensible design that allows for easily integrating alternative TM implementations on the CPU's and GPU's sides, giving the flexibility to adopt, on either side, the TM implementation (e.g., in hardware or software) that best fits the application's workload and the architectural characteristics of the processing unit. We demonstrate the efficiency of SHeTM via an extensive quantitative study based both on synthetic benchmarks and on a port of a popular object caching system. (Accepted at the 28th International Conference on Parallel Architectures and Compilation Techniques, PACT'19.)
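    HeTM's novelty lies in spanning CPU and discrete GPU memory; the sketch below shows only the basic single-device building block it generalizes, an optimistic transaction that runs speculatively and validates at commit time. Names and structure are illustrative, not SHeTM's API:

```python
# Sketch of an optimistic software transaction: run speculatively on a
# snapshot, commit only if no watched variable changed meanwhile.
# This is the generic TM idea, not SHeTM's CPU+GPU protocol.

import threading

_commit_lock = threading.Lock()

class TVar:
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(tx_fn, *tvars):
    """Retry tx_fn until it commits without conflicting updates."""
    while True:
        snapshot = {v: (v.value, v.version) for v in tvars}
        writes = tx_fn({v: val for v, (val, _) in snapshot.items()})
        with _commit_lock:
            if all(v.version == ver for v, (_, ver) in snapshot.items()):
                for v, new_val in writes.items():
                    v.value, v.version = new_val, v.version + 1
                return
        # conflict: another transaction committed first; retry

a, b = TVar(100), TVar(0)

def transfer(reads):
    return {a: reads[a] - 10, b: reads[b] + 10}

atomically(transfer, a, b)
print(a.value, b.value)   # 90 10
```

    SHeTM's challenge is doing this validation across devices, where the CPU-GPU round trip is orders of magnitude more expensive than the lock above.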

    An integrated hardware/software design methodology for signal processing systems

    This paper presents a new methodology for design and implementation of signal processing systems on system-on-chip (SoC) platforms. The methodology is centered on the use of lightweight application programming interfaces for applying principles of dataflow design at different layers of abstraction. The development processes integrated in our approach are software implementation, hardware implementation, hardware-software co-design, and optimized application mapping. The proposed methodology facilitates development and integration of signal processing hardware and software modules that involve heterogeneous programming languages and platforms. As a demonstration of the proposed design framework, we present a dataflow-based deep neural network (DNN) implementation for vehicle classification that is streamlined for real-time operation on embedded SoC devices. Using the proposed methodology, we apply and integrate a variety of dataflow graph optimizations that are important for efficient mapping of the DNN system into a resource-constrained implementation that involves cooperating multicore CPUs and field-programmable gate array subsystems. Through experiments, we demonstrate the flexibility and effectiveness with which different design transformations can be applied and integrated across multiple scales of the targeted computing system.
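    The paper's lightweight APIs target heterogeneous SoC implementations; the following sketch illustrates only the underlying dataflow principle, actors that fire when enough tokens are available on their input FIFOs. The actors, edges, and token counts are made up for the example:

```python
# Sketch of the dataflow principle: actors consume tokens from input
# FIFOs and produce tokens on outputs. Graph and rates are invented.

from collections import deque

class Actor:
    def __init__(self, fn, inputs, output, consume=1):
        self.fn, self.inputs, self.output = fn, inputs, output
        self.consume = consume            # tokens needed per firing

    def can_fire(self):
        return all(len(q) >= self.consume for q in self.inputs)

    def fire(self):
        args = [tuple(q.popleft() for _ in range(self.consume))
                for q in self.inputs]
        self.output.append(self.fn(*args))

src_to_scale, scale_to_sum = deque([1, 2, 3, 4]), deque()
result = deque()

scale = Actor(lambda xs: 10 * xs[0], [src_to_scale], scale_to_sum)
summer = Actor(lambda xs: xs[0] + xs[1], [scale_to_sum], result, consume=2)

graph = [scale, summer]
while any(a.can_fire() for a in graph):   # naive demand-driven schedule
    for a in graph:
        if a.can_fire():
            a.fire()
print(list(result))   # [30, 70]
```

    Because firings depend only on token availability, the same graph can be retargeted to software threads or hardware subsystems, which is the property the paper's methodology exploits.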