
    Securely extending and running low-code applications with C#

    Low-code development platforms provide an accessible infrastructure for the creation of software by domain experts, also called "citizen developers", without the need for formal programming education. Development is facilitated through graphical user interfaces, although traditional programming can still be used to extend low-code applications, for example when external services or complex business logic must be implemented that cannot be realized with the features available on a platform. Since citizen developers are usually not specifically trained in software development, they require additional support when writing code, particularly with regard to security and advanced techniques like debugging or versioning. In this thesis, several options to assist developers of low-code applications are investigated and implemented. A framework for quickly building code editor extensions is developed, and an approach that leverages the Roslyn compiler platform to implement custom static code analysis rules for low-code development platforms built on .NET is demonstrated. Furthermore, a sample application showing how Roslyn can be used to build a simple, integrated debugging tool, as well as an abstraction of the version control system Git for easier usage by citizen developers, is implemented. Security is a critical aspect when low-code applications are deployed. To provide an overview of the possible options for ensuring the secure and isolated execution of low-code applications, a threat model is developed and used as the basis for a comparison between OS-level virtualization, sandboxing, and runtime code security implementations.
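    As a rough illustration of the kind of Roslyn-based rule described above (not code from the thesis), the following C# analyzer sketch flags calls to System.Diagnostics.Process.Start, the sort of API a platform might forbid in citizen-developer code; the rule id and messages are invented:

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

// Hypothetical analyzer: warns when low-code extension code calls
// System.Diagnostics.Process.Start, which a platform may want to forbid.
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class ForbiddenApiAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "LC0001",                        // example id, not from the thesis
        title: "Forbidden API call",
        messageFormat: "Call to '{0}' is not allowed in low-code extensions",
        category: "Security",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;
        // Resolve the called method symbol and check its containing type.
        if (context.SemanticModel.GetSymbolInfo(invocation).Symbol is IMethodSymbol method &&
            method.ContainingType.ToDisplayString() == "System.Diagnostics.Process" &&
            method.Name == "Start")
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, invocation.GetLocation(),
                method.ToDisplayString()));
        }
    }
}
```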

    Understanding and Improving Continuous Experimentation: From A/B Testing to Continuous Software Optimization

    Controlled experiments (i.e., A/B tests) are used by many companies with user-intensive products to improve their software with user data. Some companies adopt an experiment-driven approach to software development with continuous experimentation (CE). With CE, every user-affecting software change is evaluated in an experiment, and specialized roles seek out opportunities to experiment with functionality. The goal of the thesis is to describe current practice and support CE in industry. The main contributions are threefold. First, a review of the CE literature on: infrastructure and processes, the problem-solution pairs applied in industry practice, and the benefits and challenges of the practice. Second, a multi-case study with 12 companies to analyze how experimentation is used and why some companies fail to fully realize the benefits of CE. A theory for Factors Affecting Continuous Experimentation (FACE) is constructed to realize this goal. Finally, a toolkit called Constraint Oriented Multi-variate Bandit Optimization (COMBO) is developed for supporting automated experimentation with many variables simultaneously, live in a production environment. The research in the thesis is conducted under the design science paradigm using empirical research methods, with simulation experiments of tool proposals and a multi-case study on company usage of CE. Other research methods include systematic literature review and theory building. From FACE we derive three factors that explain CE utility: (1) investments in data infrastructure, (2) user problem complexity, and (3) incentive structures for experimentation. Guidelines are provided on how to strive towards state-of-the-art CE based on company factors. All three factors are relevant for companies wanting to use CE, in particular for those wanting to apply algorithms such as those in COMBO to support personalization of software to users' context in a process of continuous optimization.
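    COMBO itself is not reproduced here. As a minimal sketch of the underlying idea (a bandit that optimizes several variables at once while honoring constraints), the following hypothetical C# enumerates the valid combinations of two illustrative variables, drops those a constraint forbids, and applies epsilon-greedy selection with reward updates:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of constraint-aware multi-variate bandit optimization
// (epsilon-greedy); this is not the actual COMBO toolkit.
public sealed class ConstrainedBandit
{
    private readonly Random _rng = new Random();
    private readonly Dictionary<string, (double rewardSum, int pulls)> _stats = new();
    private readonly List<string> _arms;
    private readonly double _epsilon;

    // Example: new ConstrainedBandit(new[] { "dark", "light" }, new[] { 12, 16 },
    //                                (c, s) => !(c == "dark" && s < 14));
    public ConstrainedBandit(
        IEnumerable<string> buttonColors,
        IEnumerable<int> fontSizes,
        Func<string, int, bool> constraint,   // e.g. forbid small fonts on dark colors
        double epsilon = 0.1)
    {
        _epsilon = epsilon;
        _arms = (from c in buttonColors
                 from s in fontSizes
                 where constraint(c, s)        // keep only combinations the constraint allows
                 select $"{c}/{s}").ToList();
        foreach (var arm in _arms) _stats[arm] = (0.0, 0);
    }

    // Choose the next variant to show to a user.
    public string SelectArm()
    {
        if (_rng.NextDouble() < _epsilon)
            return _arms[_rng.Next(_arms.Count)];              // explore
        return _arms.OrderByDescending(a =>
            _stats[a].pulls == 0 ? double.MaxValue
                                 : _stats[a].rewardSum / _stats[a].pulls).First();  // exploit
    }

    // Feed back the observed reward (e.g. 1 = conversion, 0 = no conversion).
    public void Update(string arm, double reward)
    {
        var (sum, pulls) = _stats[arm];
        _stats[arm] = (sum + reward, pulls + 1);
    }
}
```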

    Reactive Microservices - An Experiment

    Microservices are generally adopted when the scalability and flexibility of an application are essential to its success. Despite this, dependencies between services that communicate over synchronous protocols mean that a single failure can affect multiple microservices. Adopting responsiveness in a microservices-based architecture, through reactivity, can help contain and minimize the propagation of errors between services and in the communication between them by prioritizing the responsiveness and resilience of a service. This dissertation provides an overview of the state of the art of reactive microservices, structured through a systematic mapping process, in which their most important quality attributes, common pitfalls, the metrics best suited to their evaluation, and the most relevant frameworks are analysed. With the gathered information, the value of this work is presented: the project and the framework to use are chosen through the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and the Analytic Hierarchy Process (AHP), respectively. Then, the analysis and design of the solution are devised for the respective project, highlighting the architectural changes necessary to convert it into a reactive microservices project.
Next, the solution implementation is described, starting with the project setup needed to speed up the development process, followed by the key implementation details employed to ensure reactivity and by how the framework streamlines their implementation, and finalized by the setup of metrics tools in the project to support the testing and evaluation of the solution. The solution validation is then planned and executed based on the Goal, Question, Metric (GQM) approach to structure its analysis regarding maintainability, scalability, performance, testability, availability, monitorability, and security. The work closes with a conclusion of the overall effort, where the contributions, threats to validity, and possible future work are listed.
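    The dissertation selects a specific reactive framework via AHP, which is not shown here. Purely as an illustration of the non-blocking, fail-fast style it discusses, the following C# sketch (with a placeholder service URL) calls a downstream service asynchronously with a timeout and falls back to a cached value instead of letting the failure cascade:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustrative only: a non-blocking call with a timeout and a fallback,
// so a slow or failing downstream service does not cascade failures upstream.
public sealed class CatalogClient
{
    private readonly HttpClient _http = new HttpClient();
    private string _lastKnownGoodResponse = "[]";   // stale-but-available fallback

    public async Task<string> GetProductsAsync(CancellationToken ct = default)
    {
        using var timeout = CancellationTokenSource.CreateLinkedTokenSource(ct);
        timeout.CancelAfter(TimeSpan.FromMilliseconds(500));   // fail fast instead of blocking
        try
        {
            // "catalog-service" is a placeholder host name.
            var response = await _http.GetStringAsync(
                "http://catalog-service/api/products", timeout.Token);
            _lastKnownGoodResponse = response;
            return response;
        }
        catch (Exception ex) when (ex is OperationCanceledException or HttpRequestException)
        {
            // Degrade gracefully: stay responsive by returning the last cached result.
            return _lastKnownGoodResponse;
        }
    }
}
```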

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to continual change rather than being complete once they have been built. Changes may be caused by modified customer behavior, the shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise. New architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized during runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is provided that aims at ease of use by separating the role of the workflow expert from that of the domain expert. For the phase after workflow development, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the necessary middleware tools are deployed automatically. The use of a distributed infrastructure can lead to communication problems. To keep workflows robust, these exceptional cases need to be treated; doing so, however, mixes the process logic of a workflow with infrastructural details and bloats it, which increases its complexity. In this work, a module is presented that deals automatically with infrastructural faults and thereby preserves the separation of these two layers. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior or structure, but these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without having its source. Since the functionality needs to be woven into the code, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide new, SOA-compliant methods to weave functionality into the communication layer of web services. The main contributions of this thesis are the following. Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework allow existing software to be wrapped, e.g., as web services, which afterwards can be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are instead handled by a workflow expert. Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and is able to automatically provision new machines in case a scale-out becomes necessary. If the resources' load drops, e.g., because of fewer workflow executions, a scale-in is also performed automatically. The scheduling algorithm takes the data transfer between the services into account in order to prevent allocations that would increase the workflow's makespan due to unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm is able to additionally consider cost, so that a user can define her own preferences, trading off optimized workflow execution times against minimized costs. Possible communication errors are automatically detected and, subject to certain constraints, corrected. Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services. Because the pointcut language relies only on the exchanged documents, the implementation of services must neither be known nor be available. The weaving process itself is modeled using web services. In this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
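    The thesis's scheduling is tied to its BPEL engine and is not reproduced here. Purely as a sketch of the multi-objective idea it describes (a user-weighted trade-off between a workflow's makespan, including data-transfer time, and its cost), the following C# fitness evaluation uses invented types and numbers and ignores task dependencies:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative fitness evaluation for a genetic scheduling algorithm: a candidate
// maps each workflow task to a machine, and its fitness is a user-weighted
// combination of makespan (incl. data-transfer time) and cost.
// Task dependencies are omitted for brevity.
public static class ScheduleFitness
{
    public record WorkflowTask(string Name, double ComputeSeconds, double InputMegabytes);
    public record Machine(string Name, double PricePerHour, double BandwidthMbPerSec);

    public static double Evaluate(
        IReadOnlyDictionary<WorkflowTask, Machine> assignment,
        double timeWeight,      // user preference: 1.0 = only execution time matters
        double costWeight)      // user preference: 1.0 = only cost matters
    {
        // Per-machine finish time: compute time plus time to transfer the input data.
        var finishTimes = assignment
            .GroupBy(kv => kv.Value)
            .Select(g => g.Sum(kv =>
                kv.Key.ComputeSeconds +
                kv.Key.InputMegabytes * 8.0 / g.Key.BandwidthMbPerSec));
        double makespan = finishTimes.Max();

        double cost = assignment
            .GroupBy(kv => kv.Value)
            .Sum(g => g.Sum(kv => kv.Key.ComputeSeconds) / 3600.0 * g.Key.PricePerHour);

        // Lower is better; the genetic algorithm would minimize this weighted sum.
        return timeWeight * makespan + costWeight * cost;
    }
}
```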

    Design-time performance analysis of component-based real-time systems

    In current real-time systems, performance metrics are among the most challenging properties to specify, predict, and measure. Performance properties depend on various factors, like environmental context, load profile, middleware, operating system, hardware platform, and the sharing of internal resources. Performance failures and unsatisfied performance requirements cause delays, cost overruns, and even abandonment of projects. In order to avoid these performance-related project failures, the performance properties should be obtained and analyzed already at the early design phase of a project. In this thesis we employ principles of component-based software engineering (CBSE), which enable building software systems from individual components. The advantage of CBSE is that individual components can be modeled, reused, and traded. The main objective of this thesis is to develop a method that enables the performance properties of a system to be predicted from the performance properties of the involved individual components. The prediction method serves rapid prototyping and performance analysis of the architecture or related alternatives, without performing the usual testing and implementation stages. The involved research questions are as follows. How should the behaviour and performance properties of individual components be specified in order to enable automated composition of these properties into an analyzable model of a complete system? How can the models of individual components be synthesized into a model of a complete system in an automated way, such that the resulting system model can be analyzed against the performance properties? The thesis presents a new framework called DeepCompass, which realizes the concept of predictable assembly throughout all phases of the system design. The cornerstones of the framework are the composable models of individual software components and hardware blocks. The models are specified at component development time and shipped in a component package. At the component composition phase, the models of the constituent components are synthesized into an executable system model. Since the thesis focuses on performance properties, we introduce performance-related types of component models, such as behaviour, performance, and resource models. The dynamics of the system execution are captured in scenario models. The essential advantage of the introduced models is that, through the behaviour of individual components and the scenario models, the behaviour of the complete system is synthesized in the executable system model. Further simulation-based analysis of the obtained executable system model provides application-specific and system-specific performance property values. To support the performance analysis, we have developed the CARAT software toolkit, which provides and automates the algorithms for model synthesis and simulation. Besides this, the toolkit provides graphical tools for designing alternative architectures and for visualizing the obtained performance properties. We have conducted an empirical case study on the use of scenarios in industry to analyze system performance at the early design phase. It was found that industrial architects make extensive use of scenarios for performance evaluation. Based on the inputs of the architects, we have provided a set of guidelines for the identification and use of performance-critical scenarios. At the end of this thesis, we have validated the DeepCompass framework by performing three case studies on performance prediction of real-time systems: an MPEG-4 video decoder, a Car Radio Navigation system, and a JPEG application. For each case study, we have constructed models of the individual components, defined the SW/HW architecture, and used the CARAT toolkit to synthesize and simulate the executable system model. The simulation provided the predicted performance properties, which we later compared with the actual performance properties of the realized systems. With respect to resource usage properties and average task latencies, the prediction error was within 30% of the actual performance. Concerning the peak loads on the processor nodes, the actual values were sometimes three times larger than the predicted values. In conclusion, the framework has proven to be effective in rapid architecture prototyping and performance analysis of a complete system: in the case studies we spent no more than 4-5 days on average for the complete iteration cycle, including the design of several architecture alternatives. The framework can handle different architectural styles, which makes it widely applicable. A conceptual limitation of the framework is that it assumes that the models of individual components are already available at the design phase.
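    DeepCompass and CARAT themselves are not quoted here, so the following toy C# calculation is only an illustration of the kind of composition described above: per-component models (with invented fields) are combined along a scenario into a predicted end-to-end latency and processor utilisation, which can then be compared against measurements to obtain the prediction error reported above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy illustration of composing component performance models along a scenario:
// each step names a component and how often it is invoked; the system-level
// latency and CPU demand are derived from the per-component models.
public static class ScenarioComposer
{
    public record ComponentModel(string Name, double LatencyMs, double CpuMsPerCall);
    public record ScenarioStep(string Component, int Invocations);

    public static (double latencyMs, double cpuUtilisation) Predict(
        IReadOnlyDictionary<string, ComponentModel> components,
        IEnumerable<ScenarioStep> scenario,
        double scenarioPeriodMs)          // how often the scenario is triggered
    {
        var steps = scenario.ToList();
        double latency = steps.Sum(s => components[s.Component].LatencyMs * s.Invocations);
        double cpuMs   = steps.Sum(s => components[s.Component].CpuMsPerCall * s.Invocations);
        return (latency, cpuMs / scenarioPeriodMs);   // utilisation of a single processor
    }

    // Relative prediction error against a measurement of the realized system.
    public static double PredictionError(double predicted, double measured)
        => Math.Abs(predicted - measured) / measured;
}
```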

    Automating System-Level Data-Interchange Software Through a System Interface Description Language

    Today's platforms, such as full mission simulators (FMSs), exhibit an unprecedented level of hardware and software system integration. In this context, system integrators face heterogeneous system interfaces which need to be aligned and interconnected together in order to deliver a platform's intended capabilities. The system data-exchange aspect alone is problematic, ranging from data misalignment up to multi-architecture environments spanning varying kinds of communication protocols. Similar challenges are also faced by integrators when interoperating multiple platforms through distributed simulation environments, where each platform can be seen as a system with its own distinct interface. On the other hand, enabling system reuse across multiple platforms for product-line support is challenging for system suppliers, as they need to adapt system interfaces to heterogeneous platforms, therefore facing similar challenges as integrators. Furthermore, the introduction of system interface changes in order to respond to late business needs, or to unforeseen performance constraints for instance, is even more arduous, as impacts are challenging to predict and their effects are often found late in the integration process. Consequently, this thesis tackles the need to simplify system integration and interoperability in order to reduce their associated costs and increase their effectiveness along with their efficiency. It is meant to bring new advances in the fields of system integration and system interoperability, notably by establishing a common taxonomy and by increasing the understanding of system interfaces, the various aspects impacting system data exchanges, multi-architecture environment considerations, and the factors enabling interface governance as well as system reuse. To this end, two research objectives have been formulated. The first objective aims at defining a language used to describe system interfaces and the various aspects surrounding their data exchanges.
Therefore, three key aspects are studied relating to system interfaces: the relevant language elements used to describe them, the modeling of system interfaces with the language, and the capture of multi-architecture considerations. The second objective aims at defining a method to automate the software responsible for system data exchanges as a way of simplifying the tasks involved in system integration and interoperability. Therefore, model compilers and code generation techniques are studied. The demonstration of these objectives brings new advances in the state of the art of system integration and system interoperability. Notably, this culminates in a novel system interface description language, SIDL, used to capture system interfaces and the various aspects surrounding their data exchanges, as well as in a new method for automating the system-level data-interchange software from system interfaces captured in this language. The advent of SIDL also contributes a new taxonomy providing a comprehensive perspective on system interoperability, as well as a common language which can be shared amongst stakeholders such as integrators, suppliers, and system experts. Being architecture-agnostic, SIDL provides a single architectural viewpoint overseeing all system interfaces and captures multi-architecture considerations, which had never been achieved prior to this work. Furthermore, a SIDL code generator is introduced which has the novelty of generating the data-interchange software from a richer pool of information, notably from the high-level system relationships down to the low-level protocol and encoding details. Because multi-architecture considerations are captured natively in SIDL, the code generator can be architecture-agnostic, making it reusable in other contexts. This thesis also paves the way for future research building upon its contributions. It even proposes a vision for software application development, with the end goal being to push further the boundaries of simplifying and automating the tasks involved in system integration and interoperability.
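    SIDL's actual syntax and generator are defined in the thesis and are not reproduced here. As a deliberately simplified sketch of the general code-generation idea, the following C# turns an in-memory field list (a stand-in for a parsed interface description) into the source of a message record and a fixed-order binary writer; all names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

// Deliberately simplified stand-in for an interface-description-driven generator:
// from a field list (imagine it was parsed from a SIDL-like document) it emits
// the C# source of a message record and a binary writer with a fixed field order.
public static class InterchangeCodeGen
{
    public record Field(string Name, string ClrType);   // e.g. ("Altitude", "double")

    public static string Generate(string messageName, IReadOnlyList<Field> fields)
    {
        var sb = new StringBuilder();
        sb.AppendLine("using System.IO;");
        sb.AppendLine();
        sb.Append($"public record {messageName}(")
          .Append(string.Join(", ", fields.Select(f => $"{f.ClrType} {f.Name}")))
          .AppendLine(");");
        sb.AppendLine();
        sb.AppendLine($"public static class {messageName}Codec");
        sb.AppendLine("{");
        sb.AppendLine($"    public static void Write(BinaryWriter w, {messageName} m)");
        sb.AppendLine("    {");
        foreach (var f in fields)                         // fixed field order = wire layout
            sb.AppendLine($"        w.Write(m.{f.Name});");
        sb.AppendLine("    }");
        sb.AppendLine("}");
        return sb.ToString();
    }
}

// Example usage (hypothetical message definition):
// InterchangeCodeGen.Generate("AircraftState",
//     new[] { new InterchangeCodeGen.Field("Altitude", "double"),
//             new InterchangeCodeGen.Field("Heading", "float") });
```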

    On Experimentation in Software-Intensive Systems

    Context: Delivering software that has value to customers is a primary concern of every software company. Prevalent in web-facing companies, controlled experiments are used to validate and deliver value in incremental deployments. At the same time that web-facing companies are aiming to automate and reduce the cost of each experiment iteration, embedded systems companies are starting to adopt experimentation practices and to build on the automation developments made in the online domain. Objective: This thesis has two main objectives. The first objective is to analyze how software companies can run and optimize their systems through automated experiments. This objective is investigated from the perspectives of the software architecture, the algorithms for experiment execution, and the experimentation process. The second objective is to analyze how non-web-facing companies can adopt experimentation as part of their development process to validate and deliver value to their customers continuously. This objective is investigated from the perspective of the software development process and focuses on the experimentation aspects that are distinct from web-facing companies. Method: To achieve these objectives, we conducted research in close collaboration with industry and used a combination of different empirical research methods: case studies, literature reviews, simulations, and empirical evaluations. Results: This thesis provides six main results. First, it proposes an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, it proposes a new experimentation process to capture the details of a trustworthy experimentation process that can be used as the basis for an automated experimentation process. Third, it identifies the restrictions and pitfalls of different multi-armed bandit algorithms for automating experiments in industry. This thesis also proposes a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fourth, it proposes statistical models to analyze optimization algorithms that can be used in automated experimentation. Fifth, it identifies the key challenges faced by embedded systems companies when adopting controlled experimentation, and it proposes a set of strategies to address these challenges. Sixth, it identifies experimentation techniques and proposes a new continuous experimentation model for mission-critical and business-to-business contexts. Conclusion: The results presented in this thesis indicate that trustworthiness in the experimentation process and the selection of algorithms still need to be addressed before automated experimentation can be used at scale in industry. The embedded systems industry faces challenges in adopting experimentation as part of its development process, in part due to the low number of users and devices that can be used in experiments and the diversity of the experimental designs required for each new situation. This limitation increases both the complexity of the experimentation process and the number of techniques used to address this constraint.
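    As a small, generic illustration of the trustworthy-analysis step such an automated experimentation pipeline needs (this is standard statistics, not code from the thesis), the following C# computes a two-proportion z-test over conversion counts from a control and a treatment group:

```csharp
using System;

// Generic two-proportion z-test, the sort of check an automated experimentation
// pipeline runs before declaring a treatment variant the winner.
public static class AbTest
{
    // Returns the z statistic and an approximate two-sided p-value.
    public static (double z, double pValue) TwoProportionZTest(
        int controlConversions, int controlUsers,
        int treatmentConversions, int treatmentUsers)
    {
        double p1 = (double)controlConversions / controlUsers;
        double p2 = (double)treatmentConversions / treatmentUsers;
        double pooled = (double)(controlConversions + treatmentConversions)
                        / (controlUsers + treatmentUsers);
        double se = Math.Sqrt(pooled * (1 - pooled)
                              * (1.0 / controlUsers + 1.0 / treatmentUsers));
        double z = (p2 - p1) / se;
        return (z, 2 * (1 - StandardNormalCdf(Math.Abs(z))));
    }

    // Abramowitz-Stegun style approximation of the standard normal CDF via erf.
    private static double StandardNormalCdf(double x)
    {
        double t = 1.0 / (1.0 + 0.3275911 * Math.Abs(x) / Math.Sqrt(2));
        double erf = 1 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
                          - 0.284496736) * t + 0.254829592) * t
                          * Math.Exp(-x * x / 2);
        return x >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
    }
}
```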

    Code smells detection and visualization: A systematic literature review

    Context: Code smells (CS) tend to compromise software quality and also demand more effort from developers to maintain and evolve the application throughout its life-cycle. They have long been catalogued together with corresponding mitigating solutions called refactoring operations. Objective: This SLR has a twofold goal: the first is to identify the main code smells detection techniques and tools discussed in the literature, and the second is to analyze to which extent visual techniques have been applied to support the former. Method: Over 83 primary studies indexed in major scientific repositories were identified by our search string in this SLR. Then, following existing best practices for secondary studies, we applied inclusion/exclusion criteria to select the most relevant works, extract their features, and classify them. Results: We found that the most commonly used approaches to code smells detection are search-based (30.1%) and metric-based (24.1%). Most of the studies (83.1%) use open-source software, with the Java language occupying the first position (77.1%). In terms of code smells, God Class (51.8%), Feature Envy (33.7%), and Long Method (26.5%) are the most covered ones. Machine learning techniques are used in 35% of the studies. Around 80% of the studies only detect code smells, without providing visualization techniques. Visualization-based approaches use several methods, such as city metaphors and 3D visualization techniques. Conclusions: We confirm that the detection of CS is a non-trivial task, and there is still a lot of work to be done in terms of: reducing the subjectivity associated with the definition and detection of CS; increasing the diversity of detected CS and of supported programming languages; and constructing and sharing oracles and datasets to facilitate the replication of CS detection and of visualization technique validation experiments.
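    Purely as a sketch of what the surveyed metric-based detectors do, the following C# flags God Class and Long Method candidates from pre-computed metrics; the thresholds are common heuristics from the literature, not values taken from this review:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a metric-based detector: class- and method-level metrics are assumed
// to have been computed by some analysis front end; detection is threshold-based.
public static class SmellDetector
{
    public record ClassMetrics(string Name, int Wmc, int Atfd, double Tcc);
    public record MethodMetrics(string Name, int LinesOfCode, int CyclomaticComplexity);

    public static IEnumerable<string> Detect(
        IEnumerable<ClassMetrics> classes, IEnumerable<MethodMetrics> methods)
    {
        foreach (var c in classes)
            // God Class heuristic: complex, accesses many foreign data, low cohesion.
            if (c.Wmc >= 47 && c.Atfd > 5 && c.Tcc < 0.33)
                yield return $"God Class: {c.Name}";

        foreach (var m in methods)
            // Long Method heuristic: too long or too complex.
            if (m.LinesOfCode > 50 || m.CyclomaticComplexity > 10)
                yield return $"Long Method: {m.Name}";
    }
}
```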