
    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results with similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
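    As a rough illustration of the kind of heuristic such a tool can apply (not the paper's actual rules), the sketch below flags data classes and god classes from a few per-class metrics; the metric names and thresholds are assumptions chosen for the example, not values taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class ClassMetrics:
            """Per-class metrics assumed to be extracted from the source code."""
            methods: int            # number of declared methods
            accessors: int          # getters/setters among those methods
            wmc: int                # weighted methods per class (sum of method complexities)
            foreign_data_used: int  # attributes accessed on instances of other classes

        def is_data_class(m: ClassMetrics) -> bool:
            # A class that is mostly accessors and carries little behaviour of its own.
            return m.methods > 0 and m.accessors / m.methods >= 0.7 and m.wmc <= 1.5 * m.methods

        def is_god_class(m: ClassMetrics) -> bool:
            # A large, complex class that reaches into many other classes' data.
            return m.wmc >= 47 and m.foreign_data_used >= 5

        # An anaemic data holder vs. a domineering controller class (illustrative numbers).
        print(is_data_class(ClassMetrics(methods=10, accessors=9, wmc=11, foreign_data_used=0)))   # True
        print(is_god_class(ClassMetrics(methods=40, accessors=4, wmc=120, foreign_data_used=12)))  # True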

    Towards the Automation of Migration and Safety of Third-Party Libraries

    The process of migrating from one library to a new, different library is very complex. Typically, the developer needs to find functions in the new library that are most adequate replacements for the functions of the retired library. This process is subjective and time-consuming, as the developer needs to fully understand the documentation of both libraries in order to migrate from the old library to the new one and to find the right matching function(s), if they exist. Our goal is to help developers have a better experience with library migration by identifying the key problems related to this process. Based on our critical literature review, we identified three main challenges related to the automation of library migration: (1) mining existing migrations, (2) learning from these migrations to recommend them in similar contexts, and (3) guaranteeing the safety of the recommended migrations.
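    To make the first challenge concrete, the toy sketch below mines candidate function mappings between a retired library and its replacement by comparing their documentation text with a plain token-overlap score; the similarity measure and the library and function names are illustrative assumptions, not the authors' technique.

        def tokenize(doc: str) -> set:
            return {t.lower().strip(".,()") for t in doc.split() if t}

        def candidate_mappings(retired: dict, replacement: dict, top_k: int = 3) -> dict:
            """For each function of the retired library, rank the replacement library's
            functions by Jaccard overlap of their documentation tokens."""
            ranking = {}
            for old_name, old_doc in retired.items():
                old_tokens = tokenize(old_doc)
                scored = []
                for new_name, new_doc in replacement.items():
                    new_tokens = tokenize(new_doc)
                    union = old_tokens | new_tokens
                    score = len(old_tokens & new_tokens) / len(union) if union else 0.0
                    scored.append((score, new_name))
                ranking[old_name] = [name for _, name in sorted(scored, reverse=True)[:top_k]]
            return ranking

        # Hypothetical documentation snippets for a retired and a replacement HTTP library.
        old_lib = {"openConnection": "open an http connection to the given url"}
        new_lib = {"connect": "establish an http connection to a url",
                   "close": "close the underlying socket"}
        print(candidate_mappings(old_lib, new_lib))  # {'openConnection': ['connect', 'close']}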

    Guidelines for Testing Microservice-based Applications

    There is a trend in software development to adopt a microservice-based architecture. Despite several benefits, such as increased modularization, scalability and maintainability, this approach brings other challenges to the table. When applying this architectural pattern, the testing strategy needs to be adapted. A microservice-based application is inherently distributed and presupposes that the various services that compose the system communicate with each other, across network boundaries, to fulfil business requirements. Testing a microservice by itself is easier, as it is naturally isolated from the rest of the system, but integration testing becomes more challenging. Microservices also offer several options about where and what to test. This work focuses on studying, comparing, and systematizing current solutions and approaches for testing microservice-based systems, and on proposing a set of universal guidelines, methods, and best practices to facilitate the testing of microservice-based applications, helping organizations produce more valuable, higher-quality tests at lower cost.
    To understand the problems and challenges presented by microservice testing, a proof-of-concept (PoC) project using a microservice-based architecture was planned and designed, and tests for some of its use cases were explored. Furthermore, a set of indicators intended to measure the quality and value of a testing strategy was proposed, describing for each indicator where it can be collected, a rationale explaining its purpose, and a measurement scale. This work concludes that, although many testing approaches and frameworks exist that can help organizations test their applications correctly, the right mindset is needed to achieve a quality testing strategy. To that end, this work proposes a set of guidelines and best practices that promote the right mindset for designing and implementing tests at all layers of the system. It also proposes steps for defining and decomposing test scenarios, and solutions for the various testing types studied. As such, this work can also serve as a knowledge base on microservice testing and help accelerate its adoption.
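    As a small illustration of one of the guidelines discussed (testing a microservice in isolation by replacing its remote collaborators with test doubles), the sketch below uses Python's unittest.mock; the OrderService and its inventory collaborator are hypothetical names invented for the example.

        import unittest
        from unittest.mock import Mock

        class OrderService:
            """Toy service that depends on an inventory service reached over the network."""
            def __init__(self, inventory_client):
                self.inventory = inventory_client

            def place_order(self, item: str, qty: int) -> str:
                if self.inventory.available(item) < qty:   # remote call in production
                    return "rejected"
                self.inventory.reserve(item, qty)
                return "accepted"

        class OrderServiceSolitaryTest(unittest.TestCase):
            """Solitary (unit) test: the collaborator is a test double, so only this
            service's business rule is exercised; network integration is tested separately."""
            def test_rejects_order_when_stock_is_insufficient(self):
                inventory = Mock()
                inventory.available.return_value = 1
                self.assertEqual(OrderService(inventory).place_order("valve", 5), "rejected")
                inventory.reserve.assert_not_called()

        if __name__ == "__main__":
            unittest.main()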

    StubCoder: Automated Generation and Repair of Stub Code for Mock Objects

    Mocking is an essential unit testing technique for isolating the class under test (CUT) from its dependencies. Developers often leverage mocking frameworks to develop stub code that specifies the behaviors of mock objects. However, developing and maintaining stub code is labor-intensive and error-prone. In this paper, we present StubCoder to automatically generate and repair stub code for regression testing. StubCoder implements a novel evolutionary algorithm that synthesizes test-passing stub code guided by the runtime behavior of test cases. We evaluated our proposed approach on 59 test cases from 13 open-source projects. Our evaluation results show that StubCoder can effectively generate stub code for incomplete test cases without stub code and repair obsolete test cases with broken stub code.
    Comment: This paper was accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) in July 202
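    StubCoder targets JVM mocking frameworks, but the notion of stub code can be illustrated with Python's unittest.mock: the stubbed return value in the hypothetical example below is exactly the kind of test-passing specification that such a tool aims to synthesize or repair automatically, guided by how the test exercises the class under test at run time.

        from unittest.mock import Mock

        class ProfileService:
            """Class under test (CUT): formats a display name fetched from a repository."""
            def __init__(self, repo):
                self.repo = repo

            def display_name(self, user_id: int) -> str:
                user = self.repo.find(user_id)
                return f"{user['first']} {user['last']}"

        def test_display_name():
            repo = Mock()
            # Stub code: specifies the behaviour the mock object must exhibit for the
            # test to pass; without it the CUT would fail on the mock's default return.
            repo.find.return_value = {"first": "Ada", "last": "Lovelace"}
            assert ProfileService(repo).display_name(7) == "Ada Lovelace"
            repo.find.assert_called_once_with(7)

        test_display_name()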

    Automatically Documenting Software Artifacts

    Software artifacts, such as database schemas and unit test cases, constantly change during the evolution and maintenance of software systems. Co-evolution of code and DB schemas in Database-Centric Applications (DCAs) often leads to two types of challenging scenarios for developers, where (i) changes to the DB schema need to be incorporated in the source code, and (ii) maintenance of a DCA's code requires understanding how the features are implemented by relying on DB operations and corresponding schema constraints. On the other hand, the number of unit test cases often grows as new functionality is introduced into the system, and maintaining these unit tests is important to reduce the introduction of regression bugs due to outdated unit tests. Therefore, one critical artifact that developers need to be able to maintain during the evolution and maintenance of software systems is up-to-date and complete documentation. In order to understand developer practices regarding documenting and maintaining these software artifacts, we designed two empirical studies, both composed of (i) an online survey of contributors of open source projects and (ii) a mining-based analysis of method comments in these projects. We observed that documenting methods with database accesses and unit test cases is not a common practice. Further, motivated by the findings of the studies, we proposed three novel approaches: (i) DBScribe, an approach for automatically documenting database usages and schema constraints, (ii) UnitTestScribe, an approach for automatically documenting test cases, and (iii) TeStereo, which tags stereotypes for unit tests and generates HTML reports to improve the comprehension and browsing of unit tests in a large test suite. We evaluated our tools in case studies with industrial developers and graduate students. In general, developers indicated that the descriptions generated by the tools are complete, concise, and easy to read. The reports are useful for source code comprehension tasks as well as other tasks, such as code smell detection and source code navigation.
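    The flavour of the summaries such a tool produces can be sketched with a naive regular-expression scan for table references in a method body; this is only an illustration of the output under simplifying assumptions, whereas DBScribe itself analyses the code and schema constraints rather than pattern-matching strings.

        import re

        # Table references: FROM/JOIN are treated as reads, INTO/UPDATE as writes (simplified).
        TABLE_REF = re.compile(r"\b(FROM|JOIN|INTO|UPDATE)\s+([A-Za-z_]\w*)", re.I)

        def describe_db_usage(method_name: str, source: str) -> str:
            """Produce a one-line natural-language summary of the tables a method touches."""
            reads, writes = set(), set()
            for keyword, table in TABLE_REF.findall(source):
                (reads if keyword.upper() in ("FROM", "JOIN") else writes).add(table)
            if not reads and not writes:
                return f"{method_name}: no database access detected."
            parts = []
            if reads:
                parts.append("reads " + ", ".join(sorted(reads)))
            if writes:
                parts.append("writes " + ", ".join(sorted(writes)))
            return f"{method_name}: " + "; ".join(parts) + "."

        source = 'stmt.execute("SELECT name FROM users"); stmt.execute("UPDATE orders SET paid = 1")'
        print(describe_db_usage("markPaid", source))  # markPaid: reads users; writes orders.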

    Automatically generating complex test cases from simple ones

    While source code expresses and implements design considerations for a software system, test cases capture and represent the domain knowledge of the software developer, her assumptions on the implicit and explicit interaction protocols in the system, and the expected behavior of different modules of the system in normal and exceptional conditions. Moreover, test cases capture information about the environment and the data the system operates on. As such, together with the system source code, test cases integrate important system and domain knowledge. Besides being an important project artifact, test cases embody up to half of the overall software development cost and effort. Software projects produce many test cases of different kinds and granularities to thoroughly check the system functionality, aiming to prevent, detect, and remove different types of faults. Simple test cases exercise small parts of the system, aiming to detect faults in single modules. More complex integration and system test cases exercise larger parts of the system, aiming to detect problems in module interactions and to verify the functionality of the system as a whole. Not surprisingly, this test case complexity comes at a cost: developing complex test cases is a laborious and expensive task that is hard to automate. Our intuition is that important information that is naturally present in test cases can be reused to reduce the effort of generating new test cases. This thesis develops this intuition and investigates the phenomenon of information reuse among test cases. We first empirically investigated many test cases from real software projects and demonstrated that test cases of different granularity indeed share code fragments and build upon each other. We then proposed an approach for automatically generating complex test cases by extracting and exploiting information in existing simple ones. In particular, our approach automatically generates integration test cases from unit ones. We implemented our approach in a prototype to evaluate its ability to generate new and useful test cases for real software systems. Our studies show that test cases generated with our approach reveal new interaction faults even in well-tested applications. We evaluated the effectiveness of our approach by comparing it with state-of-the-art test generation techniques. The evaluation results show that our approach is effective: it finds relevant faults, and it finds them differently from other approaches, which tend to find different and usually less relevant faults.
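    The reuse idea can be pictured with a toy example: two simple unit tests each construct one module, and the kind of integration test the approach aims to generate recombines those construction fragments and adds an assertion over the modules' interaction. The Cart and Pricing classes below are hypothetical stand-ins for real project modules.

        class Cart:
            def __init__(self):
                self.items = {}
            def add(self, name, qty):
                self.items[name] = self.items.get(name, 0) + qty
            def count(self, name):
                return self.items.get(name, 0)

        class Pricing:
            def __init__(self, prices):
                self.prices = prices
            def price(self, name):
                return self.prices[name]
            def total(self, cart):
                return sum(self.price(n) * q for n, q in cart.items.items())

        # Two existing simple tests, each exercising one module in isolation.
        def test_cart_add():
            cart = Cart(); cart.add("book", 2)
            assert cart.count("book") == 2

        def test_pricing_price():
            pricing = Pricing({"book": 10.0})
            assert pricing.price("book") == 10.0

        # A generated integration test: it reuses the object-construction fragments of the
        # two simple tests and adds an assertion that exercises the modules' interaction.
        def test_cart_total_integration():
            cart = Cart(); cart.add("book", 2)          # fragment from test_cart_add
            pricing = Pricing({"book": 10.0})           # fragment from test_pricing_price
            assert pricing.total(cart) == 20.0

        test_cart_add(); test_pricing_price(); test_cart_total_integration()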

    Automating Software Development for Mobile Computing Platforms

    Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional desktop systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high quality mobile apps can be a difficult process. This is primarily due to challenges unique to mobile development, including change-prone APIs and platform fragmentation, to name a few. In this dissertation we develop techniques that aid developers in overcoming these challenges by automating and improving current software design and testing practices for mobile apps. More specifically, we first introduce a technique, called Gvt, that improves the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. Gvt does this by constructing hierarchical models of mobile GUIs from metadata associated with both graphical mock-ups (i.e., created by designers using photo-editing software) and running instances of the GUI from the corresponding implementation. Second, we develop an approach that completely automates the prototyping of GUIs for mobile apps. This approach, called ReDraw, is able to transform an image of a mobile app GUI into runnable code by detecting discrete GUI components using computer vision techniques, classifying these components into proper functional categories (e.g., button, dropdown menu) using a Convolutional Neural Network (CNN), and assembling these components into realistic code. Finally, we design a novel approach for automated testing of mobile apps, called CrashScope, that explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that create a model of an app's GUI and target common, empirically derived root causes of crashes in Android apps. We illustrate that the techniques presented in this dissertation represent significant advancements in mobile development processes through a series of empirical investigations, user studies, and industrial case studies that demonstrate the effectiveness of these approaches and the benefit they provide to developers.
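    As a heavily simplified, hypothetical sketch of the Gvt-style check, the code below walks a mock-up widget hierarchy and the implemented one in lock-step and reports type and layout deviations; the widget model and the pixel tolerance are assumptions made for the example.

        from dataclasses import dataclass, field

        @dataclass
        class Widget:
            """Simplified GUI node: component type plus bounding box (x, y, width, height)."""
            kind: str
            bounds: tuple
            children: list = field(default_factory=list)

        def report_violations(mockup: Widget, impl: Widget, tolerance: int = 4, path: str = "root"):
            """Compare the design mock-up hierarchy against the implemented GUI hierarchy."""
            issues = []
            if mockup.kind != impl.kind:
                issues.append(f"{path}: expected {mockup.kind}, got {impl.kind}")
            if any(abs(a - b) > tolerance for a, b in zip(mockup.bounds, impl.bounds)):
                issues.append(f"{path}: bounds {impl.bounds} deviate from mock-up {mockup.bounds}")
            for i, (m, r) in enumerate(zip(mockup.children, impl.children)):
                issues.extend(report_violations(m, r, tolerance, f"{path}/{m.kind}[{i}]"))
            return issues

        mock = Widget("Button", (10, 10, 100, 40))
        real = Widget("Button", (10, 30, 100, 40))   # shifted 20 px down in the implementation
        print(report_violations(mock, real))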

    A cloud, microservice-based digital twin for the Oil industry: a flow assurance case study

    Digital Twins are one of the top recent technology trends, considered a front-runner in enabling the next generation of intelligent, digitalized industries. A Digital Twin is basically a real enterprise asset mirrored into the virtual world, created entirely in software, and utilized to innovate business, generate new revenue, and create value-producing opportunities. Naturally, building such demanding systems is not an easy task. Many organizational and technological concerns should be addressed, such as real-time processing, data management, cybersecurity, and low-latency communication. Moreover, Digital Twins should be extensible and support data integration with their physical counterpart, allowing enterprises to streamline their operations more safely. Considering that there are no standard methodologies or technologies for designing a Digital Twin, software developers struggle to develop a robust, reliable Digital Twin architecture that meets customers' needs.
    In consideration of these technical challenges, we decided to explore solutions in the oil and gas industry. Oil plants are known for having notoriously complex processes and an oppressive ecosystem when it comes to the adoption of new technologies, especially due to their intricate and ever-changing landscape. Despite these difficulties, this industry would certainly benefit from a well-designed Digital Twin. Hence, this work proposes an architecture for an Oil Well Digital Twin. We use the flow assurance process of an Oil Well as a base on which to build a Digital Twin, aiming to study and fulfil those requirements and technical limitations. We implemented and tested several APIs, each one representing the virtual version of a real Oil Well component, enforcing the importance of microservices and cloud-native patterns in its design. When functioning together, these microservices comprise an entire Oil Well Digital Twin solution, covering basic storage, communication, and monitoring fundamentals. This proof of concept demonstrates how a complex asset, like an Oil Well, can be built and organized in a completely digital manner. Lastly, this design is extensible and opens up further development opportunities.
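    A minimal sketch of one such virtual well component exposed as a standalone service, using only the Python standard library; the endpoints, component name and payload fields are illustrative assumptions rather than the APIs actually built in this work.

        import json
        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # In-memory state of one virtual well component; in a real twin this would be fed
        # by telemetry from the physical asset (names and values are illustrative).
        STATE = {"component": "choke-valve", "opening_pct": 35.0, "updated_at": time.time()}

        class TwinComponentHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Monitoring endpoint: report the component's mirrored state.
                if self.path == "/state":
                    body = json.dumps(STATE).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_response(404)
                    self.end_headers()

            def do_POST(self):
                # Data-integration endpoint: the physical counterpart pushes updates here.
                if self.path == "/state":
                    length = int(self.headers.get("Content-Length", 0))
                    STATE.update(json.loads(self.rfile.read(length) or b"{}"))
                    STATE["updated_at"] = time.time()
                    self.send_response(204)
                else:
                    self.send_response(404)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), TwinComponentHandler).serve_forever()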