273 research outputs found

    Behind the Intents: An In-depth Empirical Study on Software Refactoring in Modern Code Review

    Get PDF
    Code refactorings are of pivotal importance in modern code review. Developers may preserve, revisit, add, or undo refactorings through a change's revisions, with the goal of ensuring that the driving intent of a code change is properly achieved. Developers' intents behind refactorings may vary from pure structural improvement to facilitating feature additions and bug fixes. However, there is little understanding of the refactoring practices developers perform during the code review process, and it is unclear whether developers' intents influence the selection, composition, and evolution of refactorings during the review of a code change. By mining 1,780 reviewed code changes from 6 systems belonging to two large open-source communities, we report the first in-depth empirical study of software refactoring during code review. We inspected and classified the developers' intents behind each code change into 7 distinct categories. By analyzing data generated during the complete reviewing process, we observe (i) how refactorings are selected, composed, and evolved throughout each code change, and (ii) how developers' intents relate to these decisions. For instance, our analysis shows that developers regularly apply non-trivial sequences of refactorings that crosscut multiple code elements (i.e., widely scattered in the program) to support a single feature addition. Moreover, we observed that new developer intents commonly emerge during the code review process, influencing how developers select and compose their refactorings to achieve the new and adapted goals. Finally, we provide an enriched dataset that allows researchers to investigate the context and motivations behind refactoring operations during the code review process.

    Zsmell – Code Smell Detection for Open Source Software

    Get PDF
    Today, open-source software (OSS) is used in a wide range of applications. It plays a vital role in the information systems of many user groups, such as commerce, research, education, public health, and tourism. It is also a source of additional knowledge for collaborators, because this type of software is easily accessible through websites that provide version control services, such as GitHub. However, recent studies show an increasing trend in the prevalence of code smells, and in OSS a growing number of code smells cause software errors. A code smell in software is a serious issue, since it affects the software's deployment and maintenance as well as user confidence in the software. Finding code smells in the early stages of software development enables better software maintenance and reliability; we therefore developed the Zsmell system, which searches for code smells in source code hosted on GitHub. The system displays data about the code smells in each source code version modified by collaborators, so that developers can apply the proper refactoring, i.e., a change in the internal structure of the software that does not alter its original functionality. We believe this system will enable open-source collaborators to improve the quality of their OSS, especially by reducing code smells and by improving the understanding of the types of code smell commonly found in OSS projects.
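    The abstract does not describe Zsmell's detection rules, so as a hedged illustration of the general idea only, the sketch below flags a "Long Method" smell with a naive line-count heuristic over brace depth; the threshold, class name, and heuristic are assumptions, not Zsmell's implementation.

    import java.nio.file.*;
    import java.util.*;

    // Minimal metrics-based smell check in the spirit of (but not taken
    // from) Zsmell: flag any method body longer than a line threshold.
    public class LongMethodCheck {
        private static final int MAX_METHOD_LINES = 30;  // assumed threshold

        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Path.of(args[0]));
            int start = -1, depth = 0;
            for (int i = 0; i < lines.size(); i++) {
                // Naive heuristic: a '{' reaching depth 2 opens a method body
                // (depth 1 is the class body); braces in strings are ignored.
                for (char c : lines.get(i).toCharArray()) {
                    if (c == '{' && ++depth == 2) start = i;
                    if (c == '}' && --depth == 1 && start >= 0) {
                        int len = i - start + 1;
                        if (len > MAX_METHOD_LINES)
                            System.out.printf("Long Method: lines %d-%d (%d lines)%n",
                                              start + 1, i + 1, len);
                        start = -1;
                    }
                }
            }
        }
    }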

    Can we avoid high coupling?

    Get PDF
    It is considered good software design practice to organize source code into modules and to favour within-module connections (cohesion) over between-module connections (coupling), leading to the oft-repeated maxim "low coupling/high cohesion". Prior research into network theory and its application to software systems has found evidence that many important properties of real software systems, including coupling, exhibit approximately scale-free structure; researchers have claimed that such scale-free structures are ubiquitous. This implies that high coupling must be unavoidable, statistically speaking, apparently contradicting standard ideas about software structure. We present a model that leads to the simple prediction that approximately scale-free structures ought to arise for both between-module connectivity and overall connectivity, and not as the result of poor design or optimization shortcuts. These predictions are borne out by our large-scale empirical study. Hence we conclude that high coupling is not avoidable, and that this is in fact quite reasonable.
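    As a minimal sketch of what "approximately scale-free" means here, the code below tabulates the degree distribution of a toy module-dependency graph; if log(count) versus log(degree) falls on a roughly straight line, the distribution is consistent with the power law the paper discusses. The edge list is invented for illustration, not data from the study.

    import java.util.*;

    // Tabulate the degree distribution of a module-dependency graph.
    public class DegreeDistribution {
        public static void main(String[] args) {
            String[][] edges = { {"ui","core"}, {"ui","util"}, {"core","util"},
                                 {"db","core"}, {"db","util"}, {"net","core"} };
            Map<String,Integer> degree = new HashMap<>();
            for (String[] e : edges) {
                degree.merge(e[0], 1, Integer::sum);  // out-degree
                degree.merge(e[1], 1, Integer::sum);  // in-degree
            }
            Map<Integer,Integer> hist = new TreeMap<>();  // degree k -> #modules
            for (int k : degree.values()) hist.merge(k, 1, Integer::sum);
            hist.forEach((k, n) ->
                System.out.printf("degree %d: %d module(s); log-log point (%.2f, %.2f)%n",
                                  k, n, Math.log(k), Math.log(n)));
        }
    }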

    Programming Patterns in Computer Games Course

    Get PDF
    This thesis describes the creation of a new course, "Programming Patterns in Computer Games" (MTAT.03.315). The course covers the application of design patterns to solve recurring problems in game development. Course materials, tasks, grading criteria, and an exam were designed as part of the thesis work. An experiential learning technique for teaching the design patterns through programming tasks was also explored. The course was conducted once and feedback was collected from the students. Finally, the feedback is analyzed and future improvements are proposed.
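    The abstract does not list which patterns the course covers; as one hedged, representative example, the sketch below uses the Observer pattern to decouple achievement logic from gameplay events. All class and event names are hypothetical.

    import java.util.*;

    interface GameEventListener { void onEvent(String event); }

    // Observer pattern: gameplay code publishes events without knowing
    // which systems (achievements, audio, UI, ...) react to them.
    class EventBus {
        private final List<GameEventListener> listeners = new ArrayList<>();
        void subscribe(GameEventListener l) { listeners.add(l); }
        void publish(String event) { listeners.forEach(l -> l.onEvent(event)); }
    }

    class AchievementSystem implements GameEventListener {
        public void onEvent(String event) {
            if (event.equals("ENEMY_DEFEATED"))
                System.out.println("Achievement unlocked: First Blood");
        }
    }

    public class GameDemo {
        public static void main(String[] args) {
            EventBus bus = new EventBus();
            bus.subscribe(new AchievementSystem());
            bus.publish("ENEMY_DEFEATED");  // combat code stays decoupled
        }
    }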

    Revisiting the refactoring names

    Get PDF
    Refactoring is a key practice in agile methodologies, used by many developers and available in professional IDEs. There are books and papers that explain the refactorings and analyze problems related to their names. Some works have identified that the refactoring names used in automated refactoring tools can confuse developers. However, we do not know to what extent refactoring names are confusing in the context of small-grained transformations. In this work, we conduct a mixed-method study from different perspectives to better understand the meaning of refactoring names for developers and for tool developers (refactoring implementations and refactoring detection tools). In the first study, we revisit the refactoring names by conducting a survey with 107 developers of popular Java projects on GitHub. We asked them about the output of seven refactoring types applied to small programs. This study finds that developers do not expect the same output for all questions, even with small Java programs as input; the meaning of refactoring names is based on developers' experience for a number of them (71.02%). In the second study, we observe to what extent refactoring implementations have the same meaning as the refactoring names. We apply 10 types of refactorings to 157,339 programs using 27 refactoring implementations from three tools, using the same input and parameters, and compare the outputs. We categorize the differences into 17 types, which occur in 9 out of the 10 refactoring types implemented by Eclipse, NetBeans, and JRRT. In the third study, we compare the meaning of the refactoring names used in a tool that detects refactorings (RMiner) with the refactoring implementations of the three tools. RMiner does not yield the same set of refactorings applied by the Eclipse, NetBeans, and JRRT implementations in 48.57%, 35%, and 9.22% of the cases, respectively. Overall, developers and tool developers use different meanings for refactoring names, and this may impact communication between developers and researchers.
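    To make the ambiguity concrete, here is a hedged, hypothetical illustration (not one of the study's survey programs): two defensible outputs for the same "Extract Method" request, differing in whether duplicated logic elsewhere in the class is also replaced by a call to the new method.

    // Before: the multiplication w * h is duplicated.
    class Before {
        int area(int w, int h)          { return w * h; }
        int volume(int w, int h, int d) { return w * h * d; }
    }

    // Output A: extract only the selected occurrence.
    class AfterA {
        int area(int w, int h)          { return rect(w, h); }
        int volume(int w, int h, int d) { return w * h * d; }  // duplicate untouched
        int rect(int w, int h)          { return w * h; }
    }

    // Output B: also replace the duplicate; some developers expect this
    // from the same refactoring name, others do not.
    class AfterB {
        int area(int w, int h)          { return rect(w, h); }
        int volume(int w, int h, int d) { return rect(w, h) * d; }
        int rect(int w, int h)          { return w * h; }
    }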

    Improving modularity of interactive software with the MDPC Architecture

    Get PDF
    The "Model - Display view - Picking view - Controller" (MDPC) model is a refinement of the MVC architecture. It introduces the "Picking view" component, which relieves the controller of the need to analytically compute the picked element. We describe how the MDPC architecture improves modularity and descriptive ability when implementing interactive components. We report on the use of the MDPC architecture in a real application: we measured concrete gains in controller code, which becomes simpler and more focused.
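    Below is a minimal sketch of how the four MDPC roles could be separated in code, using a scrollbar as the interactive component; the interface names and the toy picking function are our reading of the abstract, not the paper's API.

    import java.awt.Point;

    interface Model { double value(); void setValue(double v); }
    interface DisplayView { void render(Model m); }  // draws for the user
    // The picking view answers "which logical element is under this point?",
    // so the controller never computes geometry analytically.
    interface PickingView { String pick(Point p); }

    class ScrollbarController {
        void onPress(Point p, Model m, PickingView pv) {
            if ("thumb".equals(pv.pick(p)))
                System.out.println("start dragging thumb at value " + m.value());
        }
    }

    public class MdpcDemo {
        public static void main(String[] args) {
            Model m = new Model() {
                private double v = 0.5;
                public double value() { return v; }
                public void setValue(double nv) { v = nv; }
            };
            PickingView pv = p -> p.y < 20 ? "thumb" : "trough";  // toy picking image
            new ScrollbarController().onPress(new Point(5, 10), m, pv);
        }
    }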

    Lively3D: building a 3D desktop environment as a single page application

    Get PDF
    The Web has rapidly evolved from a simple document browsing and distribution environment into a rich software platform where desktop-style applications are treated as first-class citizens. Despite the associated technical complexities and limitations, it is not unusual to find complex applications that build on the web as their only platform, with no traditional installable desktop application: such systems are simply accessed via a web page that is downloaded into the browser, and once loading completes, the application begins executing immediately. With recent standardization efforts, including HTML5 and WebGL in particular, compelling, visually rich applications are increasingly well supported by browsers. In this paper, we demonstrate the new facilities of the browser as a visualization tool, going beyond what is expected of traditional web applications. In particular, we demonstrate that with mashup technologies, which enable combining already existing content from various sites into an integrated experience, the new graphics facilities unleash unforeseen potential for visualizations.

    A holistic method for improving software product and process quality

    Get PDF
    The concept of quality in general is elusive and multi-faceted, and is perceived differently by different stakeholders. Quality is difficult to define and extremely difficult to measure. Deficient software systems regularly result in failures, which often lead to significant financial losses and, more importantly, to loss of human lives. Such systems need to be either scrapped and replaced by new ones, or corrected and improved through maintenance. One of the most serious challenges is how to deal with legacy systems which, even when not failing, inevitably require upgrades, maintenance and improvement because of malfunctioning or changing requirements, or because of changing technologies, languages or platforms. In such cases, the dilemma is whether to develop solutions from scratch or to re-engineer the legacy system. This research addresses this dilemma and seeks to establish a rigorous method for deriving indicators which, together with management criteria, can help decide whether restructuring of a legacy system is advisable. At the same time, as the software engineering community has moved from corrective methods to preventive methods, concentrating on both product quality improvement and process quality improvement has become imperative. This research combines Product Quality Improvement, primarily through the re-engineering of legacy systems, with Process Improvement methods, models and practices, and uses a holistic approach to study the interplay of Product and Process Improvement. The re-engineering factor rho, a composite metric, was proposed and validated. The design and execution of formal experiments tested hypotheses on the relationship between internal (code-based) and external (behavioural) metrics. In addition to proving the hypotheses, the insights gained on logistics challenges led to a framework for the design and execution of controlled experiments in Software Engineering. The next part of the research produced the novel, generic and hence customisable quality model GEQUAMO, which observes the principle of orthogonality and combines a top-down analysis for the identification, classification and visualisation of software quality characteristics with a bottom-up method for measurement and evaluation. GEQUAMO II addressed weaknesses identified during various GEQUAMO implementations and through expert validation by academics and practitioners. Further work on Process Improvement investigated Process Maturity and its relationship to Knowledge Sharing, and resulted in the I5P Visualisation Framework for Performance Estimation through the Alignment of Process Maturity and Knowledge Sharing. I5P was used in industry and was validated by experts from academia and industry. Using the principles that guided the creation of the GEQUAMO model, the CoFeD visualisation framework was developed for the comparative quality evaluation and selection of methods, tools, models and other software artifacts. CoFeD is particularly useful because selecting the wrong methods, tools or even personnel is detrimental to the survival and success of projects and organisations, and even of individuals. Finally, throughout many years of research and teaching in Software Engineering, Information Systems and Methodologies, I observed ambiguities of terminology: one term is used to mean different concepts, and one concept is expressed in different terms. These practices result in a lack of clarity. Thus my final contribution comprises reflections on terminology disambiguation for the achievement of clarity, and a framework for disambiguating terms as a necessary step towards gaining maturity and justifying the use of the term "Engineering" 50 years after the term Software Engineering was coined. This research resulted in the creation of new knowledge in the form of novel indicators, models and frameworks which can aid quantification and decision-making, primarily on the re-engineering of legacy code and on the management of process and its improvement. The thesis also contributes to the broader debate and understanding of problems relating to Software Quality, and establishes the need for a holistic approach to software quality improvement from both the product and the process perspectives.