47 research outputs found

    A Tool for Visual Understanding of Source Code Dependencies

    Full text link
    Many program comprehension tools use graphs to visualize and analyze source code. The main issue is that existing approaches create graphs overloaded with too much information. Graphs contain hundreds of nodes and even more edges that cross each other. Understanding these graphs and using them for a given program comprehension task is tedious, and in the worst case developers stop using the tools. In this paper we present DA4Java, a graph-based approach for visualizing and analyzing static dependencies between Java source code entities. The main contribution of DA4Java is a set of features to incrementally compose graphs and remove irrelevant nodes and edges from graphs. This leads to graphs that contain significantly fewer nodes and edges and need less effort to understand.
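
    The compose-and-prune idea can be illustrated with a small Python sketch (this is not DA4Java's API; the class names and the use of networkx are assumptions made for illustration):

    # A minimal sketch (not DA4Java itself) of the core idea: compose a dependency
    # graph incrementally and prune nodes that are irrelevant to the current
    # comprehension task. Class and package names are made up for illustration.
    import networkx as nx

    g = nx.DiGraph()

    # Incrementally add only the entities the developer asks for.
    def add_dependency(graph, caller, callee, kind="calls"):
        graph.add_edge(caller, callee, kind=kind)

    add_dependency(g, "ui.OrderView", "domain.Order")
    add_dependency(g, "ui.OrderView", "util.Logger")
    add_dependency(g, "domain.Order", "util.Logger")
    add_dependency(g, "domain.Order", "persistence.OrderDao")

    # Remove nodes that are irrelevant for the task at hand (e.g. logging utilities),
    # which shrinks the graph the developer has to read.
    def prune(graph, irrelevant):
        graph.remove_nodes_from([n for n in graph if n in irrelevant])

    prune(g, {"util.Logger"})
    print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 2 edges remain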

    RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for Evaluating Robotics Computing System Performance

    Full text link
    We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to evaluate robotics computing performance across a diverse range of hardware platforms using ROS 2 as its common baseline. The suite encompasses ROS 2 packages covering the full robotics pipeline and integrates two distinct benchmarking approaches: black-box testing, which measures performance by eliminating upper layers and replacing them with a test application, and grey-box testing, an application-specific measure that observes internal system states with minimal interference. Our benchmarking framework provides ready-to-use tools and is easily adaptable for the assessment of custom ROS 2 computational graphs. Drawing from the knowledge of leading robot architects and system architecture experts, RobotPerf establishes a standardized approach to robotics benchmarking. As an open-source initiative, RobotPerf remains committed to evolving with community input to advance the future of hardware-accelerated robotics.
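
    As a rough illustration of the two benchmarking styles (not RobotPerf's actual tooling, and without ROS 2), the Python sketch below times a stand-in processing stage once end-to-end (black-box) and once per internal step (grey-box):

    # Schematic sketch of the two measurement styles: black-box timing wraps the
    # whole stage from the outside, grey-box timing instruments individual internal
    # steps. The workload below is a placeholder for a real ROS 2 node.
    import time

    def perception_stage(frame):
        # Stand-in for a processing callback.
        filtered = [x * 0.5 for x in frame]          # internal step 1: filtering
        features = sum(filtered) / len(filtered)     # internal step 2: feature extraction
        return features

    frame = list(range(10_000))

    # Black-box: measure end-to-end latency only.
    t0 = time.perf_counter()
    perception_stage(frame)
    black_box_ms = (time.perf_counter() - t0) * 1e3

    # Grey-box: time the internal steps separately, with minimal interference.
    t0 = time.perf_counter()
    filtered = [x * 0.5 for x in frame]
    t1 = time.perf_counter()
    sum(filtered) / len(filtered)
    t2 = time.perf_counter()

    print(f"black-box: {black_box_ms:.3f} ms")
    print(f"grey-box:  filter {(t1 - t0) * 1e3:.3f} ms, features {(t2 - t1) * 1e3:.3f} ms")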

    The Pervasiveness of Global Data in Evolving Software Systems

    Full text link
    In this research, we investigate the role of common coupling in evolving software systems. It can be argued that most software developers understand that the use of global data has many harmful side-effects, and thus should be avoided. We are therefore interested in the answer to the following question: if global data does exist within a software project, how does global data usage evolve over a software project’s lifetime? Perhaps the constant refactoring and perfective maintenance eliminate global data usage, or conversely, perhaps the constant addition of features and rapid development introduce an increasing reliance on global data? We are also interested in identifying whether global data usage patterns are useful as a software metric that is indicative of an interesting or significant event in the software’s lifetime. The focus of this research is twofold: first, to develop an effective and automatic technique for studying global data usage over the lifetime of large software systems, and second, to leverage this technique in a case study of global data usage for several large and evolving software systems in an effort to answer these questions.
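
    A hypothetical sketch of such an automated measurement, assuming a local git clone and using a crude count of 'extern' declarations as a proxy for global data usage (this is not the paper's actual technique):

    # Walk a C project's git history and count 'extern' declarations per revision.
    # The repository path, file name and regex heuristic are all assumptions.
    import re
    import subprocess

    REPO = "/path/to/repo"        # assumption: a local git clone
    FILE = "src/main.c"           # assumption: one file of interest

    def git(*args):
        return subprocess.run(["git", "-C", REPO, *args],
                              capture_output=True, text=True, check=True).stdout

    revisions = git("rev-list", "--reverse", "HEAD", "--", FILE).split()
    extern_decl = re.compile(r"^\s*extern\b", re.MULTILINE)

    for rev in revisions:
        try:
            source = git("show", f"{rev}:{FILE}")
        except subprocess.CalledProcessError:
            continue              # file absent in this revision (e.g. deleted)
        print(rev[:8], len(extern_decl.findall(source)))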

    Predicting the fix time of bugs

    Full text link
    Two important questions concerning the coordination of development effort are which bugs to fix first and how long it takes to fix them. In this paper we investigate empirically the relationships between bug report attributes and the time to fix. The objective is to compute prediction models that can be used to recommend whether a new bug should and will be fixed fast or will take more time for resolution. We examine in detail whether attributes of a bug report can be used to build such a recommender system. We use decision tree analysis to compute prediction models and 10-fold cross-validation to test them. We explore prediction models in a series of empirical studies with bug report data of six systems of the three open source projects Eclipse, Mozilla, and Gnome. Results show that our models perform significantly better than random classification. For example, fast fixed Eclipse Platform bugs were classified correctly with a precision of 0.654 and a recall of 0.692. We also show that the inclusion of post-submission bug report data of up to one month can further improve prediction models.
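
    A minimal sketch of the modelling setup described above, using scikit-learn's decision tree and 10-fold cross-validation on synthetic placeholder bug-report attributes (the features and labels are invented, not the paper's data):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 500

    # Placeholder features: e.g. severity, priority, number of comments, reporter experience.
    X = rng.integers(0, 10, size=(n, 4))
    # Placeholder label: 1 = "fixed fast", 0 = "slow", loosely tied to the features.
    y = (X[:, 0] + X[:, 1] + rng.integers(0, 5, size=n) > 10).astype(int)

    model = DecisionTreeClassifier(max_depth=5, random_state=0)
    scores = cross_val_score(model, X, y, cv=10, scoring="precision")
    print(f"mean precision over 10 folds: {scores.mean():.3f}")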

    Using Source Code Metrics to Predict Change-Prone Java Interfaces

    No full text
    Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to what extent existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we use the metrics to build models for predicting change-prone Java interfaces. Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone. Accepted for publication in the Proceedings of the International Conference on Software Maintenance, 2011, IEEE CS Press.
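
    The two analysis steps, correlation and classification, can be sketched as follows; the metric values are synthetic and the cohesion metric here is only a stand-in, not the paper's external interface cohesion metric:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 200

    cohesion = rng.uniform(0, 1, n)                     # stand-in for an interface cohesion metric
    changes = rng.poisson(lam=1 + 8 * (1 - cohesion))   # synthetic fine-grained change counts

    # Step 1: correlate the metric with the number of changes.
    rho, p = spearmanr(cohesion, changes)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

    # Step 2: classify interfaces as change-prone (here: above-median changes).
    X = cohesion.reshape(-1, 1)
    y = (changes > np.median(changes)).astype(int)
    auc = cross_val_score(LogisticRegression(), X, y, cv=10, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {auc.mean():.2f}")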

    Can developer-module networks predict failures?

    No full text
    Software teams should follow a well-defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call a contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. In particular, we investigate which centrality measures are significant for predicting the probability and number of post-release failures. Results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. Results further confirm that the number of authors and the number of commits are significant predictors of the probability of post-release failures. For predicting the number of post-release failures, the closeness centrality measure is the most significant.
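
    A toy contribution network and its closeness centrality can be sketched with networkx (the developers and binaries below are hypothetical, not the Windows Vista data):

    # Developers and modules as nodes, contribution edges between them, and the
    # closeness centrality of module nodes as the fragmentation signal.
    import networkx as nx

    g = nx.Graph()
    edges = [                       # hypothetical developer -> module contributions
        ("dev1", "kernel.dll"), ("dev2", "kernel.dll"), ("dev3", "kernel.dll"),
        ("dev1", "net.dll"), ("dev3", "ui.dll"), ("dev2", "ui.dll"),
    ]
    g.add_edges_from(edges)

    modules = {"kernel.dll", "net.dll", "ui.dll"}
    closeness = nx.closeness_centrality(g)
    for m in sorted(modules, key=lambda m: -closeness[m]):
        print(f"{m}: closeness = {closeness[m]:.2f}")
    # Under the paper's hypothesis, more central modules would be more failure-prone.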

    A Genetic Algorithm to Find the Adequate Granularity for Service Interfaces

    No full text
    The relevance of the granularity of service interfaces and its architectural impact have been widely investigated in the literature. Existing studies show that the granularity of a service interface, in terms of exposed operations, should reflect its clients' usage. This idea has been formalized in the Consumer-Driven Contracts (CDC) pattern. However, to the best of our knowledge, no studies propose techniques to assist providers in finding the right granularity and in easing the adoption of the CDC pattern. In this paper, we propose a genetic algorithm that mines the clients' usage and suggests Facade services whose granularity reflects the usage of each different type of client. These services can be deployed on top of the original service and become contracts for the different types of clients, satisfying the CDC pattern. A first study shows that the genetic algorithm is capable of finding Facade services and that it outperforms a random search approach.
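
    A toy genetic algorithm in this spirit is sketched below; the operations, per-client usage data, fitness function, and genetic operators are simplified assumptions, not the paper's algorithm:

    # Assign a service's operations to facades so that each client type finds the
    # operations it uses in one small facade.
    import random

    random.seed(0)

    OPERATIONS = ["getOrder", "listOrders", "createOrder", "cancelOrder", "getInvoice", "payInvoice"]
    CLIENT_USAGE = {                      # hypothetical per-client-type usage
        "webshop": {"getOrder", "listOrders", "createOrder"},
        "billing": {"getInvoice", "payInvoice"},
        "support": {"getOrder", "cancelOrder"},
    }
    N_FACADES = 3

    def fitness(chrom):
        # A facade is the set of operations mapped to the same index.
        facades = [set() for _ in range(N_FACADES)]
        for op, idx in zip(OPERATIONS, chrom):
            facades[idx].add(op)
        score = 0.0
        for used in CLIENT_USAGE.values():
            # Best facade per client: penalize missing and superfluous operations.
            score -= min(len(used - f) + len(f - used) for f in facades)
        return score

    def mutate(chrom, rate=0.2):
        return [random.randrange(N_FACADES) if random.random() < rate else g for g in chrom]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randrange(N_FACADES) for _ in OPERATIONS] for _ in range(30)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]         # simple truncation selection
        population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                for _ in range(20)]

    best = max(population, key=fitness)
    for i in range(N_FACADES):
        print(f"facade {i}:", [op for op, idx in zip(OPERATIONS, best) if idx == i])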

    Analyzing the Impact of Antipatterns on Change-Proneness Using Fine-Grained Source Code Changes

    Get PDF
    Preprint of paper published in: WCRE 2012 - Proceedings of the 19th Working Conference on Reverse Engineering, 15-18 October 2012; doi:10.1109/WCRE.2012.53. Antipatterns are poor solutions to design and implementation problems which are claimed to make object-oriented systems hard to maintain. Our recent studies showed that classes with antipatterns change more frequently than classes without antipatterns. In this paper we detail these analyses by taking into account fine-grained source code changes (SCC) extracted from 16 Java open source systems. In particular, we investigate: (1) whether classes with antipatterns are more change-prone (in terms of SCC) than classes without; (2) whether the type of antipattern impacts the change-proneness of Java classes; and (3) whether certain types of changes are performed more frequently in classes affected by a certain antipattern. Our results show that: (1) the number of SCC performed in classes affected by antipatterns is statistically greater than the number of SCC performed in classes with no antipattern; (2) classes participating in the three antipatterns ComplexClass, SpaghettiCode, and SwissArmyKnife are more change-prone than classes affected by other antipatterns; and (3) certain types of changes are more likely to be performed in classes affected by certain antipatterns; for example, API changes are likely to be performed in classes affected by the ComplexClass, SpaghettiCode, and SwissArmyKnife antipatterns.
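
    The kind of group comparison reported above can be sketched as follows; the change counts are synthetic, and the Mann-Whitney U test is one common choice for such a comparison since the abstract does not name the exact test used:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(2)
    scc_with_antipattern = rng.poisson(lam=12, size=150)      # synthetic SCC counts
    scc_without_antipattern = rng.poisson(lam=7, size=400)

    # One-sided test: are SCC counts for antipattern classes stochastically greater?
    stat, p = mannwhitneyu(scc_with_antipattern, scc_without_antipattern, alternative="greater")
    print(f"U = {stat:.0f}, one-sided p = {p:.3g}")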

    An Exploratory Study of the Pull-based Software Development Model

    No full text
    The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably GitHub, have tapped into this opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority.

    Towards a weighted voting system for Q&A sites

    No full text
    Version: Accepted for publication in the Proceedings of the International Conference on Software Maintenance (ICSM), 2013, IEEE Computer Society. DOI: http://dx.doi.org/10.1109/ICSM.2013.49 Q&A sites have become popular places to share and look for valuable knowledge. Users can easily and quickly access high-quality answers to common questions. The main mechanism to label good answers is to count the votes per answer. This mechanism, however, does not consider whether other answers were present at the time a vote was given. Consequently, good answers that were given later are likely to receive fewer votes than they would have received if given earlier. In this paper we present a Weighted Votes (WV) metric that gives different weights to votes depending on how many answers were present when the vote was cast. The idea behind WV is to emphasize the answer that receives most of the votes when most of the answers were already posted. Mining the Stack Overflow data dump, we show that the WV metric highlights between 4.07% and 10.82% of answers that differ from the most-voted ones.
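
    A sketch of a weighted-votes computation in this spirit is shown below; the abstract does not give the exact WV formula, so the weight used here (the fraction of the question's answers already posted when the vote was cast) is an assumption:

    from datetime import datetime

    answers = {                                # hypothetical answer post times
        "a1": datetime(2013, 1, 1, 10, 0),
        "a2": datetime(2013, 1, 1, 12, 0),
        "a3": datetime(2013, 1, 2, 9, 0),
    }
    votes = [                                  # (answer voted for, time of the vote)
        ("a1", datetime(2013, 1, 1, 11, 0)),
        ("a1", datetime(2013, 1, 1, 13, 0)),
        ("a3", datetime(2013, 1, 3, 8, 0)),
        ("a3", datetime(2013, 1, 3, 9, 0)),
    ]

    def weighted_votes(answer_id):
        total = 0.0
        for voted, when in votes:
            if voted != answer_id:
                continue
            present = sum(1 for t in answers.values() if t <= when)
            total += present / len(answers)    # votes cast when more answers were visible count more
        return total

    for a in answers:
        raw = sum(1 for v, _ in votes if v == a)
        print(a, f"raw = {raw}", f"WV = {weighted_votes(a):.2f}")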