2,357 research outputs found

    Fine-grained Metrics of Cohesion Lack for Service Interfaces

    Get PDF
    A design issue that often appears in real-world services is that their interfaces are not cohesive, i.e., they consist of many, possibly unrelated operations. This issue can complicate the comprehension of a service's functionality and the maintenance of the applications that use it. Currently, the state of the art on cohesion metrics for service interfaces is limited. In particular, existing coarse-grained metrics of cohesion lack consider the operations of a service interface related only if the types of some of their input/output data match exactly. The problem with this approach is that operations which operate on data with similar, but not exactly matching, types are treated as totally unrelated. Consequently, these metrics may overestimate the cohesion lack of service interfaces. In this paper, we take a more elaborate approach. Specifically, we propose two fine-grained metrics of cohesion lack, defined with respect to the structural similarity of the input/output data types of interface operations. The proposed metrics are formally defined and analytically assessed against fundamental properties of software metrics. Moreover, the usefulness of the metrics in identifying cohesion problems is evaluated on real-world services.
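
    To make the idea concrete, a minimal Python sketch of such a fine-grained measure is given below; the flattening of message types, the Jaccard-style similarity, and the averaging over operation pairs are assumptions for illustration, not the metrics defined in the paper.

        # Hypothetical sketch of a fine-grained "lack of cohesion" score for a
        # service interface, based on structural similarity of operation message
        # types. Field names, similarity function, and aggregation are assumptions.
        from itertools import combinations

        def flatten_type(schema, prefix=""):
            """Flatten a nested message type into a set of (path, primitive type) leaves."""
            leaves = set()
            for name, t in schema.items():
                path = f"{prefix}.{name}" if prefix else name
                if isinstance(t, dict):              # nested complex type
                    leaves |= flatten_type(t, path)
                else:                                # primitive type, e.g. "string"
                    leaves.add((path, t))
            return leaves

        def structural_similarity(op_a, op_b):
            """Jaccard overlap of the flattened input/output leaves of two operations."""
            a, b = flatten_type(op_a), flatten_type(op_b)
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def lack_of_cohesion(operations):
            """1 minus the mean pairwise structural similarity over all operation pairs."""
            pairs = list(combinations(operations.values(), 2))
            if not pairs:
                return 0.0
            return 1.0 - sum(structural_similarity(a, b) for a, b in pairs) / len(pairs)

        # Two operations share most of their data structure, one is unrelated.
        interface = {
            "getOrder":    {"orderId": "int", "customer": {"id": "int", "name": "string"}},
            "cancelOrder": {"orderId": "int", "customer": {"id": "int", "name": "string"}},
            "ping":        {"timestamp": "long"},
        }
        print(round(lack_of_cohesion(interface), 2))   # 0.67 for this toy interface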

    Dimensionality Reduction of Quality Objectives for Web Services Design Modularization

    Full text link
    With the increasing use of service-oriented architecture (SOA) in new software development, there is a growing and urgent need to improve current practice in service-oriented design. To improve the design of Web services, the search for Web service interface modularization solutions generally deals with a large set of conflicting quality metrics. Deciding which quality metrics to use, and how, to evaluate the generated solutions is left to the designer, and some of these objectives may be correlated or conflicting. In this paper, we propose a dimensionality reduction approach based on the Non-dominated Sorting Genetic Algorithm (NSGA-II) to address the Web services re-modularization problem. Our approach aims at finding the best reduced set of objectives (e.g., quality metrics) that can generate near-optimal Web service modularization solutions to fix quality issues in Web service interfaces. The algorithm starts with a large number of interface design quality metrics as objectives (e.g., coupling, cohesion, number of ports, number of port types, and number of antipatterns), which are then reduced based on the nonlinear correlation information entropy (NCIE). The statistical analysis of our results, based on a set of 22 real-world Web services provided by Amazon and Yahoo, confirms that our dimensionality reduction approach to Web service interface modularization significantly reduced the number of objectives in several case studies, down to a minimum of 2 objectives, and performed significantly better than state-of-the-art modularization techniques in terms of generating well-designed Web service interfaces for users.
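
    The sketch below illustrates the objective-reduction step under simplifying assumptions: plain Pearson correlation stands in for NCIE, and the greedy keep/drop rule and its threshold are invented for illustration, not the authors' algorithm.

        # Illustrative correlation-based objective reduction for a population of
        # candidate modularization solutions. Pearson correlation is a stand-in
        # for NCIE; the threshold and greedy selection are assumptions.
        import numpy as np

        def reduce_objectives(F, names, threshold=0.9):
            """F: (n_solutions, n_objectives) objective values.
            Keep an objective only if it is not strongly correlated with one already kept."""
            corr = np.abs(np.corrcoef(F, rowvar=False))   # |correlation| between objectives
            kept = []
            for j in range(F.shape[1]):
                if all(corr[j, k] < threshold for k in kept):
                    kept.append(j)
            return [names[j] for j in kept]

        # Toy population: cohesion and coupling strongly (anti-)correlated, ports independent.
        rng = np.random.default_rng(0)
        coupling = rng.random(50)
        cohesion = 1.0 - coupling + 0.01 * rng.standard_normal(50)
        ports = rng.random(50)
        F = np.column_stack([coupling, cohesion, ports])
        print(reduce_objectives(F, ["coupling", "cohesion", "number_of_ports"]))
        # -> ['coupling', 'number_of_ports']: the redundant cohesion objective is dropped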

    Intelligent Web Services Architecture Evolution Via An Automated Learning-Based Refactoring Framework

    Full text link
    Architecture degradation can have a fundamental impact on software quality and productivity, resulting in an inability to support new features, increasing technical debt, and leading to significant losses. While code-level refactoring is widely studied and well supported by tools, architecture-level refactorings, such as repackaging to group related features into one component or retrofitting files into patterns, remain expensive and risky. Several domains, such as Web services, heavily depend on complex architectures to design and implement the interface-level operations that companies such as FedEx, eBay, Google, Yahoo, and PayPal provide to end-users. The objectives of this work are: (1) to advance our ability to support complex architecture refactoring by explicitly defining Web service anti-patterns at various levels of abstraction, (2) to enable complex refactorings by learning from user feedback and creating reusable/personalized refactoring strategies that augment intelligent designer interaction and guide low-level refactoring automation with high-level abstractions, and (3) to enable intelligent architecture evolution by detecting, quantifying, prioritizing, fixing, and predicting design technical debt. We proposed various approaches and tools based on intelligent computational search techniques for (a) predicting and detecting multi-level Web service anti-patterns, (b) creating an interactive refactoring framework that integrates refactoring path recommendation, design-level human abstraction, and code-level refactoring automation with user feedback using interactive multi-objective search, and (c) automatically learning reusable and personalized refactoring strategies for Web services by abstracting recurring refactoring patterns from Web service releases. Based on empirical validations performed on both large open-source and industrial services from multiple providers (eBay, Amazon, FedEx, and Yahoo), we found that the proposed approaches advance our understanding of the correlation and mutual impact between service anti-patterns at different levels, revealing when, where, and how architecture-level anti-patterns affect the quality of services. The interactive refactoring framework enables, based on several controlled experiments, human-based, domain-specific abstraction and high-level design to guide automated code-level atomic refactoring steps for service decompositions. The reusable refactoring strategies package recurring refactoring activities into automatable units, improving refactoring path recommendation and further reducing time-consuming and error-prone human intervention.

    Collecting Service-Based Maintainability Metrics from RESTful API Descriptions: Static Analysis and Threshold Derivation

    Full text link
    While many maintainability metrics have been explicitly designed for service-based systems, tool-supported approaches to automatically collect these metrics are lacking. Especially in the context of microservices, decentralization and technological heterogeneity may pose challenges for static analysis. We therefore propose the modular and extensible RAMA approach (RESTful API Metric Analyzer) to calculate such metrics from machine-readable interface descriptions of RESTful services. We also provide prototypical tool support, the RAMA CLI, which currently parses the formats OpenAPI, RAML, and WADL and calculates 10 structural service-based metrics proposed in the scientific literature. To make RAMA measurement results more actionable, we additionally designed a repeatable benchmark for quartile-based threshold ranges (green, yellow, orange, red). In an exemplary run, we derived thresholds for all RAMA CLI metrics from the interface descriptions of 1,737 publicly available RESTful APIs. Researchers and practitioners can use RAMA to evaluate the maintainability of RESTful services or to support the empirical evaluation of new service interface metrics. (Comment: Accepted at the CSE/QUDOS workshop, co-located with ECSA 2020.)
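
    As a rough illustration of the kind of analysis described (and not the RAMA CLI itself), the sketch below counts operations in an OpenAPI document and derives quartile-based threshold bands from a corpus of metric values; the document and corpus numbers are invented.

        # Hedged sketch: one simple structural metric (operation count) from an
        # OpenAPI 'paths' object, plus quartile threshold bands mirroring the
        # green/yellow/orange/red scheme described above.
        import numpy as np

        HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

        def operation_count(openapi_doc):
            """Number of (path, HTTP method) pairs in the 'paths' object."""
            return sum(
                1
                for methods in openapi_doc.get("paths", {}).values()
                for m in methods
                if m.lower() in HTTP_METHODS
            )

        def threshold_bands(metric_values):
            """Quartile boundaries splitting a corpus of metric values into four bands."""
            q1, q2, q3 = np.percentile(metric_values, [25, 50, 75])
            return {"green": (0, q1), "yellow": (q1, q2), "orange": (q2, q3), "red": (q3, float("inf"))}

        doc = {"paths": {"/orders": {"get": {}, "post": {}}, "/orders/{id}": {"get": {}, "delete": {}}}}
        corpus = [3, 5, 8, 12, 20, 34, 55, 80]        # operation counts of other APIs
        print(operation_count(doc))                    # 4
        print(threshold_bands(corpus))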

    Assessing architectural evolution: A case study

    Get PDF
    This paper proposes to use a historical perspective on generic laws, principles, and guidelines, like Lehman's software evolution laws and Martin's design principles, in order to achieve a multi-faceted process and structural assessment of a system's architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect's dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained effective architectural evolution is paramount. The twofold aim of checking generic principles on a well-known system is, on the one hand, to see whether there are lessons that could be learned for best practice of architectural evolution, and, on the other hand, to gain more insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations, and we discuss areas of architectural improvement and limitations of the assessment approach.
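
    One example of a historical structural metric such a dashboard might track is Martin's instability I = Ce / (Ca + Ce), computed per component and per release; the sketch below uses invented dependency counts and is not the paper's actual model.

        # Minimal sketch of tracking Martin's instability metric over releases.
        def instability(afferent, efferent):
            """Ce / (Ca + Ce); 0 = maximally stable, 1 = maximally unstable."""
            total = afferent + efferent
            return efferent / total if total else 0.0

        # Hypothetical dependency counts for one component over three releases.
        history = {
            "3.5": {"Ca": 40, "Ce": 10},
            "3.6": {"Ca": 42, "Ce": 14},
            "3.7": {"Ca": 41, "Ce": 22},
        }
        for release, deps in history.items():
            print(release, round(instability(deps["Ca"], deps["Ce"]), 2))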

    Package Fingerprint: a visual summary of package interfaces and relationships

    Get PDF
    Context: Object-oriented languages such as Java, Smalltalk, and C++ structure their programs using packages. Maintainers of large systems need to understand how packages relate to each other, but this task is complex because packages often have multiple clients and play different roles (class container, code ownership, etc.). Several approaches have been proposed, among which the use of cohesion and coupling metrics. Such metrics help identify candidate packages for restructuring; however, they do not help maintainers actually understand the structure and interrelationships between packages. Objectives: In this paper, we use pre-attentive processing as the basis for package visualization and examine to what extent it can support package understanding. Method: We present the package fingerprint, a 2D visualization of the references made to and from a package. The proposed visualization offers a semantically rich, yet compact and zoomable, view centered on a package. We focus on two views (incoming and outgoing references) that help users understand how the package under analysis is used by the system and how it uses the system. Results: We applied these views to four large systems: Squeak, JBoss, Azureus, and ArgoUML. We obtained several interesting results, among which the identification of a set of recurring visual patterns that help maintainers: (a) more easily identify the role of a package and the way it is used within the system (e.g., the package under analysis provides a set of layered services), and (b) detect either problematic situations (e.g., a single package that groups together a large number of basic services) or opportunities for better package restructuring (e.g., removing cyclic dependencies among packages). The visualization generally scaled well, and the detection of the different patterns was always possible. Conclusion: The proposed visualizations and patterns proved useful in understanding and maintaining the different systems we addressed. To generalize to other contexts and systems, a real user study is required.
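
    A text-only stand-in for the two fingerprint views is sketched below: it simply tallies incoming and outgoing package references for one package under analysis; the reference data and package names are invented for illustration.

        # Rough stand-in for the incoming/outgoing fingerprint views: count how
        # many class references each package makes to the analyzed package, and
        # how many it makes to each other package.
        from collections import Counter

        def fingerprint(references, package):
            """references: iterable of (from_package, to_package) class-reference pairs."""
            incoming = Counter(src for src, dst in references if dst == package and src != package)
            outgoing = Counter(dst for src, dst in references if src == package and dst != package)
            return incoming, outgoing

        refs = [
            ("ui", "core"), ("ui", "core"), ("net", "core"),
            ("core", "util"), ("core", "util"), ("core", "net"),
        ]
        incoming, outgoing = fingerprint(refs, "core")
        print("used by:", dict(incoming))   # clients of 'core'
        print("uses:   ", dict(outgoing))   # providers 'core' depends on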

    A Bi-Level Multi-Objective Approach for Web Service Design Defects Detection

    Full text link

    Web service QoS prediction using improved software source code metrics

    Get PDF
    Due to the popularity of Web-based applications, various developers have provided an abundance of Web services with similar functionality. Such similarity makes it challenging for users to discover, select, and recommend appropriate Web services for service-oriented systems. Quality of Service (QoS) has become a vital criterion for service discovery, selection, and recommendation. Unfortunately, service registries cannot ensure the validity of the quality values of the Web services available online. Consequently, predicting Web services' QoS values has become a vital way to find the most appropriate services. In this paper, we propose a novel methodology for predicting Web service QoS using source code metrics. The core component aggregates software metrics using an inequality distribution, from the micro level of individual classes to the macro level of the entire Web service. We use the correlation between QoS and software metrics to train the learning machine. We validate and evaluate our approach using three sets of software quality metrics. Our results show that the proposed methodology can improve the prediction of a Web service's QoS properties from its source code metrics.
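
    The aggregation idea can be sketched as follows, assuming a Gini coefficient as the inequality index (the abstract does not name the exact index) and using invented class-level metric values and QoS measurements.

        # Hedged sketch: summarize class-level metric values for each service with
        # an inequality index (Gini coefficient assumed), then correlate the
        # aggregated feature with an observed QoS value across services.
        import numpy as np

        def gini(values):
            """Gini coefficient of non-negative metric values (0 = equal, ->1 = concentrated)."""
            v = np.sort(np.asarray(values, dtype=float))
            n = len(v)
            if n == 0 or v.sum() == 0:
                return 0.0
            index = np.arange(1, n + 1)
            return (2 * np.sum(index * v) / (n * v.sum())) - (n + 1) / n

        # Per-class complexity (e.g. WMC) for three services, plus measured response times.
        services = {
            "svcA": [2, 3, 3, 4],
            "svcB": [1, 1, 2, 25],
            "svcC": [5, 5, 6, 40],
        }
        qos_response_time = {"svcA": 120.0, "svcB": 410.0, "svcC": 530.0}

        features = np.array([gini(v) for v in services.values()])
        targets = np.array([qos_response_time[s] for s in services])
        print(np.corrcoef(features, targets)[0, 1])   # correlation used to select training features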

    A heuristic-based approach to code-smell detection

    Get PDF
    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible, and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open-source systems, comparing the tool's results with similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
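
    The sketch below shows what such heuristics could look like; the thresholds and rules are assumptions for illustration, not those of the described plug-in or of Marinescu's detection strategies.

        # Illustrative data-class / god-class heuristics over per-class summaries.
        from dataclasses import dataclass

        @dataclass
        class ClassSummary:
            name: str
            accessor_methods: int          # getters/setters
            other_methods: int
            weighted_complexity: int       # e.g. sum of method cyclomatic complexities (WMC)
            foreign_attributes_used: int   # attributes of other classes accessed directly

        def is_data_class(c, max_wmc=10):
            """Mostly accessors, little behaviour of its own."""
            total = c.accessor_methods + c.other_methods
            return total > 0 and c.accessor_methods / total >= 0.8 and c.weighted_complexity <= max_wmc

        def is_god_class(c, min_wmc=47, min_foreign=5):
            """Over-complicated class that also reaches into many other classes' data."""
            return c.weighted_complexity >= min_wmc and c.foreign_attributes_used >= min_foreign

        classes = [
            ClassSummary("OrderData", accessor_methods=12, other_methods=1,
                         weighted_complexity=6, foreign_attributes_used=0),
            ClassSummary("OrderManager", accessor_methods=3, other_methods=28,
                         weighted_complexity=80, foreign_attributes_used=11),
        ]
        for c in classes:
            print(c.name,
                  "data class" if is_data_class(c) else "",
                  "god class" if is_god_class(c) else "")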