
    International conference on software engineering and knowledge engineering: Session chair

    The Thirtieth International Conference on Software Engineering and Knowledge Engineering (SEKE 2018) will be held at the Hotel Pullman, San Francisco Bay, USA, from July 1 to July 3, 2018. SEKE 2018 will also be dedicated to the memory of Professor Lotfi Zadeh, a great scholar, pioneer and leader in fuzzy set theory and soft computing. The conference aims to bring together experts in software engineering and knowledge engineering to discuss relevant results in either software engineering or knowledge engineering or both. Special emphasis will be put on the transfer of methods between the two domains. The theme this year is soft computing in software engineering & knowledge engineering. Submissions of both papers and demos are welcome.

    WapMetrics: a tool for computing UML design metrics for Web applications

    Many companies still ask how to assess and predict the maintenance cost of their software. Measures of software maintenance cost can be taken either late or early in the development process. Early measures of software maintenance cost are beneficial because they can help in allocating project resources efficiently, predicting the effort of maintenance tasks and controlling the maintenance process. This paper describes a tool for computing early metrics from UML class diagrams based on the Web Application Extension (WAE) for UML. A case study is used to show the usefulness and effectiveness of the tool.
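    The abstract does not show how WapMetrics computes its metrics, so the following is only a minimal Python sketch of the general idea: tallying classes per WAE stereotype (e.g. "server page", "client page") as an early size metric taken from a class diagram. The ClassInfo record and the stereotype names are illustrative assumptions, not the tool's actual data model.

        # A minimal sketch (not the WapMetrics implementation) of counting
        # WAE-stereotyped classes in a UML class diagram. The ClassInfo record
        # and the stereotype names are illustrative assumptions.
        from collections import Counter
        from dataclasses import dataclass

        @dataclass
        class ClassInfo:
            name: str
            stereotype: str  # e.g. "server page", "client page", "form"

        def count_by_stereotype(classes):
            """Tally classes per WAE stereotype -- a typical early size metric."""
            return Counter(c.stereotype for c in classes)

        diagram = [
            ClassInfo("Login", "server page"),
            ClassInfo("LoginView", "client page"),
            ClassInfo("LoginForm", "form"),
            ClassInfo("Catalog", "server page"),
        ]
        print(count_by_stereotype(diagram))
        # Counter({'server page': 2, 'client page': 1, 'form': 1})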

    Design metrics for web application maintainability measurement

    Many web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This high maintenance cost is due to the heterogeneity of web applications, to fast internet evolution and to the fast-moving market, which imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics for predicting web applications' maintainability must be used. This paper provides an exploratory study of new design metrics for measuring the maintainability of web applications from class diagrams. The metrics are based on the Web Application Extension (WAE) for UML and measure the following design attributes: size, complexity, coupling and reusability. In this study, the metrics are applied to two web applications from the telecommunications domain.
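    As an illustration of the kind of coupling metric such a study might compute from a class diagram, here is a hedged Python sketch of a simple fan-out measure: the number of distinct classes each class is associated with. The edge-list representation and the class names are assumptions made for the example, not the paper's actual metric definitions.

        # Sketch of a simple design-level coupling metric: fan-out per class,
        # computed from the association edges of a class diagram. The edge
        # list below is invented for illustration.
        from collections import defaultdict

        associations = [  # (source class, target class) pairs from the diagram
            ("SearchPage", "ResultPage"),
            ("SearchPage", "QueryBuilder"),
            ("ResultPage", "QueryBuilder"),
        ]

        def fan_out(edges):
            coupling = defaultdict(set)
            for src, dst in edges:
                coupling[src].add(dst)
            return {cls: len(targets) for cls, targets in coupling.items()}

        print(fan_out(associations))  # {'SearchPage': 2, 'ResultPage': 1}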

    Error propagation metrics from XMI

    This work describes the production of an application, Error Propagation Metrics from XMI, which can extract, process and display software design metrics from XMI files. The tool archives these design metrics in a standard XML format defined by a metric document type definition. XMI is a flavour of XML that allows the description of UML models. As such, the XMI representation of a software design includes information from which a variety of software design metrics can be extracted. These metrics are potentially useful in improving the software design process, either throughout the early stages of design, if a suitable XMI-enabled modelling tool is deployed, or to enable the comparison of completed software projects, by extracting design metrics from UML models reverse engineered from the implemented source code. The tool is able to derive error propagation metrics from test XMI files created from UML sequence and state diagrams and from reverse-engineered Java source code. However, variation was observed between the XMI representations generated by different software design tools, limiting the ability of the tool to process XMI from all sources. Furthermore, it was noted that subtle differences between UML design representations might have a marked effect on the quality of the metrics derived. In conclusion, in order to validate the usefulness of the metrics that can be extracted from XMI files, it would be useful to follow well-documented design projects throughout the total design and implementation process. Alternatively, the tool might be used to compare metrics from well-matched design implementations. In either case, design metrics will only be of true value to software engineers if they can be associated empirically with a validated measure of system quality.
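    Since XMI is plain XML, the extraction step described above can be illustrated with a short Python sketch that counts UML classes in an XMI file. As the abstract itself notes, tag and attribute conventions vary between modelling tools, so the namespace URI and the xmi:type value below are assumptions that hold for common UML2-style exports, not for every tool.

        # Sketch of pulling a basic design metric out of an XMI file with the
        # standard library. Tool outputs differ, so the namespace and the
        # xmi:type value "uml:Class" are assumptions, not universal.
        import xml.etree.ElementTree as ET

        XMI_NS = "http://www.omg.org/XMI"  # common, but tool-dependent

        def count_classes(xmi_path):
            tree = ET.parse(xmi_path)
            count = 0
            for elem in tree.iter():
                # UML2-style XMI marks classes via xmi:type="uml:Class"
                if elem.get(f"{{{XMI_NS}}}type") == "uml:Class":
                    count += 1
            return count

        if __name__ == "__main__":
            print(count_classes("model.xmi"))  # path is a placeholder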

    A novel model for improving the maintainability of web-based systems

    Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This is due to the inherent characteristics of web applications, to the fast evolution of the internet and to the pressing market, which imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Maintainability metrics and models can be useful for predicting maintenance cost and risky components, and can help in assessing and choosing between different software artifacts. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied with confidence to web applications. Web applications have special features such as hypertext structure, dynamic code generation and heterogeneity that cannot be captured by traditional and object-oriented metrics. This research explores empirically the relationships between new UML design metrics based on Conallen's extension for web applications and maintainability. UML web design metrics are used to gauge whether the maintainability of a system can be improved, by comparing and correlating the results with different measures of maintainability. We studied the relationship between our UML metrics and the following maintainability measures: Understandability Time (the time spent on understanding the software artifact in order to complete the questionnaire), Modifiability Time (the time spent on identifying places for modification and making those modifications on the software artifact), LOC (the absolute net value of the total number of lines added and deleted for components in a class diagram), and nRev (the total number of revisions for components in a class diagram). Our results gave an indication that a relationship may exist between our metrics and Modifiability Time. However, the results did not show a statistically significant effect of the metrics on Understandability Time. Our results showed that there is a relationship between our metrics and LOC: the NAssoc, NClientScriptsComp, NServerScriptsComp, and CoupEntropy metrics explained the effort measured by LOC, while the NC and CoupEntropy metrics explained the effort measured by nRev. Our results give a first indication of the usefulness of the UML design metrics; they show that there is a reasonable chance that useful prediction models can be built from early UML design metrics.
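    The thesis defines CoupEntropy precisely; purely as a hedged illustration of what an entropy-style coupling measure looks like, the sketch below computes the Shannon entropy of the distribution of associations across classes. Both the formulation and the example counts are assumptions, not the thesis's exact definition.

        # Hedged sketch of an entropy-style coupling measure in the spirit of
        # CoupEntropy. This Shannon-entropy formulation over per-class
        # association counts is an assumption, not the thesis's definition.
        import math

        def coupling_entropy(association_counts):
            """Shannon entropy of the distribution of associations per class."""
            total = sum(association_counts)
            probs = [c / total for c in association_counts if c > 0]
            return -sum(p * math.log2(p) for p in probs)

        # e.g. three classes receiving 4, 1, and 1 associations respectively
        print(round(coupling_entropy([4, 1, 1]), 3))  # 1.252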

    Architectural level risk assessment

    Many companies develop and maintain large-scale software systems for public and financial institutions. Should a failure occur in one of these systems, the impact would be enormous. It is therefore essential, in maintaining a system's quality, to identify defects early in the development process in order to prevent the occurrence of failures. However, testing all modules of these systems to identify defects can be very expensive. There is therefore a need for methodologies and tools that support software engineers in identifying defective and complex software components early in the development process. Risk assessment is an essential process for ensuring high-quality software products. By performing risk assessment during the early software development phases we can identify complex modules, which enables us to improve resource allocation decisions. To assess the risk of software systems early in the software's life cycle, we propose an architectural-level risk assessment methodology. It uses UML specifications of software systems, which are available early in the software life cycle. It combines the probability of software failures and the severity associated with these failures to estimate software risk factors for software architectural elements (components and connectors), scenarios, use cases and systems. As a result, remedial actions to control and improve the quality of the software product can be taken. We build a risk assessment model which enables us to identify complex and non-complex software components, to estimate programming and service effort, and to estimate testing effort. The model also enables us to identify components with a high risk factor, which would require the development of effective fault-tolerance mechanisms. To estimate the probability of software failure, we introduce a set of dynamic metrics which measure the dynamic behavior of software architectural elements from UML static models. To estimate the severity of software failure, we propose a UML-based severity methodology. We also propose a validation process for both the risk and severity methodologies. Finally, we propose prototype tool support for the automation of the risk assessment methodology.
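    The core combination step (failure probability weighted by failure severity) can be shown in a few lines. The Python sketch below is a minimal illustration assuming a simple product of the two factors; the component names, probabilities and severity weights are invented, and the methodology's actual estimation of both inputs comes from the UML-based dynamic metrics and severity analysis described above.

        # Minimal sketch of architecture-level risk: a component's risk factor
        # as failure probability times failure severity. All numbers and names
        # here are invented for illustration.
        components = {
            # name: (estimated failure probability, severity weight in [0, 1])
            "PaymentGateway": (0.15, 0.95),
            "ReportViewer":   (0.30, 0.20),
            "SessionManager": (0.10, 0.70),
        }

        risk = {name: p * s for name, (p, s) in components.items()}
        for name, r in sorted(risk.items(), key=lambda kv: -kv[1]):
            print(f"{name}: risk factor {r:.3f}")
        # PaymentGateway ranks highest despite a lower failure probability,
        # because its failures are the most severe.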

    A COUPLING AND COHESION METRICS SUITE FOR

    The increasing need for software quality measurement has led to extensive research into software metrics and the development of software metric tools. To maintain high-quality software, developers need to strive for a low-coupled and highly cohesive design. One of many properties considered when measuring coupling and cohesion is the type of relationships that make up coupling and cohesion. What these specific relationships are is widely understood and accepted by researchers and practitioners. However, different researchers base their metrics on different subsets of these relationships. Studies have shown that because multiple subsets of relationships are included in a single measure of coupling or cohesion, the measures tend to correlate with each other. Validation of these metrics against the maintainability index of a Java program suggested that there is high multicollinearity among coupling and cohesion metrics. This research introduces an approach to implementing coupling and cohesion metrics in which every possible relationship is considered and, for each, the question of whether or not it has a significant effect on maintainability index prediction is addressed. The orthogonality of the selected metrics is assessed by means of principal component analysis. The investigation suggested that some of the metrics form an independent set, while others measure a similar dimension.
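    The orthogonality check via principal component analysis can be sketched directly: run PCA over a matrix of per-class metric values and inspect how the variance spreads across components. The tiny data matrix below is fabricated for illustration; only the use of PCA itself comes from the abstract.

        # Sketch of the PCA orthogonality check: if one principal component
        # explains nearly all variance, the metrics are highly collinear and
        # measure a similar dimension. The data matrix is invented.
        import numpy as np

        metrics = np.array([  # rows: classes; columns: four metric values
            [4, 12, 0.8, 2],
            [7, 20, 0.6, 5],
            [2,  6, 0.9, 1],
            [9, 25, 0.5, 7],
            [5, 14, 0.7, 3],
        ], dtype=float)

        # standardise, then PCA via SVD of the centred matrix
        z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
        _, s, _ = np.linalg.svd(z, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        print(np.round(explained, 3))  # variance explained per component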

    A comparative analysis of maintainability approaches for web applications

    Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. The high maintenance cost of web applications is due to the inherent characteristics of web applications, to the fast evolution of the internet and to the pressing market, which imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied to web applications. The reason is that web applications have special features such as hypertext structure, dynamic code generation and heterogeneity that cannot be captured by traditional and object-oriented metrics. In this paper, we provide a comparative analysis of the different approaches for predicting web applications' maintainability.

    A systematic literature review on the code smells datasets and validation mechanisms

    The accuracy reported for code-smell-detecting tools varies depending on the dataset used to evaluate the tools. Our survey of 45 existing datasets reveals that the adequacy of a dataset for detecting smells depends heavily on relevant properties such as its size, severity levels, project types, the number of each type of smell, the total number of smells, and the ratio of smelly to non-smelly samples in the dataset. Most existing datasets support God Class, Long Method, and Feature Envy, while six smells in Fowler and Beck's catalog are not supported by any dataset. We conclude that existing datasets suffer from imbalanced samples, lack of severity-level support, and restriction to the Java language.
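    The imbalance property the review highlights is easy to quantify; as a small illustration, the sketch below computes the ratio of smelly to non-smelly samples in a labelled dataset. The labels are invented for the example; a real dataset would supply them per code element.

        # Small sketch of the smelly/non-smelly imbalance ratio of a labelled
        # code-smell dataset. The label list is invented for illustration.
        from collections import Counter

        labels = ["god_class", "clean", "clean", "long_method", "clean",
                  "clean", "clean", "feature_envy", "clean", "clean"]

        counts = Counter("smelly" if l != "clean" else "clean" for l in labels)
        ratio = counts["smelly"] / counts["clean"]
        print(counts, f"imbalance ratio = {ratio:.2f}")  # 3 vs 7 -> 0.43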