
    Hatékony rendszer-szintű hatásanalízis módszerek és alkalmazásuk a szoftverfejlesztés folyamatában = Efficient whole-system impact analysis methods with applications in software development

    During software change impact analysis, we assess the consequences of changes made to a software system, which has important applications in, for instance, change propagation, cost estimation, software quality, and testing. We developed impact analysis methods that can be applied effectively and efficiently to large, heterogeneous, real-life applications. Previously available methods could provide results only in limited environments and for systems of limited size. Beyond enhancing existing static and dynamic slicing and dependence analysis algorithms, we achieved results in related areas such as metric-based investigation of dependences, conceptual coupling, quality models, and prediction of defects and productivity. These areas mostly support the application of the methods in practice. We also contributed novel results for special technologies, for instance, dependences in database systems and the analysis of low-level languages. Regarding applications of impact analysis, we developed novel methods for test optimization, test coverage measurement and prioritization, and change propagation. The developed methods provided the basis for further projects, including the extension of certain software products.
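
    At its core, dependence-based impact analysis can be framed as forward reachability over a dependence graph: the impact set of a change is everything that transitively depends on the changed entity. A minimal sketch of that idea follows, with an illustrative toy graph (not the project's actual algorithms or data structures):

```python
# Minimal sketch: the impact set of a change is the set of entities reachable
# from the changed entity in a dependence graph. Graph and names are illustrative.
from collections import deque

def impact_set(dependents: dict[str, set[str]], changed: str) -> set[str]:
    """Forward reachability: everything that (transitively) depends on `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):  # entities that depend on `node`
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Example: if f is modified, g and h may need re-analysis or re-testing.
deps = {"f": {"g"}, "g": {"h"}, "h": set()}
print(impact_set(deps, "f"))  # {'g', 'h'}
```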

    Analyzing the effects of formal methods on the development of industrial control software


    Connection between version control operations and quality change of the source code

    Software erosion is a well-known phenomenon: software quality continuously decreases due to the ever-ongoing modifications of the source code. In this research work we investigated this phenomenon by studying the impact of version control commit operations (add, update, delete) on code quality. We calculated the ISO/IEC 9126 quality attributes for thousands of revisions of an industrial and three open-source software systems with the help of the Columbus Quality Model, and we collected the cardinality of each version control operation type for every investigated revision. We performed Chi-squared tests on contingency tables whose rows were quality-change categories and whose columns were version control operation types, and compared the results with random data as well. We found that the relationship between version control operations and quality change is quite strong. Large maintainability improvements are mostly caused by commits containing add operations; commits containing only file updates tend to have a negative impact on quality; and deletions have only a weak connection with quality, for which we could not formulate a general statement.
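
    The statistical setup can be reproduced in miniature: a contingency table with quality-change categories as rows and commit-operation types as columns, tested for independence with a Chi-squared test. A hedged sketch with made-up counts (the paper's actual tables and categories may differ):

```python
# Sketch of the paper's statistical setup: Chi-squared test of independence
# between quality change and commit operation type. Counts below are invented.
import numpy as np
from scipy.stats import chi2_contingency

# rows: quality improved / unchanged / declined
# cols: commits containing Add / Update-only / Delete
table = np.array([
    [120,  40, 15],
    [300, 500, 60],
    [ 80, 210, 25],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}, dof={dof}")
# A small p-value indicates the operation mix and quality change are not independent.
```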

    Requirements Traceability: Recovering and Visualizing Traceability Links Between Requirements and Source Code of Object-oriented Software Systems

    Requirements traceability is an important activity for establishing an effective requirements management method in requirements engineering. Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between requirement and source code artifacts. RtC-TLs can help engineers know which parts of the code implement a specific requirement; these links also help engineers keep a correct mental model of the software and decrease the risk of code quality degradation when requirements change over time, especially in large and complex software. However, manually recovering and maintaining these TLs puts an additional burden on engineers and is an error-prone, tedious, and costly task. This paper introduces YamenTrace, an automatic approach and implementation to recover and visualize RtC-TLs in object-oriented software based on Latent Semantic Indexing (LSI) and Formal Concept Analysis (FCA). The originality of YamenTrace is that it exploits all code identifier names, comments, and relations in the TL recovery process. YamenTrace uses LSI to find textual similarity between software code and requirements, while FCA is employed to cluster similar code and requirements together. Furthermore, YamenTrace provides a visualization of the recovered TLs. To validate YamenTrace, it was applied to three case studies. The findings of this evaluation demonstrate the usefulness and performance of YamenTrace, as most RtC-TLs were correctly recovered and visualized.
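
    To illustrate the LSI step that YamenTrace builds on, the sketch below projects requirements and code documents (identifier names and comments flattened to text) into a latent space and ranks candidate links by cosine similarity. The corpus is a toy example and the FCA clustering step is omitted; this is not YamenTrace's actual implementation:

```python
# LSI-style traceability sketch: TF-IDF + truncated SVD, then cosine similarity
# between a requirement and code documents. Corpus and names are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["user can reset password via email"]
code_docs = [
    "class PasswordResetService send reset email token",  # identifiers + comments
    "class InvoicePrinter render pdf total",
]

tfidf = TfidfVectorizer().fit(requirements + code_docs)
svd = TruncatedSVD(n_components=2, random_state=0)
vecs = svd.fit_transform(tfidf.transform(requirements + code_docs))

sims = cosine_similarity(vecs[:1], vecs[1:])[0]
for doc, score in zip(code_docs, sims):
    print(f"{score:+.2f}  {doc}")  # higher score => stronger candidate trace link
```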

    Maintainability of classes in terms of bug prediction

    Measuring software product maintainability is a central issue in software engineering, one that has led to a number of practical quality models. Besides system-level assessments, it is also desirable that these models provide technical quality information at the source code element level (e.g. classes, methods) to aid the improvement of the software. Although many existing models give an ordered list of source code elements that should be improved, it is unclear how these elements relate to other important quality indicators of the system, e.g. bug density. In this paper we empirically investigate the bug prediction capabilities of the class-level maintainability measures of our ColumbusQM probabilistic quality model using the open-access PROMISE bug dataset. We show that, in terms of correctness and completeness, ColumbusQM competes with statistical and machine learning prediction models trained specifically on the bug data using product metrics as predictors. This is a notable achievement given that our model needs no training and serves a different purpose (e.g. estimating testability or development costs) than bug prediction models.
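
    The evaluation idea, correctness (precision) and completeness (recall) of a no-training quality score versus a classifier trained on product metrics, can be sketched as follows. All features, labels, and thresholds below are synthetic stand-ins, not the PROMISE data or ColumbusQM itself:

```python
# Hedged sketch: compare a no-training maintainability ranking against a
# trained classifier on the same class-level data. Everything here is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
metrics = rng.random((200, 5))               # product metrics per class (synthetic)
maintainability = 1 - metrics.mean(axis=1)   # proxy score: lower means worse quality
buggy = ((maintainability < 0.45) ^ (rng.random(200) < 0.1)).astype(int)  # noisy labels

# No-training "quality model": flag the 30% of classes with the worst maintainability.
flagged = (maintainability < np.quantile(maintainability, 0.3)).astype(int)
print("quality model precision/recall:",
      precision_score(buggy, flagged), recall_score(buggy, flagged))

# Trained baseline on the same product metrics (simple index-based split).
clf = DecisionTreeClassifier(random_state=0).fit(metrics[:150], buggy[:150])
pred = clf.predict(metrics[150:])
print("trained model precision/recall:",
      precision_score(buggy[150:], pred), recall_score(buggy[150:], pred))
```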

    srcSlice: very efficient and scalable forward static slicing

    A highly efficient, lightweight forward static slicing approach is presented and evaluated. The approach does not compute the program/system dependence graph; instead, dependence and control information is computed as needed while computing the slice on a variable. The result is a list of line numbers, dependent variables, aliases, and function calls that are part of the slice, for all variables (both local and global) in the entire system. The method is implemented as a tool, called srcSlice, on top of srcML, an XML representation of source code. The approach is highly scalable and can generate the slices for all variables of the Linux kernel in approximately 20 minutes on a typical desktop. Benchmark results are compared with the CodeSurfer slicing tool from GrammaTech Inc., and the approach compares well with regard to the accuracy of slices.
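
    The gist of the lightweight approach, collecting a slice by following def-use pairs forward in source order rather than building a dependence graph first, can be illustrated with a small sketch. The def/use table is a toy stand-in for what srcSlice extracts from srcML, not its actual data model:

```python
# Minimal forward-slicing sketch in the spirit of srcSlice: starting from a
# variable, follow def-use pairs forward and collect affected lines.
def forward_slice(defs_uses, variable):
    """defs_uses: list of (line, defined_var, used_vars). Returns slice lines."""
    tainted, slice_lines = {variable}, []
    for line, defined, used in defs_uses:        # process in source order
        if tainted & set(used) or defined in tainted:
            slice_lines.append(line)
            if defined:
                tainted.add(defined)             # value flows into `defined`
    return slice_lines

program = [
    (1, "a", []),         # a = input()
    (2, "b", ["a"]),      # b = a + 1
    (3, "c", []),         # c = 7
    (4, "d", ["b", "c"])  # d = b * c
]
print(forward_slice(program, "a"))  # [1, 2, 4]
```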