    The Survey of the Code Clone Detection Techniques and Process with Types (I, II, III and IV)

    Code clones are commonly introduced during software maintenance and upgrades, so the study of clone detection must go beyond the initial code. Reviewing the state of the art in clone research, we recognized the absence of a systematic survey; we therefore summarize prior research on the basis of a deliberate and broad database search and identify the gaps that call for further study. Software maintenance costs more than initial design. Code cloning is useful in several areas, such as detecting library candidates, program comprehension, and detecting malicious programs, but alongside these benefits it can seriously affect the quality, reusability, and continuity of a software system. In this paper, we discuss code clones, their evolution, and their classification. Code clones are classified into four types, namely Type I, Type II, Type III, and Type IV, and the original code and its copied counterpart are depicted in detail for each type. Several clone detection techniques, including text-, token-, metric-, and hybrid-based techniques, are studied comparatively; a comparison of detection tools such as CloneDR, Covet, Duploc, and CLAN is given according to the technique each uses, and the cloning process is also explained. Code clones are identical or near-identical segments of source code that may be introduced intentionally or unintentionally. Reusing code snippets by copying and pasting, with or without minor alterations, is a common task in software development, but the existence of code clones may degrade the design structure and quality of software, including its changeability, readability, and maintainability, and hence increase maintenance costs.
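
    To make the clone types concrete, here is a minimal, illustrative sketch of a token-based clone check in Python. It is not taken from any of the surveyed tools; the normalization rules, helper names, and example fragments are assumptions chosen only to show how Type-I and Type-II clones compare equal once whitespace, comments, identifiers, and literals are abstracted away.

    # Illustrative token-based clone check (a sketch, not one of the surveyed tools).
    import io
    import keyword
    import tokenize

    def normalized_tokens(source: str) -> list[str]:
        """Token sequence with layout, identifiers, and literals abstracted away."""
        result = []
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER):
                continue                      # whitespace/comment differences: Type I
            if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
                result.append("ID")           # renamed identifiers: Type II
            elif tok.type in (tokenize.NUMBER, tokenize.STRING):
                result.append("LIT")          # changed literals: Type II
            else:
                result.append(tok.string)     # keywords, operators, punctuation
        return result

    # Two fragments differing only in names and layout (a Type-II clone pair):
    a = "def area(w, h):\n    return w * h\n"
    b = "def size(x, y):\n    return x  *  y\n"
    print(normalized_tokens(a) == normalized_tokens(b))  # True -> reported as a clone

    Type-III clones (with added, removed, or changed statements) and Type-IV clones (same behaviour, different syntax) need more than token equality, which is where the metric- and hybrid-based techniques compared in the survey come in.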

    Connecting Software Metrics across Versions to Predict Defects

    Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In recent decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modeling techniques. However, widely used defect predictors such as code metrics and process metrics do not adequately describe how software modules change over the project's evolution, which we believe is important for defect prediction. To address this problem, we propose to use the Historical Version Sequence of Metrics (HVSM) across consecutive software versions as defect predictors. Furthermore, we leverage a Recurrent Neural Network (RNN), a popular modeling technique, which takes the HVSM as input to build defect prediction models. The experimental results show that, in most cases, the proposed HVSM-based RNN model has significantly better effort-aware ranking effectiveness than the commonly used baseline models.
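
    The abstract does not give implementation details, so the following is only a minimal sketch of the idea, assuming PyTorch and a single-layer LSTM: each module is represented by its HVSM, i.e. one vector of code/process metrics per past version, and the RNN's final hidden state is mapped to a defect-proneness score. Tensor shapes, layer sizes, and names are illustrative, not the authors' configuration.

    # Sketch: an RNN over a Historical Version Sequence of Metrics (HVSM).
    import torch
    import torch.nn as nn

    class HVSMDefectRNN(nn.Module):
        def __init__(self, n_metrics: int, hidden: int = 32):
            super().__init__()
            self.rnn = nn.LSTM(input_size=n_metrics, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)     # defect-proneness score per module

        def forward(self, hvsm: torch.Tensor) -> torch.Tensor:
            # hvsm: (batch, n_versions, n_metrics), one metric vector per past version
            _, (h_last, _) = self.rnn(hvsm)       # final hidden state summarizes the sequence
            return torch.sigmoid(self.head(h_last[-1])).squeeze(-1)

    # Toy usage: 8 modules, each observed over 5 versions with 20 metrics each.
    model = HVSMDefectRNN(n_metrics=20)
    scores = model(torch.randn(8, 5, 20))         # predicted defect probabilities, shape (8,)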

    Development and Assessment of a Movement Disorder Simulator Based on Inertial Data

    The detection and analysis of neurodegenerative diseases by means of low-cost sensors and suitable classification algorithms is a key part of the rapidly spreading telemedicine techniques. The choice of suitable sensors and the tuning of analysis algorithms require a large amount of data, which could be derived from a large experimental measurement campaign involving voluntary patients. This process requires a prior approval phase for the processing and use of sensitive data in order to respect patient privacy and ethical requirements. To obtain clearance from an ethics committee, it is necessary to submit a protocol describing the tests and wait for approval, which typically takes about six months. An alternative, at least for the initial stage of the research, is to structure, implement, validate, and adopt a software simulator. To this end, the paper proposes the development, validation, and usage of a software simulator able to generate movement-disorder-related data, for both healthy and pathological conditions, based on raw inertial measurement data, giving tri-axial acceleration and angular velocity as output. To present a possible operating scenario of the developed software, this work focuses on a specific case study, i.e., Parkinson's disease-related tremor, one of the main disorders of that pathology. The full framework is reported, from raw data availability to pathological data generation, along with the implementation of a common machine learning method to evaluate whether the data can be distinguished and classified. Owing to the flexibility and ease of use of the simulator, the paper also analyses and discusses data quality, described with typical measurement features, as a metric to allow accurate classification with a low-performance sensing device. The simulator's validation results show a correlation coefficient greater than 0.94 for angular velocity and greater than 0.93 for acceleration data. Classification performance on Parkinson's disease tremor was greater than 98% under the best test conditions.
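
    As a rough illustration of the simulator's core idea, the sketch below injects a sinusoidal rest-tremor component (around 4-6 Hz, as typically reported for Parkinsonian tremor) into healthy tri-axial inertial data and then scores agreement with a reference recording using a per-axis Pearson correlation, the same kind of metric the authors report for validation. The sampling rate, tremor frequency, amplitude, and all names here are assumptions for illustration only, not the paper's actual model.

    # Sketch: tremor injection into tri-axial inertial data plus correlation-based validation.
    import numpy as np

    FS = 100.0                                    # assumed sampling rate (Hz)

    def add_tremor(signal: np.ndarray, freq_hz: float = 5.0, amp: float = 0.3) -> np.ndarray:
        """Superimpose a sinusoidal tremor on each axis of an (n_samples, 3) signal."""
        t = np.arange(signal.shape[0]) / FS
        tremor = amp * np.sin(2 * np.pi * freq_hz * t)[:, None]   # same phase on all axes
        return signal + tremor

    def validation_correlation(simulated: np.ndarray, reference: np.ndarray) -> float:
        """Mean per-axis Pearson correlation between simulated and reference signals."""
        r = [np.corrcoef(simulated[:, k], reference[:, k])[0, 1] for k in range(3)]
        return float(np.mean(r))

    # Toy usage with random data standing in for healthy gyroscope recordings.
    healthy_gyro = np.random.randn(1000, 3) * 0.05
    simulated = add_tremor(healthy_gyro)
    reference = simulated + np.random.randn(1000, 3) * 0.02       # stand-in for a measured reference
    print(validation_correlation(simulated, reference))           # near 1 for a faithful simulator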