
    Detecting Similar Applications with Collaborative Tagging

    Get PDF
    Abstract—Detecting similar applications is useful for various purposes ranging from program comprehension and rapid prototyping to plagiarism detection, and many more. McMillan et al. have proposed a solution to detect similar applications based on common Java API usage patterns. Recently, collaborative tagging has impacted software development practices: various sites allow users to assign tags to software systems. In this study, we complement the study by McMillan et al. by leveraging another source of information aside from API usage patterns, namely software tags. We have performed a user study involving several participants, and the results show that collaborative tagging is a promising source of information for detecting similar software applications.
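    A minimal sketch of how such tag-based similarity might be computed is given below, assuming each application is simply represented by the set of tags users have assigned to it; the Jaccard coefficient and the example tag sets are illustrative assumptions, not necessarily the measure or data used in the study.

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch: similarity of two applications based on their user-assigned tags.
    public class TagSimilarity {

        // Jaccard coefficient: |intersection| / |union| of the two tag sets.
        static double jaccard(Set<String> tagsA, Set<String> tagsB) {
            if (tagsA.isEmpty() && tagsB.isEmpty()) {
                return 0.0;
            }
            Set<String> intersection = new HashSet<>(tagsA);
            intersection.retainAll(tagsB);
            Set<String> union = new HashSet<>(tagsA);
            union.addAll(tagsB);
            return (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            // Hypothetical tag sets for two applications.
            Set<String> appA = Set.of("java", "ide", "refactoring", "debugger");
            Set<String> appB = Set.of("java", "ide", "code-completion");
            System.out.printf("tag similarity = %.2f%n", jaccard(appA, appB));
        }
    }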

    Introduction of an Assistance System to Support Domain Experts in Programming Low-code to Leverage Industry 5.0

    Full text link
    The rapid technological leaps of Industry 4.0 increase the pressure and demands on humans working in automation, which is one of the main motivators of Industry 5.0. In particular, automation software development for mechatronic systems becomes increasingly challenging, as both domain knowledge and programming skills are required for high-quality, maintainable software. Especially for small companies from automation and robotics without dedicated software engineering departments, domain-specific low-code platforms that enable domain experts to develop code intuitively using visual programming languages, e.g., for tasks such as retrofitting mobile machines, become indispensable. However, for extensive functionalities, visual programs may become overwhelming due to the scaling-up problem. In addition, the ever-shortening time-to-market increases the time pressure on programmers. Thus, an assistance system concept is introduced that low-code platform suppliers can implement, based on combining data mining and static code analysis. Domain experts are supported in developing low-code through targeted recommendations, metric-based complexity measurement, and complexity reduction by encapsulating functionalities. The concept is implemented for the industrial low-code platform HAWE eDesign to program hydraulic components in mobile machines, and its benefits are confirmed in a user study and an industrial expert workshop. Comment: 8 pages, https://ieeexplore.ieee.org/abstract/document/983945
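    As a rough illustration of the metric-based complexity measurement mentioned above, the sketch below checks when a visual program has grown large enough that encapsulating functionality should be recommended; the block-and-connection representation, the size metric, and the threshold are hypothetical assumptions, not the metrics used by the HAWE eDesign assistance system.

    import java.util.List;

    // Hypothetical sketch: size-based complexity check for a visual (low-code) program.
    public class ComplexityCheck {

        record Block(String id, String type) {}
        record Connection(String fromBlock, String toBlock) {}

        // A simple size metric: number of blocks plus number of connections.
        static int size(List<Block> blocks, List<Connection> connections) {
            return blocks.size() + connections.size();
        }

        // Recommend encapsulating functionality once the program exceeds a threshold.
        static boolean recommendEncapsulation(List<Block> blocks, List<Connection> connections, int threshold) {
            return size(blocks, connections) > threshold;
        }

        public static void main(String[] args) {
            List<Block> blocks = List.of(
                    new Block("pump1", "Pump"),
                    new Block("valve1", "HydraulicValve"),
                    new Block("ctrl1", "Controller"));
            List<Connection> connections = List.of(
                    new Connection("ctrl1", "valve1"),
                    new Connection("pump1", "valve1"));
            System.out.println("size = " + size(blocks, connections)
                    + ", encapsulate? " + recommendEncapsulation(blocks, connections, 20));
        }
    }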

    SimNav: Simulink navigation of model clone classes

    Full text link
    SimNav is a GUI designed for displaying and navigating clone classes of Simulink models detected by the model clone detector Simone. As an embedded Simulink interface tool, SimNav allows model developers to explore detected clones directly in their own model development environment rather than in a separate research tool interface. SimNav allows users to open selected models for side-by-side comparison, in order to visually explore clone classes, view the differences between clone instances, and explore the context in which the clones exist. This tool paper describes the motivation, implementation, and use cases for SimNav.

    AN EFFICIENT METHOD-LEVEL CODE CLONE DETECTION SCHEME THROUGH TEXTUAL ANALYSIS USING METRICS

    Get PDF
    ABSTRACT Code cloning, the act of copying code fragments and making minor, non-functional alterations, is a well-known problem for evolving software systems and leads to duplicated code fragments known as code clones. A clone detection approach aims to find reused code fragments in an application and to handle the different types of clones identified by clone detection techniques. Since clone detection emerged, it has produced progressively better results by reducing complexity, and different clone detection tools make the detection process easier and produce efficient results. Many existing systems focus on line-by-line or token-based detection to find clones, which makes processing the entire source code time-consuming. Moreover, if a code fragment is not an exact copy but is functionally similar to another, such systems do not detect that type of clone. This paper proposes a combination of textual and metric analysis of source code for the detection of all types of clones in a given set of Java source code fragments. Various semantic metrics have been formulated and their values are used during the detection process. Combining these metrics with textual analysis reduces the complexity of finding clones and yields accurate results.
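    A minimal sketch of the idea, combining simple method-level metrics with a textual comparison, is shown below; the specific metrics (line and token counts), the normalization, and the thresholds are illustrative assumptions rather than the exact scheme proposed in the paper.

    // Hypothetical sketch: method-level clone candidates via metrics plus textual comparison.
    public class MethodCloneSketch {

        // Crude normalization: collapse whitespace and lowercase, to tolerate layout edits.
        static String normalize(String methodBody) {
            return methodBody.replaceAll("\\s+", " ").trim().toLowerCase();
        }

        static int lineCount(String methodBody) {
            return methodBody.split("\n").length;
        }

        static int tokenCount(String methodBody) {
            return normalize(methodBody).split("[^A-Za-z0-9_]+").length;
        }

        // Metrics act as a cheap filter; the normalized text comparison confirms the clone.
        static boolean isCloneCandidate(String a, String b) {
            boolean metricsClose = Math.abs(lineCount(a) - lineCount(b)) <= 2
                    && Math.abs(tokenCount(a) - tokenCount(b)) <= 5;
            return metricsClose && normalize(a).equals(normalize(b));
        }

        public static void main(String[] args) {
            String m1 = "int sum(int[] xs) {\n  int s = 0;\n  for (int x : xs) s += x;\n  return s;\n}";
            String m2 = "int sum(int[] xs) {\n    int s = 0;\n    for (int x : xs) s += x;\n    return s;\n}";
            System.out.println("clone candidate? " + isCloneCandidate(m1, m2));
        }
    }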

    Characterizing and Detecting Duplicate Logging Code Smells

    Get PDF
    Developers rely on software logs for a wide variety of tasks, such as debugging, testing, program comprehension, verification, and performance analysis. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often only considers the appropriateness of a log as an individual item (e.g., one single logging statement), whereas logs are typically analyzed in tandem. In this thesis, we focus on studying duplicate logging statements, which are logging statements that have the same static text message. Such duplications in the text message are potential indications of logging code smells, which may affect developers’ understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems: Hadoop, CloudStack, ElasticSearch, and Cassandra. We uncovered five patterns of duplicate logging code smells. For each instance of a code smell, we further manually identified the problematic (i.e., requiring fixes) and justifiable (i.e., not requiring fixes) cases. We then contacted developers to verify our manual study results. We integrated our manual study results and developers’ feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the four manually studied systems and four additional systems: Kafka, Flink, Camel, and Wicket. In total, combining the results of DLFinder and our manual analysis, we reported 91 problematic code smell instances to developers, and all of them have been fixed. This thesis provides an initial step toward creating a logging guideline for developers to improve the quality of logging code. DLFinder is also able to detect duplicate logging code smells with high precision and recall.
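    A simplified sketch of the core idea, grouping logging statements that share the same static text message, is given below; the regex-based extraction and the grouping are illustrative assumptions and not how DLFinder itself is implemented.

    import java.util.List;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    // Hypothetical sketch: report logging statements that share the same static text message.
    public class DuplicateLogSketch {

        record LogStatement(String file, int line, String staticText) {}

        // Very simplified extraction of the string literal passed to common logger calls.
        private static final Pattern LOG_CALL =
                Pattern.compile("log(?:ger)?\\.(?:info|warn|error|debug)\\(\"([^\"]*)\"");

        static List<LogStatement> extract(String file, List<String> lines) {
            return IntStream.range(0, lines.size())
                    .mapToObj(i -> {
                        Matcher m = LOG_CALL.matcher(lines.get(i));
                        return m.find() ? new LogStatement(file, i + 1, m.group(1)) : null;
                    })
                    .filter(s -> s != null)
                    .collect(Collectors.toList());
        }

        // Group by static text; groups with more than one statement are duplicate candidates.
        static Map<String, List<LogStatement>> duplicates(List<LogStatement> statements) {
            return statements.stream()
                    .collect(Collectors.groupingBy(LogStatement::staticText))
                    .entrySet().stream()
                    .filter(e -> e.getValue().size() > 1)
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }

        public static void main(String[] args) {
            List<LogStatement> stmts = extract("Example.java", List.of(
                    "logger.info(\"connection established\");",
                    "logger.warn(\"connection established\");",
                    "logger.error(\"unexpected state\");"));
            System.out.println(duplicates(stmts));
        }
    }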

    Model clone detection in practice

    Full text link
    fortiss gGmbH