1,469 research outputs found

    Refactorings of Design Defects using Relational Concept Analysis

    Software engineers often need to identify and correct design defects, i.e., recurring design problems that hinder development and maintenance by making programs harder to comprehend and/or evolve. While the detection of design defects is an actively researched area, their correction, mainly a manual and time-consuming activity, is yet to be extensively investigated for automation. In this paper, we propose an automated approach for suggesting defect-correcting refactorings using relational concept analysis (RCA). The added value of RCA consists in exploiting the links between formal objects, which abound in a software re-engineering context. We validated our approach on instances of the Blob design defect taken from four different open-source programs
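    Relational concept analysis extends formal concept analysis (FCA), which groups objects by the attributes they share into a lattice of "formal concepts". As a rough illustration of the underlying idea only (a toy sketch, not the paper's algorithm; the class and attribute names are invented):

```python
from itertools import combinations

# Toy formal context: objects (classes) mapped to the attributes they exhibit.
context = {
    "ClassA": {"large", "controller"},
    "ClassB": {"large", "data_only"},
    "ClassC": {"controller"},
}

def intent(objects):
    """Attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

def extent(attrs):
    """Objects possessing every attribute in the set."""
    return {o for o, s in context.items() if attrs <= s}

# Enumerate formal concepts as (extent, intent) pairs by closing each object subset.
concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for combo in combinations(objs, r):
        b = intent(set(combo))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))

for ext, inten in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), sorted(inten))
```

    In an RCA setting, the formal objects would be program entities (classes, methods) and the concept lattice exposes shared structure that can suggest where to move or extract code.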

    Multimodal Event Knowledge. Psycholinguistic and Computational Experiments


    How to automatically document data with the codebook package to facilitate data reuse


    Code smells detection and visualization: A systematic literature review

    Context: Code smells (CS) tend to compromise software quality and also demand more effort from developers to maintain and evolve the application throughout its life-cycle. They have long been catalogued together with corresponding mitigating solutions called refactoring operations. Objective: This SLR has a twofold goal: the first is to identify the main code smell detection techniques and tools discussed in the literature, and the second is to analyze to what extent visual techniques have been applied to support the former. Method: Over 83 primary studies indexed in major scientific repositories were identified by our search string in this SLR. Then, following existing best practices for secondary studies, we applied inclusion/exclusion criteria to select the most relevant works, extract their features and classify them. Results: We found that the most commonly used approaches to code smell detection are search-based (30.1%) and metric-based (24.1%). Most of the studies (83.1%) use open-source software, with the Java language occupying the first position (77.1%). In terms of code smells, God Class (51.8%), Feature Envy (33.7%), and Long Method (26.5%) are the most covered ones. Machine learning techniques are used in 35% of the studies. Around 80% of the studies only detect code smells, without providing visualization techniques. Visualization-based approaches use several methods, such as city metaphors and 3D visualization techniques. Conclusions: We confirm that the detection of CS is a non-trivial task, and there is still a lot of work to be done in terms of: reducing the subjectivity associated with the definition and detection of CS; increasing the diversity of detected CS and of supported programming languages; and constructing and sharing oracles and datasets to facilitate the replication of experiments validating CS detection and visualization techniques.
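    To make the "metric-based" category concrete, here is a minimal sketch of what such a detector looks like for the God Class smell. The metric names follow common usage (LOC, WMC, ATFD), but the thresholds and example classes below are illustrative assumptions, not values taken from the review:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    loc: int   # lines of code
    wmc: int   # weighted methods per class (sum of method complexities)
    atfd: int  # accesses to foreign data (reads of other classes' fields)

def is_god_class(m: ClassMetrics, loc_max=500, wmc_max=47, atfd_max=5) -> bool:
    """Flag classes that are large, complex, and reach into other classes' data."""
    return m.loc > loc_max and m.wmc > wmc_max and m.atfd > atfd_max

classes = [
    ClassMetrics("OrderManager", loc=1200, wmc=80, atfd=12),
    ClassMetrics("Money", loc=90, wmc=6, atfd=0),
]
smelly = [c.name for c in classes if is_god_class(c)]
print(smelly)  # ['OrderManager']
```

    Real detectors derive the metrics from parsed source code and often combine relative thresholds per project, which is one source of the subjectivity the review's conclusions point to.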


    Python for Archivists: Breaking Down Barriers Between Systems

    [Excerpt] Working with a multitude of digital tools is now a core part of an archivist’s skillset. We work with collection management systems, digital asset management systems, public access systems, ticketing or request systems, local databases, general web applications, and systems built on smaller systems linked through application programming interfaces (APIs). Over the past years, more and more of these applications have evolved to support a variety of archival processes. We no longer expect a single tool to solve all our needs and have embraced the “separation of concerns” design principle: smaller, problem-specific, modular systems are more effective than large monolithic tools that try to do everything. All of this has made the lives of archivists easier and empowered us to make our collections more accessible to our users. Yet this landscape can be difficult to manage. How do we get all of these systems, which rely on different software and use data in different ways, to talk to one another in ways that help, rather than hinder, our day-to-day tasks? How do we develop workflows that span these different tools while performing complex processes that are still compliant with archival theory and standards? How costly is it to maintain these relationships over time as our workflows evolve and grow? And how do we make all these new methods simple and easy to learn for new professionals, and keep archives from being even more esoteric?
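    The glue between such systems is often a small script that reads a record exported by one system and reshapes it for another. A minimal sketch of that kind of metadata crosswalk (the field names and target schema here are hypothetical, not from any specific archival system):

```python
import json

# Hypothetical export from a collection management system (field names are made up).
cms_record = json.loads("""{
    "identifier": "coll-042",
    "unittitle": "Faculty Papers",
    "unitdate": "1950/1975"
}""")

def crosswalk(record):
    """Reshape one record into the form a (hypothetical) public-access system expects."""
    start, _, end = record["unitdate"].partition("/")
    return {
        "id": record["identifier"],
        "title": record["unittitle"],
        "date_start": start,
        "date_end": end or start,  # single-date records reuse the start year
    }

print(json.dumps(crosswalk(cms_record), indent=2))
```

    In practice the input would come from an API call or an exported file rather than an inline string, but the pattern of parse, transform, and re-serialize is the same.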

    ARCTIC-3D: Automatic Retrieval and ClusTering of Interfaces in Complexes from 3D structural information

    The formation of a stable complex between proteins lies at the core of a wide variety of biological processes and has been the focus of countless experiments. The huge amount of information contained in the protein structural interactome in the Protein Data Bank can now be used to characterise and classify the existing biological interfaces. Here we introduce ARCTIC-3D, a fast and user-friendly data mining and clustering software to retrieve and rationalise the interface information associated with the input protein data. We demonstrate its use with various examples, ranging from showing the increased interaction complexity of eukaryotic proteins, 20% of which on average have more than 3 different interfaces compared to only 10% for prokaryotes, to associating different functions with different interfaces. In the context of modelling biomolecular assemblies, we introduce the concept of “recognition entropy”, related to the number of possible interfaces of the components of a protein-protein complex, which we demonstrate to correlate with the modelling difficulty. The identified interface clusters can also be used to generate various combinations of interface-specific restraints for integrative modelling. The ARCTIC-3D software is freely available at https://github.com/haddocking/arctic3d and can be accessed as a web service at https://wenmr.science.uu.nl/arctic-3
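    The abstract does not give the formula for “recognition entropy”, but one plausible reading (our assumption, not necessarily ARCTIC-3D's exact definition) is a Shannon entropy over how frequently each interface cluster of a protein is observed: a single dominant interface gives low entropy, many equally populated interfaces give high entropy, matching the stated link to modelling difficulty.

```python
import math

def recognition_entropy(cluster_counts):
    """Shannon entropy (bits) over interface-cluster frequencies (assumed definition)."""
    total = sum(cluster_counts)
    probs = [c / total for c in cluster_counts if c > 0]
    return sum(-p * math.log2(p) for p in probs)

print(recognition_entropy([10]))       # one interface only -> 0.0 bits
print(recognition_entropy([5, 5]))     # two equally used interfaces -> 1.0 bit
print(recognition_entropy([8, 1, 1]))  # skewed usage -> between 0 and log2(3)
```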

    On the Design of Social Media for Learning

    This paper presents two conceptual models that we have developed for understanding ways that social media can support learning. One model relates to the “social” aspect of social media, describing the different ways that people can learn with and from each other, in one or more of three social forms: groups, networks and sets. The other model relates to the “media” side of social media, describing how technologies are constructed and the roles that people play in creating and enacting them, treating them in terms of softness and hardness. The two models are complementary: neither provides a complete picture but, in combination, they help to explain how and why different uses of social media may succeed or fail and, as importantly, are intended to help us design learning activities that make the most effective use of the technologies. We offer some suggestions as to how media used to support different social forms can be softened and hardened for different kinds of learning applications.

    So, You Want to 3D Print a Landscape? An Outline of Some Methods

