9 research outputs found

    Software System Understanding via Architectural Views Extraction According to Multiple Viewpoints

    Changes and evolution of software systems constantly generate new challenges for the recovery of software system architectures. A system's architecture, together with its elements and the way they interact, constitutes a valuable asset for understanding the system. We believe that offering multiple architectural views of a given system, using domain and pattern knowledge, enhances understanding of the software system as a whole. To correlate different sources of information with the existing software system, different viewpoints are considered. Viewpoints enable one to model such information and guide the extraction algorithms to extract multiple architectural views. We propose a recursive framework, an approach that expresses different kinds of information as viewpoints to guide the extraction process. These multiple viewpoint models allow considering architectural, conceptual, and structural aspects of the system.

    Heuristics for Discovering Architectural Violations

    Software architecture conformance is a key software quality control activity that aims to reveal the progressive gap normally observed between concrete and planned software architectures. In this paper, we present ArchLint, a lightweight approach for architecture conformance based on a combination of static and historical source code analysis. For this purpose, ArchLint relies on four heuristics for detecting both absences and divergences in source-code-based architectures. We applied ArchLint to an industrial-strength system and detected 119 architectural violations, with an overall precision of 46.7% and a recall of 96.2% for divergences. We also evaluated ArchLint with four open-source systems used in an independent study on reflexion models. In this second study, ArchLint achieved precision results ranging from 57.1% to 89.4%.
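    The paper's four heuristics are not reproduced in the abstract; the Python sketch below (all names hypothetical, not ArchLint's actual rules) illustrates the general shape of one such check: a cross-module dependency counts as a divergence when the planned architecture disallows it, with a rarity signal standing in for the paper's historical source code analysis.

        # Hypothetical sketch, not ArchLint's actual heuristics: a cross-module
        # dependency is flagged as a divergence when the planned architecture
        # disallows it and few sibling classes in the same module share it
        # (rarity stands in for the paper's historical source code analysis).
        from collections import defaultdict

        def find_divergences(deps, module_of, allowed, rarity_threshold=0.1):
            """deps: iterable of (source_class, target_class) pairs.
            module_of: dict mapping class name -> module name.
            allowed: set of (source_module, target_module) pairs the plan permits."""
            users_of_target = defaultdict(set)    # (src module, target) -> users
            classes_in_module = defaultdict(set)
            for src, tgt in deps:
                users_of_target[(module_of[src], tgt)].add(src)
                classes_in_module[module_of[src]].add(src)
            divergences = []
            for src, tgt in deps:
                sm, tm = module_of[src], module_of[tgt]
                if sm != tm and (sm, tm) not in allowed:
                    ratio = len(users_of_target[(sm, tgt)]) / len(classes_in_module[sm])
                    if ratio <= rarity_threshold:  # few siblings do the same
                        divergences.append((src, tgt))
            return divergences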

    Preparing Software Re-Engineering via Freehand Sketches in Virtual Reality

    Re-architecting a software system requires significant preparation, e.g., to scope and design new modules with their boundaries and constituent classes. When planning an intended future state of a system as a re-engineering goal, engineers often resort to mechanisms such as freehand sketching (using a whiteboard). While this ensures flexibility and expressiveness, the sketches remain disconnected from the source code. Tool-supported diagramming, on the other hand, considerably restricts flexibility and impedes free-form communication. We present a method for preparing architectural software re-engineering via freehand sketches in virtual reality (VR) that can be seamlessly integrated with the model structure of a software visualization, and thus also with the code of a system, for productive use: engineers explore a subject system in the immersive visualization while freehand sketching their insights and plans. Our concept automatically interprets sketched shapes, connects them to the system's source code, and superimposes code-level references onto a sketch to support engineers in reflecting on their sketches. We evaluated our method in an iterative interview-based case study with software developers from four different companies, where they planned a hypothetical re-engineering of an open-source software system. Video demonstration: https://youtu.be/NKC5YpH3n4

    Real-Time Reflexion Modelling in architecture reconciliation: A multi case study

    Context: Reflexion Modelling is considered one of the more successful approaches to architecture reconciliation. Empirical studies strongly suggest that professional developers involved in real-life industrial projects find the information provided by variants of this approach useful and insightful, but the degree to which it resolves architecture conformance issues is still unclear. Objective: This paper aims to assess the level of architecture conformance achieved by professional architects using Reflexion Modelling, and to determine how the approach could be extended to improve its suitability for this task. Method: An in vivo, multi-case-study protocol was adopted across five software systems from four different financial services organizations. Think-aloud, videotape, and interview data from professional architects involved in Reflexion Modelling sessions were analysed qualitatively. Results: This study showed that (at least) four months after the Reflexion Modelling sessions, less than 50% of the architectural violations identified had been removed. The majority of participants who did remove violations favoured changes to the architectural model rather than to the code. Participants seemed to work from two specific architectural templates, and interactively explored their architectural model to focus in on the causes of violations and to assess the ramifications of potential code changes. They expressed a desire for dependency analysis beyond static source code analysis, and for scalable visualizations. Conclusion: The findings support several interesting usage-in-practice traits previously hinted at in the literature. These include (1) the iterative analysis of systems through Reflexion models, as a precursor to possible code change or as a focusing mechanism to identify the location of architecture conformance issues, (2) the extension of the approach with respect to dependency analysis of software systems and architectural modelling templates, (3) improved visualization support, and (4) the insight that identification of architectural violations does not in itself lead to their removal in the majority of instances. This work was supported, in part, by Science Foundation Ireland Grants 12/IP/1351 and 10/CE/I1855 to Lero – the Irish Software Engineering Research Centre (www.lero.ie) and by the University of Brighton under the Rising Star Scheme awarded to Nour Ali.
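    For readers unfamiliar with the technique, a reflexion model compares code-level dependencies, lifted through an engineer-supplied code-to-model mapping, against the planned architecture. A minimal Python sketch of that classification (hypothetical names; it assumes every code entity is mapped to a module):

        def reflexion(code_deps, mapping, model_deps):
            """code_deps: set of (entity, entity) pairs extracted from source.
            mapping: dict entity -> high-level module (every entity mapped).
            model_deps: set of (module, module) pairs the architect planned."""
            lifted = {(mapping[a], mapping[b]) for a, b in code_deps
                      if mapping[a] != mapping[b]}
            convergences = lifted & model_deps   # planned and present
            divergences = lifted - model_deps    # present but not planned
            absences = model_deps - lifted       # planned but missing
            return convergences, divergences, absences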

    Introduction of static quality analysis in small- and medium-sized software enterprises: experiences from technology transfer

    Today, small- and medium-sized enterprises (SMEs) in the software industry face major challenges. Their resource constraints require high efficiency in development. Furthermore, quality assurance (QA) measures need to be taken to mitigate the risk of additional, expensive effort for bug fixes or compensations. Automated static analysis (ASA) can reduce this risk because it promises low application effort. SMEs seem to take little advantage of this opportunity; instead, they still rely mainly on the dynamic analysis approach of software testing. In this article, we report on our experiences from a technology transfer project. Our aim was to evaluate the results static analysis can provide for SMEs as well as the problems that occur when introducing and using static analysis in SMEs. We analysed five software projects from five collaborating SMEs using three different ASA techniques: code clone detection, bug pattern detection, and architecture conformance analysis. Following the analysis, we applied a quality model to aggregate and evaluate the results. Our study shows that the effort required to introduce ASA techniques in SMEs is small (mostly below one person-hour each). Furthermore, we encountered only a few technical problems. By means of the analyses, we could detect multiple defects in production code. The participating companies perceived the analysis results to be a helpful addition to their current QA and will include the analyses in their QA process. With the help of the Quamoco quality model, we could efficiently aggregate and rate static analysis results. However, we also encountered a partial mismatch with the opinions of the SMEs. We conclude that ASA and quality models can be a valuable and affordable addition to the QA process of SMEs.
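    As a flavour of how lightweight the simplest of these techniques can be, the toy Python sketch below (not the tooling used in the study; all names hypothetical) detects candidate code clones by indexing fixed-size windows of normalized source lines:

        from collections import defaultdict

        def find_clones(files, window=6):
            """files: dict path -> source text. Reports every window of
            `window` normalized lines that occurs in more than one place."""
            seen = defaultdict(list)
            for path, text in files.items():
                lines = [ln.strip() for ln in text.splitlines()]
                for i in range(len(lines) - window + 1):
                    chunk = "\n".join(lines[i:i + window])
                    if chunk.strip():             # ignore all-blank windows
                        seen[chunk].append((path, i + 1))
            return {k: v for k, v in seen.items() if len(v) > 1}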

    Security-Pattern Recognition and Validation

    The increasing and diverse number of technologies that are connected to the Internet, such as distributed enterprise systems or small electronic devices like smartphones, brings the topic of IT security to the foreground. We interact with these technologies daily and place a great deal of trust in a well-established software development process. However, security vulnerabilities appear in software on all kinds of PC(-like) platforms, and more and more vulnerabilities are published that compromise systems and their users. Software thus has to be modified due to changing requirements, bugs, and security flaws, and software engineers must increasingly face security issues during software design; maintenance programmers in particular must deal with such issues after the software has been released. In the domain of software development, design patterns have been proposed as the best-known solutions for recurring problems in software design. Analogously, security patterns are best practices aiming at ensuring security. This thesis develops a deeper understanding of the nature of security patterns. It focuses on their validation and detection in support of review and maintenance activities. The landscape of security patterns is diverse. Published security patterns are therefore collected and organized to identify software-related security patterns. The description of the selected software security patterns is assessed, and they are compared against the common design patterns described by Gamma et al. to identify differences and issues that may influence the detection of security patterns. Based on these insights and a manual detection approach, we illustrate an automatic detection method for security patterns. The approach is implemented in a tool and evaluated in a case study with 25 real-world Android applications from Google Play.

    Uniform Quality Measures for Graph Clusterings, Layouts, and Orderings, and Their Application as Software Design Criteria

    How good is a given graph clustering, graph layout, or graph ordering -- specifically, how well does it group densely connected vertices and separate sparsely connected vertices? How good is a given software design -- specifically, how well does it minimize the interdependence of the subsystems? This work introduces and validates simple and uniform measures for these two properties. Together with existing optimization algorithms, the introduced measures enable the automatic computation, e.g., of communities in social networks and of design flaws in software systems. The first part derives, validates, and unifies quality measures for graph clusterings, graph layouts, and graph orderings, with the following results:
    - Identical quality measures can be applied to clusterings, layouts, and orderings; this enables the computation of consistent clusterings, layouts, and orderings.
    - Diverse existing and new measures can be unified into few general measures; this facilitates their comparison and validation.
    - Many existing measures are biased towards certain clusterings, layouts, or orderings, even for graphs without particularly dense or sparse subgraphs, and thus do not (only) measure quality in the above sense.
    - For example graphs, the minimization of new, unbiased (or weakly biased) measures reveals non-obvious groups, e.g. communities in social networks, subject areas in hypertexts, or closely interlocked countries in international trade.
    The second part derives, validates, and unifies dependency-based indicators of software design quality. It applies two quality measures for graph clusterings as measures for the coupling of software subsystems -- specifically for the coupling indicated by common changes and for the coupling indicated by references -- and shows:
    - The measures quantify the dependency-caused development costs, under well-defined simplifying assumptions.
    - The minimization of the measures conforms to existing dependency-related design principles (like locality of change, acyclicity of references, and stability of references), design rules, and design patterns.
    - For example software systems, the incremental minimization of the measures reveals non-obvious design flaws, like the distribution of coherent responsibilities over several subsystems, or references from low-level to high-level subsystems.
    In summary, this work shows that simple measures can suffice to capture important aspects of graph clustering quality, graph layout quality, graph ordering quality, and software design quality, and that the optimization of simple measures can suffice to detect non-obvious and often useful structure in various real-world systems.
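    The thesis's own uniform measures are not spelled out in the abstract; as a well-known instance of the same idea (reward dense intra-cluster edges relative to what a random graph would yield), the Python sketch below computes Newman-Girvan modularity for a given clustering:

        from collections import defaultdict

        def modularity(edges, cluster_of):
            """edges: list of undirected (u, v) pairs; cluster_of: node -> cluster.
            Higher values mean denser intra-cluster connectivity than chance."""
            m = len(edges)
            if m == 0:
                return 0.0
            intra = defaultdict(int)     # edges inside each cluster
            degree = defaultdict(int)    # summed vertex degrees per cluster
            for u, v in edges:
                degree[cluster_of[u]] += 1
                degree[cluster_of[v]] += 1
                if cluster_of[u] == cluster_of[v]:
                    intra[cluster_of[u]] += 1
            return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)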

    Scalable Automated Incrementalization for Real-Time Static Analyses

    This thesis proposes a framework for the easy development of static analyses whose results are incrementalized to provide instantaneous feedback in an integrated development environment (IDE). Today, IDEs feature many tools that have static analyses as their foundation to assess software quality and catch correctness problems. Yet, these tools often fail to provide instantaneous feedback and are thus restricted to nightly build processes. This precludes developers from fixing issues at their inception time, i.e., when the problem and the developed solution are both still fresh in mind. To provide instantaneous feedback, incrementalization is a well-known technique that exploits the fact that developers make only small changes to the code; hence, analysis results can be re-computed quickly based on these changes. Yet, incrementalization requires carefully crafted static analyses, so a manual approach to incrementalization is unattractive. Automated incrementalization can alleviate these problems and allows analysis writers to formulate their analyses as queries with the full data set in mind, without worrying about the semantics of incremental changes. Existing approaches to automated incrementalization utilize standard technologies, such as deductive databases, that provide declarative query languages yet also require materializing the full dataset in main memory, i.e., the memory is permanently blocked by the data required for the analyses. Other standard technologies, such as relational databases, offer better scalability due to persistence, yet incur large transaction times. Neither technology is a perfect match for integrating static analyses into an IDE, since the underlying data, i.e., the code base, is already persisted and managed by the IDE; transferring the data into a database is redundant work.
    In this thesis, a novel approach is proposed that provides a declarative query language and automated incrementalization, yet retains in memory only the minimum of data required for the incrementalization. The approach allows static analyses to be declared as incrementally maintained views, where the underlying formalism for incrementalization is the relational algebra with extensions for object-orientation and recursion. The algebra makes it possible to deduce which data is the necessary minimum for incremental maintenance and indeed shows that many views are self-maintainable, i.e., do not require materializing any data at all. In addition, an optimization for the algebra is proposed that widens the range of self-maintainable views, based on domain knowledge of the underlying data. The optimization works similarly to declaring primary keys for databases, i.e., it is declared on the schema of the data and defines which data is incrementally maintained in the same scope. The scope makes all analyses (views) that correlate only data within the boundaries of the scope self-maintainable. The approach is implemented as an embedded domain-specific language in a general-purpose programming language. The implementation can be understood as a database-like engine with an SQL-style query language and the execution semantics of the relational algebra. As such, the system is a general-purpose database-like query engine and can be used to incrementalize domains other than static analyses. To evaluate the approach, a large variety of static analyses was sampled from real-world tools and formulated as incrementally maintained views in the implemented engine.
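    To make the notion of a self-maintainable view concrete, the Python sketch below (hypothetical names; a drastic simplification of the thesis's relational-algebra machinery) maintains a selection view purely from insert and delete events, retaining only its own rows:

        class SelectionView:
            """Rows of a base relation matching `pred`, kept current from
            change events alone; no copy of the base relation is retained."""
            def __init__(self, pred):
                self.pred = pred
                self.rows = set()

            def on_insert(self, row):    # base relation gained a row
                if self.pred(row):
                    self.rows.add(row)

            def on_delete(self, row):    # base relation lost a row
                self.rows.discard(row)

        # Toy usage: a "long methods" analysis over (method, line_count) facts.
        view = SelectionView(lambda r: r[1] > 100)
        view.on_insert(("Parser.parse", 240))    # feedback updates instantly
        view.on_insert(("Util.trim", 12))        # filtered out, never stored
        view.on_delete(("Parser.parse", 240))    # view shrinks without rescan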