Cascaded refactoring for framework development and evolution
This thesis addresses three problems of framework development and evolution: identification and realization of variability, framework evolution, and framework documentation. A solution, called the cascaded refactoring methodology, is proposed and validated by a case study, the Know-It-All framework for relational Database Management Systems. The cascaded refactoring methodology views framework development as framework evolution, which consists of framework refactoring followed by framework extension. A framework is specified by a set of models: feature model, use case model, architectural model, design model, and source code. Framework refactoring is achieved by a set of refactorings cascaded from the feature model to the use case model, architectural model, design model, and source code. The constraints of the refactorings on a model are derived from the refactorings performed on the preceding model, and alignment maps are defined to maintain traceability amongst the models. The thesis broadens the refactoring concept from the design and source code level to include the feature model, use case model, and architectural model. Metamodels and refactorings are defined for the feature model and the architectural model, and a document template is proposed to document the framework refactorings.
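The cascade described above can be sketched in miniature: a refactoring applied at the feature-model level is propagated to each downstream model, with an alignment map translating names between levels. The model representation, the `cascade_rename` operation, and the alignment-map keys below are illustrative assumptions, not the thesis's actual formalism.

```python
# Illustrative sketch of cascaded refactoring: a rename on the feature
# model constrains the rename applied to every downstream model, and an
# alignment map traces feature-level names to their design/source names.
MODELS = ["feature", "use_case", "architecture", "design", "source"]

def cascade_rename(models, alignment, old, new):
    """Apply a rename on the feature model and cascade it downstream."""
    for level in MODELS:
        # The alignment map translates a feature-level name into the
        # name used at this level (identity if no entry exists).
        mapped_old = alignment.get((level, old), old)
        mapped_new = alignment.get((level, new), new)
        models[level] = [mapped_new if e == mapped_old else e
                         for e in models[level]]
    return models

models = {"feature": ["Query"], "use_case": ["Query"],
          "architecture": ["Query"], "design": ["QueryClass"],
          "source": ["QueryClass"]}
alignment = {("design", "Query"): "QueryClass",
             ("design", "SqlQuery"): "SqlQueryClass",
             ("source", "Query"): "QueryClass",
             ("source", "SqlQuery"): "SqlQueryClass"}
models = cascade_rename(models, alignment, "Query", "SqlQuery")
print(models["design"])  # ['SqlQueryClass']
```

The alignment map is what keeps traceability: without the `("design", "Query")` entry, the design-level element would silently fall out of sync with the feature model.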
Detecting and Refactoring Operational Smells within the Domain Name System
The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between the name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the qualities of the DNS, and they need to be made soundly to strike a balance between the availability, security, and resilience of the system. We utilize dependency graphs to identify, detect, and catalogue operational bad smells. Our method deals with smells at a high level of abstraction, using a consistent taxonomy and reusable vocabulary defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that detects configuration changes that might decrease the robustness or security posture of domain names before they go into production.
(Comment: In Proceedings GaM 2015, arXiv:1504.0244)
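To make the dependency-graph idea concrete, here is a minimal sketch of one plausible operational smell over such a graph: a zone whose name-server resolution depends, transitively, back on itself. The zone names and the smell itself are illustrative assumptions; they are not taken from the paper's DNS Operational Model.

```python
# Hypothetical sketch: model inter-zone name-server dependencies as a
# directed graph and flag a cyclic-dependency smell via DFS coloring.
def find_cycle(deps):
    """Return True if the zone dependency graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {z: WHITE for z in deps}

    def visit(z):
        color[z] = GRAY
        for nxt in deps.get(z, ()):
            if color.get(nxt, WHITE) == GRAY:
                return True  # back edge: resolution loops onto itself
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[z] = BLACK
        return False

    return any(visit(z) for z in deps if color[z] == WHITE)

# example.com's name server lives in a zone whose own resolution
# depends (transitively) back on example.com -- a resilience smell.
deps = {
    "example.com": ["ns-host.net"],
    "ns-host.net": ["example.com"],
}
print(find_cycle(deps))  # True
```

In practice such cycles are resolvable only because of cached glue records, which is exactly why they reduce robustness: the configuration works until the cache does not.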
Recovering Grammar Relationships for the Java Language Specification
Grammar convergence is a method that helps discover relationships between different grammars of the same language or of different language versions. The key element of the method is the operational, transformation-based representation of those relationships: given input grammars for convergence, they are transformed until they are structurally equal. The transformations are composed from primitive operators; properties of these operators and of the composed chains provide quantitative and qualitative insight into the relationships between the grammars at hand. We describe a refined method for grammar convergence and use it in a major study in which we recover the relationships between all the grammars that occur in the different versions of the Java Language Specification (JLS). The relationships are represented as grammar transformation chains that capture all accidental or intended differences between the JLS grammars. The method is mechanized and driven by nominal and structural differences between pairs of grammars that are subject to asymmetric, binary convergence steps. We present the underlying operator suite for grammar transformation in detail and illustrate it with many examples of transformations on the JLS grammars. We also describe the extraction effort that was needed to make the JLS grammars amenable to automated processing, and we include substantial metadata about the convergence process for the JLS so that the effort becomes reproducible and transparent.
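The transform-until-structurally-equal idea can be sketched with a toy grammar representation and a single primitive operator. The `rename` operator and the dict-of-productions encoding below are illustrative assumptions, not the paper's actual operator suite or grammar format.

```python
# Illustrative sketch of convergence-by-transformation: grammars as
# dicts mapping a nonterminal to its list of productions; a primitive
# "rename" operator; two grammars converge once structurally equal.
def rename(grammar, old, new):
    """Primitive operator: rename a nonterminal everywhere it occurs."""
    out = {}
    for lhs, alts in grammar.items():
        out[new if lhs == old else lhs] = [
            [new if sym == old else sym for sym in alt] for alt in alts
        ]
    return out

# Two hypothetical grammar versions that differ only nominally.
g_v1 = {"Expr": [["Term"], ["Expr", "+", "Term"]], "Term": [["id"]]}
g_v2 = {"Expression": [["Term"], ["Expression", "+", "Term"]],
        "Term": [["id"]]}

# A one-step transformation chain closes the nominal difference.
chain = [("rename", "Expression", "Expr")]
for _, old, new in chain:
    g_v2 = rename(g_v2, old, new)

print(g_v1 == g_v2)  # True
```

The chain itself is the recovered relationship: its length and the kinds of operators it uses quantify how far apart the two grammar versions are.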
Verifying big data topologies by-design: a semi-automated approach
Big data architectures have been gaining momentum in recent years. For instance, Twitter uses stream processing frameworks like Apache Storm to analyse billions of tweets per minute and learn the trending topics. However, architectures that process big data involve many different components interconnected via semantically different connectors. Such complex architectures make refactoring the applications a difficult task for software architects, as applications might diverge considerably from their initial designs. As an aid to designers and developers, we developed OSTIA (Ordinary Static Topology Inference Analysis), which allows detecting the occurrence of common anti-patterns across big data architectures and exploiting software verification techniques on the elicited architectural models. This paper illustrates OSTIA and evaluates its uses and benefits on three industrial-scale case studies.
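A static topology check of the kind described can be sketched as a graph query. The fan-in threshold and the "bottleneck" pattern below are illustrative assumptions in the spirit of OSTIA, not one of the tool's actual anti-pattern definitions.

```python
# Hedged sketch of a static topology check: represent a stream
# topology as directed edges between components and flag components
# whose fan-in exceeds a threshold, a plausible bottleneck anti-pattern.
from collections import Counter

def fan_in_bottlenecks(edges, threshold=3):
    """Return components receiving more than `threshold` input streams."""
    fan_in = Counter(dst for _, dst in edges)
    return sorted(c for c, n in fan_in.items() if n > threshold)

# Hypothetical Storm-like topology: four streams converge on one bolt.
edges = [("spout", "parse"), ("parse", "join"), ("enrich", "join"),
         ("dedupe", "join"), ("filter", "join")]
print(fan_in_bottlenecks(edges))  # ['join']
```

Because the check runs on the inferred model rather than the running system, it can be applied at design time, before a problematic topology is ever deployed.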
SQLCheck: Automated Detection and Diagnosis of SQL Anti-Patterns
The emergence of database-as-a-service platforms has made deploying database
applications easier than before. Now, developers can quickly create scalable
applications. However, designing performant, maintainable, and accurate
applications is challenging. Developers may unknowingly introduce anti-patterns
in the application's SQL statements. These anti-patterns are design decisions
that are intended to solve a problem, but often lead to other problems by
violating fundamental design principles.
In this paper, we present SQLCheck, a holistic toolchain for automatically
finding and fixing anti-patterns in database applications. We introduce
techniques for automatically (1) detecting anti-patterns with high precision
and recall, (2) ranking the anti-patterns based on their impact on performance,
maintainability, and accuracy of applications, and (3) suggesting alternative
queries and changes to the database design to fix these anti-patterns. We
demonstrate the prevalence of these anti-patterns in a large collection of
queries and databases collected from open-source repositories. We introduce an
anti-pattern detection algorithm that augments query analysis with data
analysis. We present a ranking model for characterizing the impact of
frequently occurring anti-patterns. We discuss how SQLCheck suggests fixes for
high-impact anti-patterns using rule-based query refactoring techniques. Our
experiments demonstrate that SQLCheck enables developers to create more
performant, maintainable, and accurate applications.
(Comment: 18 pages (14 page paper, 1 page references, 2 page Appendix), 12 figures, Conference: SIGMOD'2
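The rule-based side of such a toolchain can be sketched with a single detection rule and its suggested rewrite. The rule table, the `check` function, and the fix text below are illustrative assumptions; they are not SQLCheck's actual rules or API, and SQLCheck additionally augments this kind of query analysis with data analysis.

```python
# Minimal sketch of rule-based anti-pattern detection with a suggested
# fix, loosely in the style described above. One well-known SQL
# anti-pattern, "implicit columns" (SELECT *), serves as the example.
import re

RULES = [
    ("implicit_columns",
     re.compile(r"select\s+\*", re.IGNORECASE),
     "Enumerate the needed columns instead of SELECT *"),
]

def check(sql):
    """Return (rule name, suggested fix) for every rule the query trips."""
    return [(name, fix) for name, pat, fix in RULES if pat.search(sql)]

print(check("SELECT * FROM users WHERE id = 1"))
# [('implicit_columns', 'Enumerate the needed columns instead of SELECT *')]
```

A real detector would parse the query rather than pattern-match it, and would rank hits by their measured impact; the sketch only shows the detect-then-suggest shape of the pipeline.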
Scalable secure multi-party network vulnerability analysis via symbolic optimization
Threat propagation analysis is a valuable tool in improving the cyber resilience of enterprise networks. As these networks are interconnected and threats can propagate not only within but also across networks, a holistic view of the entire network can reveal threat propagation trajectories unobservable from within a single enterprise. However, companies are reluctant to share internal vulnerability measurement data, as it is highly sensitive and possibly damaging if leaked. Secure Multi-Party Computation (MPC) addresses this concern. MPC is a cryptographic technique that allows distrusting parties to compute analytics over their joint data while protecting its confidentiality. In this work we apply MPC to threat propagation analysis on large, federated networks. To address the prohibitively high performance cost of general-purpose MPC, we develop two novel applications of optimizations that can be leveraged to execute many relevant graph algorithms under MPC more efficiently: (1) dividing the computation into separate stages such that the first stage is executed privately by each party without MPC and the second stage is an MPC computation dealing with a much smaller shared network, and (2) optimizing the second stage by treating the execution of the analysis algorithm as a symbolic expression that can be optimized to reduce the number of costly operations and subsequently executed under MPC. We evaluate the scalability of this technique by analyzing the potential for threat propagation on examples of network graphs, and we propose several directions along which this work can be expanded.
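The first of the two optimizations, stage separation, can be sketched without any cryptography: each party privately collapses its internal network to reachability between its "border" hosts, and only that much smaller graph would enter the joint computation. The border-host terminology and the `local_stage` function are illustrative assumptions; the joint stage, which the paper runs under MPC, is not shown here.

```python
# Hedged sketch of the local (non-MPC) stage only: collapse a party's
# internal threat-propagation graph to reachability between the hosts
# visible to other parties, so the shared stage sees a smaller network.
def local_stage(internal_edges, border):
    """Collapse a party's network to border-to-border reachability."""
    adj = {}
    for u, v in internal_edges:
        adj.setdefault(u, set()).add(v)

    def reach(src):
        seen, stack = set(), [src]
        while stack:
            n = stack.pop()
            for m in adj.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    # Keep only edges between border hosts; internal host x disappears.
    return {(a, b) for a in border for b in reach(a) if b in border}

# Party A: a threat can hop a -> x -> b internally; only a and b are
# border hosts, so the shared network receives the single edge (a, b).
small_a = local_stage([("a", "x"), ("x", "b")], border={"a", "b"})
print(("a", "b") in small_a)  # True
```

The payoff is that the expensive MPC stage operates on the union of these condensed border graphs instead of the full federated network.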
Experiences on Managing Technical Debt with Code Smells and AntiPatterns
Technical debt has become a common metaphor for the accumulation of software design and implementation choices that seek fast initial gains but are under par and counterproductive in the long run. However, as a metaphor, technical debt does not offer actionable advice on how to get rid of it. To get to a practical level in solving problems, more focused mechanisms are needed. Commonly used approaches for this include identifying code smells as quick indications of possible problems in the codebase and detecting the presence of AntiPatterns that refer to overt, recurring problems in design. There are known remedies for both code smells and AntiPatterns. In this paper, our goal is to show how to effectively use common tools and the existing body of knowledge on code smells and AntiPatterns to detect technical debt and pay it back. We present two main results: (i) how a combination of static code analysis and manual inspection was used to detect code smells in a codebase, leading to the discovery of AntiPatterns; and (ii) how AntiPatterns were used to identify, characterize, and fix problems in the software. The experiences stem from a private company and its long-lasting software product development effort.
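The "static code analysis first" step described above can be sketched with one classic smell detector. The Long Method smell and the 15-line threshold are common conventions assumed here for illustration; they are not taken from the paper or its tooling.

```python
# Sketch of a static smell check: walk a Python module's AST and flag
# functions longer than a threshold as Long Method candidates, the kind
# of quick indicator that manual inspection would then follow up on.
import ast

def long_methods(source, max_lines=15):
    """Return (name, length) for functions exceeding `max_lines` lines."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                smells.append((node.name, length))
    return smells

code = "def tiny():\n    return 1\n"
print(long_methods(code))  # []
```

As the paper's workflow suggests, a cluster of such flags in one module is only the starting point: the interesting finding is the design-level AntiPattern that explains why the smells accumulated there.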