    Classification of Language Interactions

    Context: the presence of several languages interacting with each other within the same project is an almost universal feature of software development. Earlier work shows that this interaction can be a source of problems. Goal: we aim to identify and characterize cross-language interactions at the semantic level among artifacts written in different languages. Method: we took the commits of an open source project and analyzed the cross-language pairs of files occurring in the same commit to identify possible semantic interactions; we both defined a taxonomy and applied it. Results: we identified 6 categories of semantic interactions. The most common category is based on shared identifiers; the next is when one artifact provides a description of another artifact. Conclusions: a deeper knowledge of cross-language interactions is the basis for implementing a tool that supports the management of this kind of interaction and the detection of related problems at compile time.
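
    As an illustration of the mining step described above, the sketch below pairs up files from one commit whose languages differ. It is a minimal sketch under assumed inputs (an in-memory list of changed file paths and a small extension-to-language map); the study's actual tooling and language classification are not described here.

        # Minimal sketch, assuming commits are available as lists of changed file paths
        # and that a file's language can be guessed from its extension.
        from itertools import combinations

        LANG_BY_EXT = {".java": "Java", ".py": "Python", ".xml": "XML", ".sql": "SQL"}

        def language_of(path):
            for ext, lang in LANG_BY_EXT.items():
                if path.endswith(ext):
                    return lang
            return None  # files with unknown extensions are ignored

        def cross_language_pairs(commit_files):
            """Yield pairs of files from one commit whose languages differ."""
            tagged = [(f, language_of(f)) for f in commit_files]
            tagged = [(f, l) for f, l in tagged if l is not None]
            for (fa, la), (fb, lb) in combinations(tagged, 2):
                if la != lb:
                    yield fa, fb

        # Illustrative commit: a Java class, a SQL schema change, and a Java test.
        commit = ["src/User.java", "db/schema.sql", "src/UserTest.java"]
        print(list(cross_language_pairs(commit)))
        # [('src/User.java', 'db/schema.sql'), ('db/schema.sql', 'src/UserTest.java')]

    Pairs found this way are only candidates; each one would still need to be inspected and classified against the taxonomy.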

    Model analytics and management

    Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    This research investigates how aggregation is currently conducted in the simulation of large systems. The purpose is to examine how to achieve suitable aggregation in the simulation of large systems; more specifically, how to accurately aggregate hierarchical lower-level (higher-resolution) models into the next higher level in order to reduce the complexity of the overall simulation model. The focus is on exploring different techniques for aggregating hierarchical lower-level (higher-resolution) models into the next higher level. We develop aggregation procedures between two simulation levels (e.g., aggregation of engagement-level models into a mission-level model) to address how much and what information needs to pass from the high-resolution to the low-resolution model in order to preserve statistical fidelity. We present a mathematical representation of the simulation model based on network theory and procedures for simulation aggregation that are logical and executable. This research examines the effectiveness of several statistical techniques, including regression and three types of artificial neural networks, as aggregation techniques for predicting outputs of the lower-level model and evaluating their effects as inputs into the next higher-level model. The proposed process is a collection of conventional statistical and aggregation techniques, including one novel concept and extensions to the regression and neural network methods, which are compared to the truth simulation model, where the truth model uses actual lower-level model outputs as direct inputs into the next higher-level model. The aggregation methodology developed in this research provides an analytic foundation that formally defines the steps essential to appropriately and effectively simulating large hierarchical systems.
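
    The sketch below illustrates the regression variant of the aggregation idea on toy stand-ins: a linear surrogate is fitted to outputs of a placeholder lower-level model and its predictions feed a placeholder higher-level model, to be compared against the "truth" case that passes the raw lower-level outputs through. The model forms, data, and comparison are assumptions made for illustration, not the dissertation's actual simulations.

        # Minimal sketch: regression as an aggregation surrogate between two simulation levels.
        import numpy as np

        rng = np.random.default_rng(0)

        def engagement_model(x):
            """Stand-in lower-level (high-resolution) model: noisy nonlinear response."""
            return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + rng.normal(0, 0.1, len(x))

        def mission_model(lower_level_output):
            """Stand-in higher-level model consuming the (aggregated) lower-level output."""
            return lower_level_output.mean()

        # Fit a least-squares linear surrogate on runs of the lower-level model.
        X = rng.uniform(0, 1, size=(200, 2))
        y = engagement_model(X)
        A = np.column_stack([np.ones(len(X)), X])          # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Truth: raw lower-level outputs feed the higher-level model.
        X_new = rng.uniform(0, 1, size=(50, 2))
        truth = mission_model(engagement_model(X_new))
        # Aggregated: surrogate predictions feed it instead.
        approx = mission_model(np.column_stack([np.ones(len(X_new)), X_new]) @ coef)
        print(f"truth={truth:.3f}  surrogate={approx:.3f}")

    How close the two numbers remain is exactly the statistical-fidelity question the aggregation procedures are designed to answer.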

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
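
    The sketch below shows the general shape of threshold-based detection of the two smells from class-level measurements; the metrics and thresholds used here are illustrative assumptions, not the Eclipse plug-in's heuristics or Marinescu's detection strategies.

        # Minimal sketch of flagging data classes and god classes from class-level metrics.
        from dataclasses import dataclass

        @dataclass
        class ClassMetrics:
            name: str
            num_methods: int        # methods declared by the class
            accessor_ratio: float   # share of methods that are plain getters/setters
            coupling: int           # distinct collaborator classes it reaches into

        def classify(m: ClassMetrics) -> str:
            # Data class: little behaviour beyond exposing its own state.
            if m.num_methods <= 5 and m.accessor_ratio >= 0.8:
                return "data class"
            # God class: large, behaviour-heavy, and coupled to many collaborators.
            if m.num_methods >= 30 and m.coupling >= 10:
                return "god class"
            return "ok"

        for c in (ClassMetrics("Order", 4, 0.9, 1),
                  ClassMetrics("OrderManager", 42, 0.1, 14)):
            print(c.name, "->", classify(c))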

    FLAME: a Formal Framework for the Automated Analysis of Software Product Lines Validated by Automated Specification Testing

    Article published online on 14/12/2015. In a literature review of the last 20 years of automated analysis of feature models, the formalization of analysis operations was identified as the most relevant challenge in the field. This formalization could provide very valuable assets for tool developers, such as a precise definition of the analysis operations and, what is more, a reference implementation, i.e. a trustworthy, not necessarily efficient, implementation against which to compare the outputs of different tools. In this article, we present the FLAME framework as the result of facing this challenge. FLAME is a formal framework that can be used to formally specify not only feature models, but other variability modeling languages (VMLs) as well. This reusability is achieved by its two-layered architecture. The abstract foundation layer is the bottom layer, in which all VML-independent analysis operations and concepts are specified. On top of the foundation layer, a family of characteristic model layers, one for each VML to be formally specified, can be developed by redefining some abstract types and relations. The verification and validation of FLAME has followed a process in which formal verification was performed traditionally, by manual theorem proving, while validation drew on our experience of metamorphic testing of variability analysis tools, which has proven to be much more effective than manually designed test cases. To follow this automated, test-based validation approach, the specification of FLAME, written in Z, was translated into Prolog and 20,000 random tests were automatically generated and executed. Test results helped to discover inconsistencies not only in the formal specification, but also in the previous informal definitions of the analysis operations and in current analysis tools. After this process, the Prolog implementation of FLAME is being used as a reference implementation by some tool developers, some analysis operations have been formally specified for the first time with more generic semantics, and more VMLs are being formally specified using FLAME. Funding: Junta de Andalucía P12-TIC-1867; Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía TIC-5906; Ministerio de Economía y Competitividad IPT-2012-0890-
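
    The toy sketch below conveys the metamorphic-testing idea used for validation: instead of hand-computing expected outputs, random inputs are generated and known relations between the outputs of an original and a modified feature model are checked. The flat feature-model representation and the two relations are simplifying assumptions; FLAME itself is specified in Z and executed in Prolog.

        # Minimal sketch of metamorphic testing for a feature-model analysis operation.
        import random

        def count_products(optional, mandatory):
            """Toy analysis operation: number of products of a flat model with
            independent optional features (mandatory features add no variability)."""
            return 2 ** optional

        def metamorphic_test(trials=1000):
            for _ in range(trials):
                opt, man = random.randint(0, 10), random.randint(0, 10)
                base = count_products(opt, man)
                # Adding an optional feature must double the number of products.
                assert count_products(opt + 1, man) == 2 * base
                # Adding a mandatory feature must leave it unchanged.
                assert count_products(opt, man + 1) == base
            print(f"{trials} random metamorphic tests passed")

        metamorphic_test()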

    Preprocessing of Structured Data

    The purpose of data preprocessing is mainly to correct inconsistencies in the data that will serve as the basis for analysis in data mining processes. In the case of structured data sources the purpose is no different, and various statistical and machine learning techniques can be applied. Data preprocessing aims to ensure that the data to be used in analysis or knowledge discovery tasks remain coherent. This article describes different existing techniques, together with some algorithms associated with key structured-data preprocessing tasks such as cleaning and transformation. It then reviews some algorithms associated with the most frequently used techniques, which will allow, in future work, a comparison of their effectiveness depending on the dataset used.
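
    A minimal sketch of the two task families mentioned (cleaning and transformation) on a toy structured dataset follows; the specific techniques chosen here (duplicate removal, mean imputation, min-max normalization) are illustrative rather than drawn from the article.

        # Minimal sketch: cleaning (de-duplication, imputation) and transformation (scaling).
        from statistics import mean

        rows = [
            {"age": 34, "income": 52000},
            {"age": None, "income": 48000},   # missing value
            {"age": 34, "income": 52000},     # exact duplicate
            {"age": 61, "income": 95000},
        ]

        # Cleaning: drop exact duplicates, keeping first occurrences.
        seen, clean = set(), []
        for r in rows:
            key = tuple(sorted(r.items()))
            if key not in seen:
                seen.add(key)
                clean.append(dict(r))

        # Cleaning: impute missing ages with the column mean.
        age_mean = mean(r["age"] for r in clean if r["age"] is not None)
        for r in clean:
            if r["age"] is None:
                r["age"] = age_mean

        # Transformation: min-max normalization of income to [0, 1].
        lo, hi = min(r["income"] for r in clean), max(r["income"] for r in clean)
        for r in clean:
            r["income_norm"] = (r["income"] - lo) / (hi - lo)

        print(clean)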

    Model Transformation Languages with Modular Information Hiding

    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the entailed maintenance effort, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.

    Review of System Design Frameworks

    In the last decade, the enormous development of the semiconductor industry, with the ever-increasing complexity of digital embedded systems and strong market competition demanding fast time-to-market and low design cost, has imposed serious difficulties on conventional design methods. As a result, a new design flow named model-based system design has emerged, based on high-level abstraction models, heavy design automation, and extensive component reuse to increase productivity and satisfy the market pressure. This thesis presents reviews of ten high-level academic system design frameworks and tools that have been proposed and implemented recently to support the model-based design flow, namely System-on-Chip Environment (SCE), Embedded System Environment (ESE), Metropolis, Daedalus, SystemCoDesigner (SCD), xPilot, GAUT, No-Instruction-Set Computer (NISC), Formal System Design (ForSyDe), and Ptolemy II. These tools are compared to each other in various aspects comprising objective, technique, implementation and capability. Following that, three design flow frameworks, namely ESE, Daedalus, and SystemCoDesigner, are experimented with to assess their real usage, performance and practicality. The frameworks and tools implementing the model-based design flow all show promising results. Modelling tools (ForSyDe and Ptolemy II) can sufficiently capture a wide range of complicated modern systems, while high-level synthesis tools (xPilot, GAUT, and NISC) produce better design qualities in terms of area, power, and cost in comparison to traditional approaches. Case studies of the design flow frameworks (SCE, ESE, Metropolis, Daedalus, and SCD) show that the model-based method significantly reduces development time as well as facilitating the system design process. However, most of these tools and frameworks are incomplete and still at the experimental stage; much work is still needed before the method can be put into practice.