
    Evolving Legacy System's Features into Fine-grained Components Using Regression Test-Cases

    Because many software systems used for business today are considered legacy systems, the need for software evolution techniques has never been greater. We propose a novel evolution methodology for legacy systems that integrates the concepts of features, regression testing, and Component-Based Software Engineering (CBSE). Regression test suites are untapped resources that contain important information about the features of a software system. By exercising each feature with its associated test cases using code profilers and similar tools, code can be located and refactored to create components. The unique combination of Feature Engineering and CBSE makes it possible for a legacy system to be modernized quickly and affordably. We develop a new framework to evolve legacy software that maps features to software components refactored from their feature implementations. In this dissertation, we make the following contributions: First, a new methodology to evolve legacy code is developed that improves the maintainability of evolved legacy systems. Second, the technique establishes a clear understanding of the relationship between features and functionality, and of the relationships among features, using our feature model. Third, the methodology provides guidelines for constructing feature-based reusable components using our fine-grained component model. Fourth, we bridge the complexity gap by identifying feature-based test cases and developing feature-based reusable components. We show how to reuse existing tools to aid the evolution of legacy systems rather than rewriting special-purpose tools for program slicing and requirement management. We have validated our approach on the evolution of a real-world legacy system. By applying this methodology, American Financial Systems, Inc. (AFS) has successfully restructured its enterprise legacy system and reduced the costs of future maintenance.
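
    The feature-to-code mapping sketched above can be approximated with off-the-shelf profiling or coverage tools. The following is a minimal, hypothetical sketch using Python's standard-library trace module; the generate_invoice feature and its regression test are invented stand-ins, not code from the AFS system.

    import trace

    # Stand-in for a legacy feature implementation; in a real setting this code
    # lives in the legacy system and is exercised through its regression tests.
    def generate_invoice(customer_id):
        base = 100 if customer_id > 0 else 0
        return {"customer": customer_id, "total": base}

    def test_generate_invoice():
        # One regression test case associated with the hypothetical invoicing feature.
        assert generate_invoice(42)["total"] == 100

    # Count which (file, line) pairs the feature's test actually executes.
    tracer = trace.Trace(count=1, trace=0)
    tracer.runfunc(test_generate_invoice)

    # The executed lines approximate the feature's implementation footprint,
    # i.e. the code that is a candidate for refactoring into a component.
    footprint = sorted(
        (filename, lineno)
        for (filename, lineno), hits in tracer.results().counts.items()
        if hits > 0
    )
    for filename, lineno in footprint:
        print(f"{filename}:{lineno}")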

    Optimization Modulo Theories with Linear Rational Costs

    In the contexts of automated reasoning (AR) and formal verification (FV), important decision problems are effectively encoded into Satisfiability Modulo Theories (SMT). In the last decade, efficient SMT solvers have been developed for several theories of practical interest (e.g., linear arithmetic, arrays, bit-vectors). Surprisingly, little work has been done to extend SMT to deal with optimization problems; in particular, we are not aware of any previous work on SMT solvers able to produce solutions which minimize cost functions over arithmetical variables. This is unfortunate, since some problems of interest require this functionality. In the work described in this paper we start filling this gap. We present and discuss two general procedures for leveraging SMT to handle the minimization of linear rational cost functions, combining SMT with standard minimization techniques. We have implemented the procedures within the MathSAT SMT solver. Due to the absence of competitors in the AR, FV and SMT domains, we have experimentally evaluated our implementation against state-of-the-art tools for the domain of linear generalized disjunctive programming (LGDP), which is closest in spirit to our domain, on sets of problems which have been previously proposed as benchmarks for the latter tools. The results show that our tool is very competitive with, and often outperforms, these tools on these problems, clearly demonstrating the potential of the approach. (Comment: submitted in January 2014 to ACM Transactions on Computational Logic, currently under revision. arXiv admin note: text overlap with arXiv:1202.140)
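
    This kind of cost-minimizing SMT query can be reproduced with any OMT-capable solver. The sketch below uses the Z3 Python API (the z3-solver package) purely as an illustration; it is not the MathSAT-based implementation evaluated in the paper, and the constraints and cost function are invented.

    from z3 import Optimize, Real, Or, sat

    # Rational-valued decision variables.
    x, y = Real("x"), Real("y")

    opt = Optimize()
    # A toy SMT(LRA) formula: linear constraints plus one disjunction.
    opt.add(x >= 0, y >= 0, x + y >= 4, Or(x <= 1, y <= 1))
    # Linear rational cost function to be minimized.
    cost = opt.minimize(3 * x + 2 * y)

    if opt.check() == sat:
        model = opt.model()
        print("minimum cost:", opt.lower(cost))  # optimal objective value
        print("x =", model[x], ", y =", model[y])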

    Integration of Data Mining into Scientific Data Analysis Processes

    In recent years, the use of advanced semi-interactive data analysis algorithms, such as those from the field of data mining, has gained more and more importance in the life sciences in general, and in bioinformatics, genetics, medicine and biodiversity in particular. Today, there is a trend away from collecting and evaluating data only in the context of a specific problem or study, and towards extensively collecting data from different sources in repositories that are potentially useful for subsequent analysis, e.g. the Gene Expression Omnibus (GEO) repository of high-throughput gene expression data. At the time the data are collected, they are analysed in a specific context which influences the experimental design. However, the type of analyses that the data will be used for after they have been deposited is not known. Content and data format are geared towards the first experiment only, not towards future re-use. Thus, complex process chains are needed for the analysis of the data. Such process chains need to be supported by the environments that are used to set up analysis solutions. Building specialized software for each individual problem is not a solution, as this effort can only be carried out for huge projects running for several years. Hence, data mining functionality has been packaged into toolkits, which provide it in the form of a collection of different components. Depending on the research questions of the users, the solutions consist of distinct compositions of these components. Today, existing solutions for data mining processes comprise different components that represent different steps in the analysis process, and there exist graphical or script-based toolkits for combining such components. The data mining tools that can serve as components in analysis processes are based on single-computer environments, local data sources and single users. However, analysis scenarios in medical informatics and bioinformatics have to deal with multi-computer environments, distributed data sources and multiple users who have to cooperate. Users need support for integrating data mining into analysis processes in the context of such scenarios, and this support is lacking today. Typically, analysts working with single-computer environments face the problem of large data volumes, since tools do not address scalability and access to distributed data sources. Distributed environments such as grid environments provide scalability and access to distributed data sources, but the integration of existing components into such environments is complex. In addition, new components often cannot be developed directly in distributed environments. Moreover, in scenarios involving multiple computers, multiple distributed data sources and multiple users, the reuse of components, scripts and analysis processes becomes more important, as more steps and more configuration are necessary, and thus much greater effort is needed to develop and set up a solution. In this thesis we introduce an approach for supporting interactive and distributed data mining for multiple users, based on infrastructure principles that allow building on data mining components and processes that are already available, instead of designing a completely new infrastructure, so that users can keep working with their familiar tools. In order to achieve the integration of data mining into scientific data analysis processes, this thesis proposes a stepwise approach to supporting the user in the development of analysis solutions that include data mining.
    We see our major contributions as the following: First, we propose an approach for integrating data mining components developed for a single-processor environment into grid environments. In this way, we support users in reusing standard data mining components with little effort. The approach is based on a metadata schema definition which is used to grid-enable existing data mining components (see the sketch below). Second, we describe an approach for interactively developing data mining scripts in grid environments. The approach efficiently supports users when it is necessary to enhance available components, to develop new data mining components, and to compose these components. Third, building on that, an approach for facilitating the reuse of existing data mining processes based on process patterns is presented. It supports users in scenarios that cover different steps of the data mining process and involve several components or scripts. The data mining process patterns support the description of data mining processes at different levels of abstraction, between the CRISP model as the most general and executable workflows as the most concrete representation.
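
    As an illustration of the first contribution, a metadata descriptor for grid-enabling an existing data mining component might look like the following sketch. The field names and the KMeansComponent example are hypothetical, not the schema defined in the thesis.

    from dataclasses import dataclass, field

    @dataclass
    class ComponentMetadata:
        """Hypothetical descriptor used to wrap a single-processor data mining
        component so that a grid middleware can deploy and invoke it."""
        name: str
        entry_point: str                 # e.g. "kmeans.run" inside the packaged code
        input_formats: list = field(default_factory=list)
        output_formats: list = field(default_factory=list)
        parameters: dict = field(default_factory=dict)    # parameter name -> default
        requires: list = field(default_factory=list)      # runtime dependencies

    # Example descriptor for an existing clustering component (invented names).
    kmeans_meta = ComponentMetadata(
        name="KMeansComponent",
        entry_point="kmeans.run",
        input_formats=["csv", "arff"],
        output_formats=["csv"],
        parameters={"k": 3, "max_iterations": 100},
        requires=["numpy"],
    )
    print(kmeans_meta.name, "expects", kmeans_meta.input_formats)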

    ARPA Whitepaper

    We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while offloading the heavy-lifting computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support Private Smart Contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while it is in use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be perceived as computational sharding; this effectively makes the transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution to any blockchain system. Smart contracts will be used as a bridge to link the blockchain and computation networks. Additionally, they will be used as verifiers to ensure that outsourced computation is completed correctly. In order to achieve this, we first develop a general MPC network with advanced features, such as: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
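
    To make the combination of secret sharing and information-theoretic MACs concrete, here is a minimal SPDZ-style sketch over a prime field; the field size, party count and values are illustrative only, and this is not the ARPA protocol itself.

    import random

    P = 2**61 - 1          # toy prime field; real deployments use a larger modulus
    N_PARTIES = 3

    def additive_share(value, n=N_PARTIES):
        """Split value into n additive shares that sum to value mod P."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    # The global MAC key alpha is itself secret-shared among the parties.
    alpha = random.randrange(P)
    alpha_shares = additive_share(alpha)

    # A secret input x is stored as shares of x plus shares of its MAC alpha * x.
    x = 123456789
    x_shares = additive_share(x)
    mac_shares = additive_share(alpha * x % P)

    # Reconstruction: open x and verify the MAC relation. (A real protocol checks
    # the relation without ever opening alpha; it is opened here only to show it.)
    opened_x = sum(x_shares) % P
    opened_alpha = sum(alpha_shares) % P
    opened_mac = sum(mac_shares) % P
    assert opened_mac == opened_alpha * opened_x % P
    print("opened x =", opened_x, "- MAC check passed")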

    Systematic construction of goal-oriented COTS taxonomies

    The process of building software systems by assembling and integrating pre-packaged solutions in the form of Commercial-Off-The-Shelf (COTS) software components has become a strategic need in a wide variety of application areas. In general, COTS components are software components that provide a specific functionality and are available in the market to be purchased, interfaced and integrated into other software systems. The potential benefits of this technology are mainly reduced costs and shorter development time, while maintaining quality. Nevertheless, many challenges, ranging from technical to legal issues, must be faced in order to adapt the traditional software engineering activities and exploit these benefits. Nowadays there is an increasingly huge marketplace of COTS components; therefore, one of the most critical activities in COTS-based development is the selection of the components to be integrated into the system under development. Selection is basically composed of two main processes, namely the search for candidates in the marketplace and their evaluation with respect to the system requirements. Unfortunately, most of the existing methods for COTS selection focus their efforts on evaluation, leaving aside the problem of searching for components in the marketplace. Searching for candidate COTS components is not an easy task: it has to cope with challenging marketplace characteristics related to its widespread, evolving and growing nature, and with the lack of available, well-suited information needed for a quality-assured search. Moreover, traditional reuse approaches also lack appropriate solutions for reusing COTS components and the knowledge gained in each selection process. This lack of proposals is a serious drawback that makes the whole selection process highly risky, and often expensive and inefficient.
    This dissertation introduces the GOThIC (Goal-Oriented Taxonomy and reuse Infrastructure Construction) method, aimed at building a domain reuse infrastructure to facilitate the search for and reuse of COTS components. It is based on goal-oriented approaches for building abstract, well-founded and stable taxonomies capable of dealing with the characteristics of the COTS marketplace. Thus, the nodes of these taxonomies are characterized by means of goals, their relationships are declared as dependencies among them, and several artifacts are constructed and managed for reusability and evolution purposes. The GOThIC method has been elaborated following an iterative process based on action research premises to identify the actual challenges related to searching for COTS components. Possible solutions were then envisaged and implemented in several industrial and academic case studies in different domains. Successful results were recorded and articulated into the GOThIC method, followed by its preliminary industrial evaluation in some Norwegian companies.
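
    The goal-characterized taxonomy nodes and their dependency links described above could be represented along the lines of the following sketch; the TaxonomyNode class and the communication-infrastructure example are hypothetical illustrations, not artifacts of the GOThIC method itself.

    from dataclasses import dataclass, field

    @dataclass
    class TaxonomyNode:
        """Hypothetical node of a goal-oriented COTS taxonomy."""
        goal: str                                        # goal characterizing the node
        children: list = field(default_factory=list)     # sub-goals / refinements
        depends_on: list = field(default_factory=list)   # dependencies on other nodes

    # Invented example: a small fragment of a communication-infrastructure taxonomy.
    messaging = TaxonomyNode(goal="Support asynchronous message exchange")
    directory = TaxonomyNode(goal="Locate communication endpoints")
    messaging.depends_on.append(directory)               # declared goal dependency

    comm_infra = TaxonomyNode(
        goal="Provide communication infrastructure",
        children=[messaging, directory],
    )
    print(comm_infra.goal, "->", [child.goal for child in comm_infra.children])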

    Collected software engineering papers, volume 9

    This document is a collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) from November 1990 through October 1991. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. This is the ninth such volume of technical papers produced by the SEL. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. For the convenience of this presentation, the eight papers contained here are grouped into three major categories: (1) software models studies; (2) software measurement studies; and (3) Ada technology studies. The first category presents studies on reuse models, including a software reuse model applied to maintenance and a model for an organization to support software reuse. The second category includes experimental research methods and software measurement techniques. The third category presents object-oriented approaches using Ada and object-oriented features proposed for Ada. The SEL is actively working to understand and improve the software development process at GSFC.

    Evaluating advanced search interfaces using established information-seeking models

    When users have poorly defined or complex goals, search interfaces offering only keyword-searching facilities provide inadequate support to help them reach their information-seeking objectives. The emergence of interfaces with more advanced capabilities, such as faceted browsing and result clustering, can go some way toward addressing such problems. The evaluation of these interfaces, however, is challenging, since they generally offer diverse and versatile search environments that introduce overwhelming numbers of independent variables into user studies; choosing the interface as the only independent variable in a study would reveal very little about why one design outperforms another. Nonetheless, if we could effectively compare these interfaces, we would have a way to determine which is best for a given scenario and begin to learn why. In this article we present a formative framework for the evaluation of advanced search interfaces through the quantification of the strengths and weaknesses of the interfaces in supporting user tactics and varying user conditions. The framework combines established models of users, user needs, and user behaviours to achieve this. It is applied to evaluate three search interfaces and demonstrates the potential value of this approach to interactive IR evaluation.
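
    The quantification step can be pictured as scoring each interface against a set of information-seeking tactics. The tactics, interface styles and scores below are invented purely to illustrate the kind of matrix such a framework produces; they are not data from the study.

    # Hypothetical support scores (0 = no support, 2 = strong support) for three
    # interface styles against a handful of information-seeking tactics.
    tactics = ["broaden query", "narrow query", "compare results", "monitor progress"]
    scores = {
        "keyword-only": [1, 1, 0, 0],
        "faceted":      [2, 2, 1, 1],
        "clustered":    [2, 1, 2, 0],
    }

    for interface, row in scores.items():
        total = sum(row)
        weakest = tactics[row.index(min(row))]
        print(f"{interface}: total support {total}, weakest tactic: {weakest}")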