
    On opportunistic software reuse

    The availability of open source assets for almost all imaginable domains has led the software industry to opportunistic design, an approach in which people develop new software systems in an ad hoc fashion by reusing and combining components that were not designed to be used together. In this paper we investigate this emerging approach. We demonstrate the approach with an industrial example in which Node.js modules and various subsystems are used in an opportunistic way. Furthermore, to study opportunistic reuse as a phenomenon, we present the results of three contextual interviews and a survey with reuse practitioners to understand to what extent opportunistic reuse offers improvements over traditional systematic reuse approaches. Peer reviewed
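The abstract's example uses Node.js modules; as a language-agnostic illustration, the following hypothetical Python sketch shows the essence of opportunistic reuse: gluing together off-the-shelf components (here stdlib modules standing in for third-party packages) that were never designed to interoperate. All function and field names are invented for illustration.

```python
# Opportunistic reuse sketch: csv, statistics, and json were not designed
# to be composed, but ad hoc glue code turns them into a reporting pipeline.
import csv
import io
import json
import statistics

def quick_report(csv_text: str, field: str) -> str:
    """Ad hoc glue: csv parses, statistics aggregates, json serializes."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[field]) for r in rows]
    return json.dumps({
        "count": len(values),
        "mean": statistics.mean(values),
        "max": max(values),
    })

data = "name,latency\napi,12\ndb,30\ncache,3\n"
print(quick_report(data, "latency"))
```

The trade-off the paper studies is visible even here: the pipeline was quick to assemble, but none of the reused parts guarantees compatibility with the others if requirements change.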

    Software synthesis using generic architectures

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. We illustrate our approach using an implemented example of a generic tracking architecture that was customized in two different domains. We also describe how the designs produced using KASE compare to the original designs of the two systems, as well as current work and plans for extending KASE to other application areas.

    Transaction Cost Management

    All organizations, institutions, business processes, markets and strategies have one aim in common: the reduction of transaction costs. This aim is pursued relentlessly in practice, and has been perceived to bring about drastic changes, especially in the recent global market and the cyber economy. This open access book analyzes and describes "transactions" as a model, on the basis of which organizations, institutions and business processes can be appropriately shaped. It tracks transaction costs to enable a scientific approach instead of a widely used "state-of-the-art" approach, working to bridge the gap between theory and practice.

    Reinforcement learning for efficient network penetration testing

    Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly becoming non-standard, composite and resource-consuming despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which makes use of machine learning techniques, namely reinforcement learning (RL), to learn and reproduce average and complex pentesting activities. The proposed system is named the Intelligent Automated Penetration Testing System (IAPTS); it consists of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS takes the approach of modeling PT environments and tasks as a partially observed Markov decision process (POMDP) problem, which is solved with a POMDP solver. Although the scope of this paper is limited to network infrastructure PT planning and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, covered attack vectors, accuracy and reliability of the outputs. In addition, this work tackles the complex problem of expertise capturing and reuse by allowing the IAPTS learning module to store and reuse PT policies in the same way that a human PT expert would learn, but in a more efficient way.
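The paper models pentesting as a POMDP solved with a dedicated solver; as a much-simplified, hypothetical sketch of the underlying RL idea, the toy below applies tabular Q-learning to a fully observed attack graph whose states, actions and rewards are entirely invented. It is not IAPTS, only an illustration of how an agent can learn an attack sequence from experience.

```python
# Toy sketch: Q-learning on an invented attack graph.
# state -> {action: (next_state, reward)}; "root" is terminal.
import random

GRAPH = {
    "foothold":   {"scan": ("enumerated", -1), "noop": ("foothold", -1)},
    "enumerated": {"exploit": ("user", 5), "noop": ("enumerated", -1)},
    "user":       {"escalate": ("root", 50), "noop": ("user", -1)},
    "root":       {},  # terminal: target compromised
}

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in GRAPH.items() for a in acts}
    for _ in range(episodes):
        s = "foothold"
        while GRAPH[s]:  # run until terminal state
            acts = list(GRAPH[s])
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: q[(s, x)])
            s2, r = GRAPH[s][a]
            future = max((q[(s2, a2)] for a2 in GRAPH[s2]), default=0.0)
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q

q = q_learn()
greedy = {s: max(acts, key=lambda a: q[(s, a)]) for s, acts in GRAPH.items() if acts}
print(greedy)
```

After training, the greedy policy picks the productive action at every state, which is the learned analogue of a stored, reusable PT policy that the abstract describes.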

    A multi-level code comprehension model for large-scale software

    Fall 1996. Includes bibliographical references (pages 142-147). For the past 20 years researchers have studied how programmers understand code they did not write. Most of this research has concentrated on small-scale code understanding. We consider it necessary to design studies that observe programmers working on large-scale code in production environments. We describe the design and implementation of such a study which included 11 maintenance engineers working on various maintenance tasks. The objective is to build a theory based on observations of programmers working on real tasks. Results show that programmers understand code at different levels of abstraction. Expertise in the application domain, amount of prior experience with the code, and task can determine the types of actions taken during maintenance, the level of abstraction at which the programmer works, and the information needed to complete a maintenance task. A better grasp of how programmers understand large-scale code and what is most efficient and effective can lead to better tools, better maintenance guidelines, and documentation.

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domain include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interface.

    Cost-efficient digital twins for design space exploration: A modular platform approach

    The industrial need to predict the behaviour of radically new products brings renewed interest in how to set up and make use of physical prototypes and testing. However, conducting physical testing of a large number of radical concepts is still a costly approach. This paper proposes an approach to actively use digital twins in the early phases, where the design can still be largely changed. The approach is based on creating a set of digital twin modules that can be reused and recomposed to create digital twin variants. However, this paper considers that developing a digital twin can be very costly. Therefore, the approach focuses on supporting decisions about the optimal mix of modules, and about whether a new digital twin module should be developed. The approach is applied to an industrial case derived from the collaboration with two space manufacturers. The results highlight how the design of the modular platform has an impact on the cost of the digital twin when commonality and reusability aspects are considered. These results point to the cost-efficiency of applying a modular approach to digital twin creation, as a means to reuse the results from physical testing to validate new designs and their ranges of validity.
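The cost argument in the abstract can be made concrete with a small hypothetical sketch, assuming invented module names and costs: if each digital twin variant is built bespoke, every module is re-developed per variant, whereas on a modular platform each module is developed once and recomposed.

```python
# Illustrative sketch (module names and costs are invented):
# cost of bespoke twins vs. a modular platform with reused modules.

# each twin variant requires this set of modules
VARIANTS = {
    "thruster_A": {"structure", "thermal", "control"},
    "thruster_B": {"structure", "thermal", "plume"},
    "valve":      {"structure", "control"},
}
MODULE_COST = {"structure": 10, "thermal": 8, "control": 5, "plume": 12}

def bespoke_cost(variants):
    # every variant re-develops all of its modules from scratch
    return sum(MODULE_COST[m] for mods in variants.values() for m in mods)

def modular_cost(variants):
    # each module is developed once, then recomposed into variants
    return sum(MODULE_COST[m] for m in set().union(*variants.values()))

print(bespoke_cost(VARIANTS), modular_cost(VARIANTS))
```

The gap between the two totals grows with the commonality across variants, which mirrors the paper's finding that platform design and reusability drive the cost of the digital twin.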

    Collaborative problem solving within supply chains: general framework, process and methodology

    The problem solving process is a central element of firms' continuous improvement strategies. In this framework, a number of approaches have succeeded in demonstrating their effectiveness in tackling industrial problems. The list includes, but is not limited to, PDCA, DMAICS, 7Steps and 8D/9S. However, the emergence of and increasing emphasis on supply chains have impacted the effectiveness of those methods in solving problems that go beyond the boundaries of a single firm and, in consequence, their ability to provide solutions when the contexts in which firms operate are distributed. This can be explained because not only the problems, but also the products, partners, skills, resources and pieces of evidence required to solve those problems are distributed, fragmented and decentralized across the network. This PhD thesis deals with the solving of industrial problems in supply chains based on collaboration. It develops a general framework for studying this paradigm, as well as both a generic process and a collaborative methodology able to deal with the process in practice. The proposal considers all the technical aspects (e.g. product modeling and network structure) and the collaborative aspects (e.g. trust decisions and/or power gaps between partners) that simultaneously impact supply chain operation and the joint solving of problems. Finally, this research work positions experiential knowledge as a central lever of the problem solving process, contributing to continuous improvement strategies at a more global level.