905 research outputs found

    Automated Predictive Diagnosis (APD): A 3-tiered shell for building expert systems for automated predictions and decision making

    The APD software features include: on-line help; a three-level architecture (logic environment, setup/application environment, data environment); an explanation capability; and file handling. The kinds of experimentation and record keeping that lead to effective expert systems are facilitated by: (1) a library of inferencing modules (in the logic environment); (2) an explanation capability that reveals logic strategies to users; (3) automated file-naming conventions; (4) an information retrieval system; and (5) on-line help. These aid effective use of knowledge, debugging, and experimentation. Since the APD software anticipates the logical rules becoming complicated, it is embedded in a production system language (CLIPS) to ensure the full power of the CLIPS production system paradigm and the availability of the procedural language C. The development of the APD software is discussed, along with three example applications: a toy, an experimental, and an operational prototype for submarine maintenance predictions.
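    To make the three-level split concrete, here is a minimal Python sketch of a forward-chaining production engine in the spirit of APD (the real shell is built on CLIPS, not Python); the rule, fact, and trace names are invented for illustration:

```python
# Illustrative sketch only: APD itself is embedded in CLIPS. This toy
# engine mirrors the three-level split described above: a logic
# environment (rules), a data environment (facts), and a setup layer
# that wires them together. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[set], bool]   # tests the working memory
    action: Callable[[set], None]      # asserts new facts
    explanation: str = ""              # supports an APD-style "why" trace

@dataclass
class Engine:
    rules: list = field(default_factory=list)   # logic environment
    facts: set = field(default_factory=set)     # data environment
    trace: list = field(default_factory=list)   # explanation capability

    def run(self):
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                before = len(self.facts)
                if rule.condition(self.facts):
                    rule.action(self.facts)
                    if len(self.facts) > before:   # rule produced something new
                        self.trace.append(rule.explanation or rule.name)
                        changed = True

# Setup/application environment: a toy maintenance prediction.
engine = Engine()
engine.facts.add(("vibration", "pump-3", "high"))
engine.rules.append(Rule(
    name="predict-bearing-wear",
    condition=lambda f: ("vibration", "pump-3", "high") in f,
    action=lambda f: f.add(("predict", "pump-3", "bearing-wear")),
    explanation="High vibration on pump-3 suggests bearing wear.",
))
engine.run()
print(engine.facts, engine.trace)
```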

    Metaheuristic Design Patterns: New Perspectives for Larger-Scale Search Architectures

    Design patterns capture the essentials of recurring best practice in an abstract form. Their merits are well established in domains as diverse as architecture and software development. They offer significant benefits, not least a common conceptual vocabulary for designers, enabling greater communication of high-level concerns and increased software reuse. Inspired by the success of software design patterns, this chapter seeks to promote the merits of a pattern-based approach to the development of metaheuristic search software components. To achieve this, a catalog of patterns is presented, organized into the families of structural, behavioral, methodological, and component-based patterns. As an alternative to the increasing specialization associated with individual metaheuristic search components, the authors encourage computer scientists to embrace the 'cross-cutting' benefits of a pattern-based perspective on optimization algorithms. Some ways in which the patterns might form the basis of further larger-scale metaheuristic component design automation are also discussed.
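    As a concrete reading of one behavioural pattern, the Python sketch below (all names are ours, not from the chapter's catalog) separates the acceptance criterion of a local search into a pluggable component, so greedy and simulated-annealing-style acceptance become interchangeable:

```python
# Hypothetical sketch of a behavioural pattern applied to metaheuristic
# components: the acceptance criterion is abstracted out of the search
# loop, so different strategies can be swapped in without touching it.

import math
import random
from typing import Callable, List

Acceptance = Callable[[float, float, int], bool]  # (old, new, step) -> accept?

def greedy(old: float, new: float, step: int) -> bool:
    return new < old                               # strict improvement only

def annealing(t0: float = 1.0, cooling: float = 0.99) -> Acceptance:
    def accept(old: float, new: float, step: int) -> bool:
        t = t0 * cooling ** step                   # geometric cooling schedule
        return new < old or random.random() < math.exp((old - new) / max(t, 1e-9))
    return accept

def local_search(cost: Callable[[List[int]], float],
                 neighbour: Callable[[List[int]], List[int]],
                 start: List[int], accept: Acceptance, steps: int = 1000):
    current, best = start, start
    for step in range(steps):
        candidate = neighbour(current)
        if accept(cost(current), cost(candidate), step):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

# Toy usage: minimise the sum of a bit vector by flipping one random bit.
def flip_one(s: List[int]) -> List[int]:
    i = random.randrange(len(s))
    return [bit ^ (j == i) for j, bit in enumerate(s)]

print(local_search(sum, flip_one, [1] * 10, greedy))
print(local_search(sum, flip_one, [1] * 10, annealing()))
```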

    Investigating the role of knowledge management in driving the development of an effective business process architecture

    Business Process Architecture (BPA) modelling methods are not dynamic and flexible enough to respond effectively to change. This may create a barrier that contributes to a lack of knowledge and learning capabilities, which can affect the BPA's support for a sustainable competitive advantage in an organisation. New business challenges are driving enterprises to adopt Knowledge Management (KM) as one means of making a positive difference to their performance and competitiveness. However, shortcomings remain in utilising knowledge management in business processes: efforts have mostly been directed towards integrating knowledge management with business process management, not with BPAs. The idea of applying KM as a memory to be retrieved and updated as needed is no longer sufficient. The resource-based view suggests a number of key factors to be investigated and taken into consideration during the development of knowledge management systems. These key factors are known as Knowledge Management Enablers (KMEs). KMEs are crucial for representing KM and understanding how knowledge is created, shared and disseminated. They are also essential for identifying available assets and resources, and for clarifying how organisational capabilities are created and utilised.

    This research investigates the role of KMEs in the development of an effective process architecture, one that is dynamic and supports a sustainable competitive advantage in an organisation. Identifying the KMEs, selecting an appropriate BPA method, aligning these KMEs with this method, and critically evaluating this alignment are the main objectives set for this research. To accomplish the research aim and objectives, a resource-based and semantically enriched framework, the KMEOntoBPA, has been designed using KMEs to drive the process of BPA development. Organisational structure, culture, information technology, leadership, knowledge context and business repository have been selected as representative KMEs. Object-based BPA modelling, specifically the semantically enriched Riva BPA (srBPA) method, has been adopted in order to embrace the knowledge resources generated by KMEs and utilise them in the derivation and re-configuration of its constituent elements. These knowledge resources are employed as business objects and treated as Candidate Essential Business Entities (CEBEs) in the Riva method, each characterising or representing a form of business of an organisation. The Design Science Research Methodology (DSRM) guides the research phases, with an emphasis on the design and development, demonstration and evaluation of the research framework. The KMEOntoBPA has been demonstrated using representative core banking case studies of the Treasury, Deposits and Financing, applied to successive DSRM iterations beginning with the Treasury.

    The results reveal that utilising KMEs provides an agile generation of representative CEBEs and their corresponding Riva BPA elements, which reflect the real business in each of the core banking case studies. This research also demonstrates that the semantic Riva BPA method is an appropriate object-based method, well aligned with KMEs in exploiting knowledge resources for the development of a dynamic BPA with robustness and learning capabilities. In addition, the research framework, i.e. the KMEOntoBPA, captures the flow of knowledge in the bank and offers several possible advantages, such as more accurate service delivery and improved financial control. It also supports the sources of sustainable competitive advantage (SCA): technical capabilities, core competences and social capital. Finally, a number of significant contributions and artefacts have been attained, for example the aKMEOnt, the abstract ontology that utilises the six KMEs of this research to investigate their effectiveness in driving the development of the BPA. These contributions, along with the research results, provide a guide to future research directions, such as using the aKMEOnt in the development of other business process models and in deriving the Enterprise Information Architecture (EIA) and Service Oriented Architecture (SOA).
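    The alignment described above can be pictured with a small, purely illustrative Python sketch in which knowledge resources surfaced by KMEs are promoted to CEBEs; the entity names and the promotion rule are hypothetical, not taken from the case studies:

```python
# Hypothetical sketch of the KME -> knowledge resource -> CEBE chain the
# thesis describes: resources surfaced by an enabler are treated as
# business objects, and those representing a form of business become
# Candidate Essential Business Entities for the Riva method.

from dataclasses import dataclass
from typing import List

@dataclass
class KnowledgeResource:
    name: str
    enabler: str        # which KME surfaced it, e.g. "culture", "IT"
    essential: bool     # does it represent a form of business?

@dataclass
class CEBE:
    name: str
    source_enabler: str

def derive_cebes(resources: List[KnowledgeResource]) -> List[CEBE]:
    """Promote essential knowledge resources to Riva CEBEs."""
    return [CEBE(r.name, r.enabler) for r in resources if r.essential]

# Toy Treasury-style input (illustrative, not from the case study data).
resources = [
    KnowledgeResource("money market deal", "business repository", True),
    KnowledgeResource("dealer coffee rota", "culture", False),
]
for cebe in derive_cebes(resources):
    print(f"CEBE: {cebe.name} (via {cebe.source_enabler})")
```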

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. General issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    Software Process Modeling with Eclipse Process Framework

    The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s, together with new development tools and technologies, demands growing attention from everybody working within the industry. Organizations, however, have had a mixture of processes and process languages, since a standard software development process language has not been available. A promising process meta-model, the Software & Systems Process Engineering Meta-Model (SPEM) 2.0, has recently been released. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content, and which aims to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the best-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced, with a comparison provided between them. The main focus is on the Eclipse Process Framework and SPEM 2.0: their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. Software process modeling also acts as a process improvement activity.
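    The core idea of SPEM 2.0 that the thesis relies on, reusable method content kept separate from the processes that order it, can be sketched in a few lines of Python; the classes below are a drastic simplification of the meta-model, not its real API:

```python
# Illustrative simplification of SPEM 2.0's central separation: method
# content (roles, tasks, work products) is reusable and process-neutral,
# while a process is just one particular ordering of that content, so the
# same task can be published in several lifecycles.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkProduct:
    name: str

@dataclass
class Role:
    name: str

@dataclass
class Task:                        # method content: reusable across processes
    name: str
    performer: Role
    outputs: List[WorkProduct] = field(default_factory=list)

@dataclass
class Process:                     # process: a particular ordering of tasks
    name: str
    tasks: List[Task] = field(default_factory=list)

analyst = Role("Analyst")
spec = Task("Write specification", analyst, [WorkProduct("Spec document")])
iteration = Process("OpenMethod iteration", [spec])
for task in iteration.tasks:
    print(task.performer.name, "->", task.name)
```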

    Trusted product lines

    This thesis describes research into the application of software product line approaches to the development of high-integrity, embedded real-time software systems that are subject to regulatory approval/certification. The motivation for the research arose from a real business need to reduce the cost and lead time of aerospace software development projects. The thesis hypothesis can be summarised as follows: it is feasible to construct product line models that allow the specification of required behaviour within a reference architecture that can be transformed into an effective product implementation, whilst enabling suitable supporting evidence for certification to be produced. The research concentrates on four main areas: (1) construction of an argument framework in which the application of product line techniques to high-integrity software development can be assessed and critically reviewed; (2) definition of a product-line reference architecture that can host components containing variation; (3) design of model transformations that can automatically instantiate products from a set of components hosted within the reference architecture; and (4) identification of verification approaches that may provide evidence that the transformations designed in area (3) preserve properties of interest from the product line model into the product instantiations. Together, these areas form the basis of an approach we term “Trusted Product Lines”. The approach has been evaluated and validated by deployment on a real aerospace project, where it was used to produce DO-178B/ED-12B Level A applications of over 300 KSLOC in size. The effect of the approach on the software development process is critically evaluated in this thesis, both quantitatively (in terms of cost and the relative size of process phases) and qualitatively (in terms of software quality). The “Trusted Product Lines” approach, as described in the thesis, shows how product line approaches can be applied to high-integrity software development, and how certification evidence can be created and arguments constructed for products instantiated from the product line. To the best of our knowledge, the development and effective application of product line techniques in a certification environment is novel and unique.
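    As a rough illustration of area (3), the Python sketch below instantiates a product from components hosted in a reference architecture by resolving variation points against a feature selection, failing loudly when a point is left unbound; the component and feature names are invented, and real tooling would also emit certification evidence:

```python
# Toy model-transformation sketch: components in the reference
# architecture declare variation points with their allowed bindings,
# and a product is instantiated by resolving every point against a
# selection. Names are hypothetical, not from the aerospace project.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Component:
    name: str
    variants: Dict[str, List[str]]   # variation point -> allowed bindings

def instantiate(components: List[Component],
                selection: Dict[str, str]) -> List[str]:
    """Resolve every variation point; reject incomplete or invalid selections."""
    product = []
    for c in components:
        for point, allowed in c.variants.items():
            choice = selection.get(point)
            if choice is None:
                raise ValueError(f"unbound variation point: {point}")
            if choice not in allowed:
                raise ValueError(f"{choice!r} is not a valid binding for {point}")
            product.append(f"{c.name}[{point}={choice}]")
    return product

reference_architecture = [
    Component("FuelMonitor", {"units": ["metric", "imperial"]}),
    Component("DisplayDriver", {"panel": ["primary", "backup"]}),
]
print(instantiate(reference_architecture,
                  {"units": "metric", "panel": "primary"}))
```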

    Rigorous object-oriented analysis

    Object-oriented methods for analysis, design and programming are commonly used by software engineers. Formal description techniques, however, are mainly used in a research environment. We have investigated how rigour can be introduced into the analysis phase of the software development process by combining object-oriented analysis (OOA) methods with formal description techniques. The main topics of this investigation are a formal interpretation of the OOA constructs using LOTOS, a mathematical definition of the basic OOA concepts using a simple denotational semantics, and a new method for object-oriented analysis that we call the Rigorous Object-Oriented Analysis (ROOA) method. The LOTOS interpretation of the OOA concepts is an intrinsic part of the ROOA method. It was designed in such a way that software engineers with no experience in LOTOS can still use ROOA. The denotational semantics of the concepts of object-oriented analysis illuminates the formal syntactic transformations within ROOA and guarantees that the basic object-oriented concepts can be understood independently of the specification language used. The ROOA method starts from a set of informal requirements and an object model and produces a formal object-oriented analysis model that acts as a requirements specification. The resulting formal model integrates the static, dynamic and functional properties of a system, in contrast to existing OOA methods, which are informal and produce three separate models that are difficult to integrate and keep consistent. ROOA provides a systematic development process by proposing a set of rules to be followed during the analysis phase. During the application of these rules, auxiliary structures are created to help trace the requirements through to the final formal model. As LOTOS produces executable specifications, prototyping can be used to check the conformance of the specification against the original requirements and to detect inconsistencies, omissions and ambiguities early in the development process.
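    ROOA's formal models are written in LOTOS, which is not shown here; as a loose Python analogue of why executable specifications help, the sketch below encodes a toy object's dynamic behaviour as a labelled transition system and replays requirement traces against it, standing in for the prototyping step (the states and events are invented):

```python
# Loose analogue only: a LOTOS specification is far richer, but the same
# payoff applies, since an executable dynamic model lets us replay
# scenarios from the informal requirements and catch illegal behaviour
# early. States and events below are hypothetical.

from typing import Dict, List, Tuple

# (state, event) -> next state: the dynamic model of a toy Account object
transitions: Dict[Tuple[str, str], str] = {
    ("closed", "open"): "empty",
    ("empty", "deposit"): "active",
    ("active", "deposit"): "active",
    ("active", "withdraw"): "active",
    ("active", "close"): "closed",
}

def conforms(trace: List[str], start: str = "closed") -> bool:
    """Replay a scenario against the specification; reject illegal events."""
    state = start
    for event in trace:
        key = (state, event)
        if key not in transitions:
            return False
        state = transitions[key]
    return True

print(conforms(["open", "deposit", "withdraw", "close"]))  # True
print(conforms(["open", "withdraw"]))   # False: cannot withdraw when empty
```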

    On Experimentation in Software-Intensive Systems

    Context: Delivering software that has value to customers is a primary concern of every software company. Controlled experiments, prevalent in web-facing companies, are used to validate and deliver value in incremental deployments. At the same time as web-facing companies are aiming to automate and reduce the cost of each experiment iteration, embedded systems companies are starting to adopt experimentation practices and to leverage the automation advances made in the online domain.

    Objective: This thesis has two main objectives. The first is to analyze how software companies can run and optimize their systems through automated experiments; this is investigated from the perspectives of software architecture, the algorithms for experiment execution, and the experimentation process. The second is to analyze how non-web-facing companies can adopt experimentation as part of their development process to validate and continuously deliver value to their customers; this is investigated from the perspective of the software development process and focuses on the experimentation aspects that differ from those of web-facing companies.

    Method: To achieve these objectives, we conducted research in close collaboration with industry and used a combination of empirical research methods: case studies, literature reviews, simulations, and empirical evaluations.

    Results: This thesis provides six main results. First, it proposes an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, it proposes a new experimentation process that captures the details of a trustworthy experimentation process and can serve as the basis for an automated one. Third, it identifies the restrictions and pitfalls of different multi-armed bandit algorithms for automating experiments in industry, and proposes a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fourth, it proposes statistical models for analyzing the optimization algorithms used in automated experimentation. Fifth, it identifies the key challenges faced by embedded systems companies when adopting controlled experimentation and proposes a set of strategies to address them. Sixth, it identifies experimentation techniques and proposes a new continuous experimentation model for mission-critical and business-to-business domains.

    Conclusion: The results presented in this thesis indicate that trustworthiness of the experimentation process and the selection of algorithms still need to be addressed before automated experimentation can be used at scale in industry. The embedded systems industry faces challenges in adopting experimentation as part of its development process, in part because of the low number of users and devices available for experiments and the diversity of experimental designs required for each new situation. This limitation increases both the complexity of the experimentation process and the number of techniques needed to address this constraint.
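    As an illustration of the algorithm family the thesis analyses (not its own implementation), the following Python sketch runs Thompson sampling over Bernoulli-reward arms, the kind of multi-armed bandit an automated experimentation platform can use to shift traffic toward better variants:

```python
# Thompson sampling sketch for a three-variant experiment with binary
# rewards (e.g. click / no click). Each arm keeps Beta(1,1)-prior
# success counts; we sample a plausible rate per arm and play the
# highest. The true rates below are invented for the demonstration.

import random

def thompson(successes, failures):
    """Pick the arm whose sampled success rate is highest."""
    draws = [random.betavariate(s + 1, f + 1)       # Beta posterior sample
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

true_rates = [0.04, 0.05, 0.07]                     # hidden variant quality
wins = [0] * 3
losses = [0] * 3
for _ in range(10_000):
    arm = thompson(wins, losses)
    if random.random() < true_rates[arm]:           # simulated user feedback
        wins[arm] += 1
    else:
        losses[arm] += 1
print(wins, losses)   # traffic should concentrate on the best variant
```

    One pitfall category the thesis points to is visible even in this toy: with few users per arm, the posterior samples overlap for a long time, so low-traffic settings such as embedded systems need more careful designs than this naive loop.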