    A pattern language for evolution reuse in component-based software architectures

    Context: Modern software systems are prone to continuous evolution under frequently varying requirements and changes in their operational environments. Architecture-Centric Software Evolution (ACSE) enables changes in a system's structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. Lehman's law of continuing change demands long-living and continuously evolving architectures to prolong the productive life and economic value of software, and industrial research shows that evolution reuse can save approximately 40% of the change-implementation effort in the ACSE process. However, a systematic review of existing research suggests a lack of solutions that support a continuous integration of reuse knowledge into the ACSE process to promote evolution-off-the-shelf in software architectures.
    Objectives: We aim to unify the concepts of software repository mining and software evolution to discover evolution-reuse knowledge that can be shared and reused to guide ACSE.
    Method: We exploit repository mining techniques (architecture change mining) that investigate architecture change logs to discover change operationalisation and patterns, and we apply software evolution concepts (architecture change execution) to support pattern-driven reuse in ACSE. Architecture change patterns support the composition and application of a pattern language that exploits patterns and their relations to express evolution-reuse knowledge. Pattern language composition is enabled by a continuous discovery of patterns from architecture change logs and a formalisation of the relations among discovered patterns; pattern language application is supported by an incremental selection and application of patterns to achieve reuse in ACSE. The novelty of the research lies in the PatEvol framework, which supports a round-trip approach of continuous acquisition (mining) and application (execution) of reuse knowledge to enable ACSE. Prototype support enables customisation and (semi-)automation of the evolution process.
    Results: We evaluated the results against the ISO/IEC 9126-1 quality model and through a case-study-based validation of the architecture change mining and change execution processes. We observe consistency and reusability of change support with pattern-driven architecture evolution. Change patterns make the architecture evolution process more efficient but lack fine-granular change implementation. A critical challenge lies in selecting appropriate patterns to form a pattern language during evolution.
    Conclusions: The pattern language itself continuously evolves with the incremental discovery of new patterns from change logs over time. A systematic identification and resolution of change anti-patterns defines the scope for future research.
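
    As a rough illustration of the architecture change mining step, the sketch below scans a hypothetical change log for recurring operation sequences, the simplest form of pattern-candidate discovery. The log format, operation names, and the fixed window length are all illustrative assumptions, not PatEvol's actual algorithm.

```python
from collections import Counter
from itertools import islice

# Hypothetical architecture change log: a sequence of change
# operations, each an (operation, element) pair as it might be
# recovered from a version-controlled architecture model.
change_log = [
    ("add", "component"), ("add", "connector"), ("modify", "component"),
    ("add", "component"), ("add", "connector"), ("modify", "component"),
    ("remove", "connector"), ("remove", "component"),
]

def windows(seq, n):
    """Yield every consecutive n-length window of seq."""
    return zip(*(islice(seq, i, None) for i in range(n)))

# Operation sequences that recur in the log are pattern candidates.
counts = Counter(windows(change_log, 3))
for candidate, count in counts.most_common():
    if count > 1:  # recurring sequences only
        print(count, "x", candidate)
```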

    An approach to enacting business process models in support of the life cycle of integrated manufacturing systems

    The complexity of enterprise engineering processes requires the application of reference architectures as a means of guiding the achievement of an adequate level of business integration. This research aims to address important aspects of this requirement by associating the formalism of reference architectures with the various life-cycle phases of integrated manufacturing systems (IMS) and enabling their use in addressing contemporary system engineering issues. In pursuit of this aim, the following research activities were carried out: (1) devising a framework which supports key phases of the IMS life cycle, and (2) populating part of this framework with an initial combination of architectures which can be encapsulated into a computer-aided systems engineering environment. This led to the creation of a workbench capable of providing support for modelling, analysis, simulation, rapid prototyping, configuration and run-time operation of an IMS, based on a consistent set of models associated with the engineering processes involved. The research effort concentrated on selecting and investigating the use of appropriate formalisms which underpin a selection of architectures and tools (i.e. CIM-OSA, Petri nets, object-oriented methods and CIM-BIOSYS), and on designing, implementing, applying and testing the workbench. The main contribution of this research is to demonstrate that it is possible to retain an adequate level of formalism, via computational structures and models, which extends through the IMS life cycle from a conceptual description of the system through to the actions that the system performs when operating. The underlying methodology which supports this contribution is based on enacting models of system behaviour which encode important coordination aspects of manufacturing systems. The strategy for demonstrating the incorporation of formalism into the IMS life cycle was to enable the aggregation into a workbench of knowledge of 'what' the system is expected to achieve (i.e. the 'problems' to be addressed) and 'how' the system can achieve it (i.e. the possible 'solutions'). Within the workbench, such knowledge is represented through an amalgamation of business process modelling and object-oriented modelling approaches which, when adequately manipulated, can lead to business integration.
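
    To make the idea of enacting behavioural models concrete, here is a minimal sketch in the spirit of the Petri net formalism the thesis names: a transition fires only when its input places hold enough tokens, so a life-cycle phase can proceed only once its predecessor has produced its results. The places, transitions, and life-cycle phases shown are illustrative assumptions, not the workbench's actual models.

```python
# Transitions map input places (tokens consumed) to output places
# (tokens produced); the names sketch IMS life-cycle phases.
transitions = {
    "design":    ({"spec_ready": 1},  {"model_ready": 1}),
    "simulate":  ({"model_ready": 1}, {"validated": 1}),
    "configure": ({"validated": 1},   {"running": 1}),
}
marking = {"spec_ready": 1}  # initial marking of the net

def enabled(t, marking):
    needs, _ = transitions[t]
    return all(marking.get(p, 0) >= n for p, n in needs.items())

def fire(t, marking):
    needs, produces = transitions[t]
    for p, n in needs.items():
        marking[p] -= n
    for p, n in produces.items():
        marking[p] = marking.get(p, 0) + n

# Enact the model: fire each transition when it becomes enabled.
for t in ["design", "simulate", "configure"]:
    if enabled(t, marking):
        fire(t, marking)
        print("fired", t, "->", marking)
```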

    Software Patterns and Architecture Under Examination Hammer: An Approach to the Consolidation of Interdisciplinary Knowledge

    Software engineering is commonly perceived, and even defined, in terms of the application of scientific and technical knowledge to provide solutions to different challenges. The bright side of engineering in general is the continuous process of acquiring the knowledge and skills needed to develop and adjust various systems in the service of humankind. An important phase of this process is "architecting", which establishes the big picture of any intended system. While good architecture leads to successful systems, bad architecture can result in misfortune. In this thesis, my proposition is to investigate in depth how both the theoretical (academic) and industrial domains treat Software Patterns (SPs), Software Architecture (SA), and Software Architecture Evaluation (SAE) techniques. I argue that creating, evaluating, and documenting SPs and SA with no common guidelines, standards, and frameworks results in unused and conflicting information within their areas, which ultimately impacts the software engineering field, whereas the employment of interdisciplinary knowledge (such as SPs, modelling techniques, description languages, evaluation methods, standards, and frameworks) could elevate SA development and validation methodologies and increase their utilisation within the software engineering community. The goal is to help build better systems by developing suitable SAs and evaluating their qualities with proper methods and tools before further development, which should save time as well as money. After a long process of analysing the current state of the art, I introduce in this thesis novel findings concerning descriptions, relationships, documentation, and utilisation in relation to SA, SAE, and SPs, obtained through several investigatory techniques, including comparisons between reliable references, questionnaires, a field study, and a case study. The investigation of SPs resulted in the creation of a database as a partial solution to the confusion in the literature concerning their definitions, categorisations, and relationships with different quality attributes (QAs); the database presents this information in a form that supports comparisons between pattern references and facilitates pattern selection. The issues, gaps, limitations, inconsistencies, and conflicts within current SA, QAs, and SPs discovered by this study, such as their poor description and their neglect by developers during software development, have led to important recommendations as well as suggestions for future research. The required information from different sectors (government, academia and industry) regarding SPs, SA, SAE, and modelling languages was gathered and analysed through two surveys and a field study. The strong relationships and influences between the aforementioned areas were introduced and demonstrated through a case study analysis of the Real-time Control System (RCS) reference architecture, followed by a conceptual paradigm that aims to improve and generalise the Moreno et al. [2008] performance model. The outcomes of this thesis provide the basis for future work; information from different interdisciplinary domains was merged to form new concepts for SA evaluation, which are recommended for future study.
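
    The pattern database described above can be pictured as a small relational schema linking patterns to the quality attributes they promote or inhibit, so that pattern selection becomes a query. The sketch below is a guess at such a schema, with invented rows; it is not the thesis's actual database design.

```python
import sqlite3

# Illustrative schema: patterns, and their influence on quality
# attributes (QAs). All rows are assumptions for demonstration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pattern (name TEXT PRIMARY KEY, category TEXT);
CREATE TABLE qa_effect (
    pattern TEXT REFERENCES pattern(name),
    quality_attribute TEXT,
    influence TEXT CHECK (influence IN ('promotes', 'inhibits'))
);
""")
db.executemany("INSERT INTO pattern VALUES (?, ?)", [
    ("Layers", "structural"),
    ("Broker", "distribution"),
])
db.executemany("INSERT INTO qa_effect VALUES (?, ?, ?)", [
    ("Layers", "modifiability", "promotes"),
    ("Layers", "performance", "inhibits"),
    ("Broker", "interoperability", "promotes"),
])

# Pattern selection: which patterns promote a required QA?
for (name,) in db.execute(
    "SELECT pattern FROM qa_effect "
    "WHERE quality_attribute = ? AND influence = 'promotes'",
    ("modifiability",),
):
    print(name)
```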

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis, and its benefits and difficulties are compared with the finite-state-automata approach. The work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state-automata (FSA) approach of previous work. As a base implementation language model we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not previously been modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, MARFL, and JOOIP, bound by the underlying intensional programming paradigm.
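
    The core intensional idea, evaluating an expression "at" a multidimensional context and backtracing by shifting one dimension, can be approximated in a few lines of ordinary code. The following Python stand-in is a conceptual sketch only; Forensic Lucid itself is a Lucid dialect with its own syntax and eductive evaluation, and the evidence, dimensions, and values below are invented for illustration.

```python
# Evidence indexed by hierarchical context dimensions (illustrative).
evidence = {
    ("case1", "host_a", 0): "login",
    ("case1", "host_a", 1): "file_deleted",
    ("case1", "host_a", 2): "logout",
}

def at(expr, context):
    """Evaluate an intension (a function of context) at a context."""
    return expr(context)

def event(ctx):
    """An intensional expression: the event observed at a context."""
    return evidence.get((ctx["case"], ctx["host"], ctx["time"]))

ctx = {"case": "case1", "host": "host_a", "time": 1}
print(at(event, ctx))                 # file_deleted
print(at(event, {**ctx, "time": 0}))  # backtrace one step: login
```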

    Design: the quintessential business transaction

    The fundamental structures that underpin business activities must evolve and change in order to equip companies to thrive in a market characterised by increasing competition and instability. Incremental advances in applied computing technology and business methodologies which focus on improving one aspect of company operations ignore the need for an underlying structure and model through which to engage any and all functions in a consistent and integrated fashion. Indeed, many exacerbate the problem through closed architectures, isolationist views of entity data storage and rigid methodologies imposed on the company that employs them. The Product Model proposed here fulfils that role: it is a model of the processes and entities that a company uses to conduct its business, at all levels and across all departments. Two other concepts are introduced: product model data and the design history record. Product model data are the values of instances of product model entities and relations, created to represent a particular design, artefact or object. The design history record captures the data and functions used in a transaction and the order and context in which they are used. To exercise these concepts, a software suite was written: the Glasgow Utility for Integrated Design, Guide. It supports the definition of a product model and its subsequent use in the creation of product model data. Each interaction with the system is recorded, thus capturing the design history record, which can subsequently be processed to various advantageous ends. The major such uses are the re-use of part information in other designs and the extraction of design best practice with which to augment the company's design methodology. It is a comprehensive record, since all business processes are supported by, and can be transacted through, Guide. Guide has been used to validate the adequacy of the product model and has established many benefits through its use. Applications in many spheres are possible; engineering has been the primary focus for exemplars and case studies. The development was carried out under the scrutiny of constant validation and testing in live situations with several industrial partners. Guide is built on industry-standard tools and uses relational database technology to store frame-based representations of entities, methods and relationships. The design of project plans is carried out on the same platform used to support the project itself; the design data are not dissociated from the project-controlling mechanism. Resources, including staff, are engaged according to requirements, and audit mechanisms allow for constant re-evaluation of the project's development. Control and communication mechanisms support applications in an extended-enterprise environment and the distribution of resources that this entails.
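
    A design history record of this kind can be pictured as an append-only log of every transaction against the product model data, which can later be replayed to reuse part information in a new design. The sketch below is a minimal illustration under that reading; Guide itself stores frame-based representations in a relational database, and all names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ProductModel:
    data: dict = field(default_factory=dict)     # product model data
    history: list = field(default_factory=list)  # design history record

    def transact(self, function: str, entity: str, value: Any):
        """Apply a business transaction and log its order and context."""
        self.data[entity] = value
        self.history.append((function, entity, value))

pm = ProductModel()
pm.transact("define_part", "shaft.diameter_mm", 25)
pm.transact("revise_part", "shaft.diameter_mm", 30)

# Reuse: replay the recorded transactions onto a new design.
reused = ProductModel()
for function, entity, value in pm.history:
    reused.transact(function, entity, value)
print(reused.data)
```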

    Principles of sensorimotor control and learning in complex motor tasks

    The brain coordinates a continuous coupling between perception and action in the presence of uncertainty and incomplete knowledge about the world. This mapping is enabled by control policies, and motor learning can be viewed as the update of such policies on the basis of improving performance given some task objectives. Despite substantial progress in computational sensorimotor control and empirical approaches to motor adaptation, it remains unclear to date how the brain learns motor control policies while updating its internal model of the world. In light of this challenge, we propose a computational framework which employs error-based learning and exploits the brain's inherent link between forward models and feedback control to compute dynamically updated policies. The framework merges optimal feedback control (OFC) policy learning with a steady system identification of task dynamics so as to explain behavior in complex object manipulation tasks. Its formalisation encompasses our empirical findings that action is learned and generalised with regard both to a body-based and to an object-based frame of reference. Importantly, our approach successfully predicts how the brain makes continuous decisions for the generation of complex trajectories in an experimental paradigm with unfamiliar task conditions. A complementary method expands the motor learning perspective from the level of policy optimisation to the level of policy exploration; it employs computational analysis to reverse-engineer, and subsequently assess, the control process in a whole-body manipulation paradigm. Another contribution of this thesis is to relate motor psychophysics and computational motor control to their underlying neural foundation, a link which calls for further advancement in motor neuroscience and can inform our theoretical insight into sensorimotor processes in a context of physiological constraints. To this end, we design, build and test an fMRI-compatible haptic object manipulation system to relate closed-loop motor control studies to neurophysiology. The system is clinically adjusted and employed to host a naturalistic object manipulation paradigm with healthy human subjects and Friedreich's ataxia patients. We present methodology that elicits neuroimaging correlates of sensorimotor control and learning and extracts longitudinal neurobehavioral markers of disease progression (i.e. neurodegeneration). Our findings enhance the understanding of the sensorimotor control and learning mechanisms that underlie complex motor tasks. They furthermore provide a unified methodological platform to bridge the divide between behavior, computation and neural implementation, with promising clinical and technological implications (e.g. diagnostics, robotics, BMI).
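
    One ingredient of the proposed framework, error-based learning of an internal forward model alongside control, can be illustrated with a least-mean-squares identification of unknown scalar task dynamics from sensory prediction errors. This is a didactic sketch only, not the thesis's OFC formulation; the dynamics, learning rate, and exploration signal are assumptions.

```python
import numpy as np

# True (unknown) scalar task dynamics: x' = a*x + b*u.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
a_hat, b_hat = 0.0, 0.0   # internal forward-model estimates
eta = 0.1                 # learning rate

x = 0.0
for _ in range(500):
    u = rng.normal()                  # exploratory motor command
    x_next = a_true * x + b_true * u  # observed sensory outcome
    pred = a_hat * x + b_hat * u      # forward-model prediction
    err = x_next - pred               # sensory prediction error
    a_hat += eta * err * x            # error-based (LMS) update
    b_hat += eta * err * u
    x = x_next

print(round(a_hat, 2), round(b_hat, 2))  # approaches 0.9, 0.5
```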

    Dependable compositions : a formal approach

    Design processes for most engineering disciplines are based on component reuse. In much the same way as the need for customizable reuse of software fueled the growth of object-oriented programming languages over module-based languages, the same driving force behind component-based solutions is leading to object-oriented languages being transcended by component-based composition languages. Existing declarative programming languages are ideally suited to the construction of software components, but are inappropriate for specifying compositions of components in a high-level manner. Indeed, several composition environments exist that are built on top of object-oriented languages, though they fail to supply the level of abstraction required to specify compositions of components. This is particularly true when the components are black boxes: in order to reuse a black-box component, an accurate and unambiguous description of the component's functionality must exist, and it is doubtful that natural language can fulfil this requirement. This thesis advocates a formal approach to specifying a component and demonstrates that this approach aids in the composition and verification of component-based systems. The thesis presents a general solution to the problem by defining the formal semantics of a composition of components. Building on this work, a formal definition of exceptional component behaviour is provided, along with formal reasoning about component dependability. These then form the basis for the formal definition of a composition specification language and a theoretical declarative compositional programming language. Such a language would afford the programmer the tools required to construct a dynamic composition of components.

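    A minimal flavour of composition checking over formally specified black-box components: each component declares the interfaces it provides and requires, and a composition is accepted only if every requirement is satisfied by some peer. The sketch below is far simpler than the thesis's formal semantics, and all interface names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    provides: frozenset  # interfaces this black box offers
    requires: frozenset  # interfaces it needs from its context

def compose(components):
    """Verify that every required interface is provided by some peer."""
    provided = set().union(*(c.provides for c in components))
    for c in components:
        missing = c.requires - provided
        if missing:
            raise ValueError(f"{c.name} is missing {sorted(missing)}")
    return provided  # the composite's provided interfaces

lexer = Component("lexer", frozenset({"Tokens"}), frozenset())
parser = Component("parser", frozenset({"AST"}), frozenset({"Tokens"}))
print(sorted(compose([lexer, parser])))  # composition is well-formed
```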

    JTorX: Exploring Model-Based Testing

    The overall goal of the work described in this thesis is: "To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context." From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool-user questionnaire. We describe the overall architecture of our tool and discuss the three usage scenarios that are necessary to fulfil the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is only derived when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional guidance information to steer the derivation through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test; we refer to this latter functionality as the "adapter". In the description of the test derivation engine, we consider the same three usage scenarios and discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances and then generalise these to a general adapter design. We conclude with a description of extensions that deal with symbolic treatment of data and time.
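
    The random on-line scenario can be sketched in a few lines: at each step the tool derives the next test primitive from the model only when execution needs it, stimulating the system under test on inputs and checking its outputs, with the adapter mediating the connection. The model, adapter, and labels below are illustrative assumptions, not JTorX's actual interfaces.

```python
import random

# Illustrative model: state -> list of (label, kind, next_state).
model = {
    "idle": [("coin", "input", "paid")],
    "paid": [("button", "input", "busy")],
    "busy": [("coffee", "output", "idle")],
}

def sut_adapter(history):
    """Stand-in adapter: here, a correct implementation of the model."""
    return "coffee"  # the only output this toy model ever produces

state, history = "idle", []
for _ in range(6):  # random walk: derive each step only when needed
    label, kind, nxt = random.choice(model[state])
    if kind == "input":
        history.append(label)            # stimulate the SUT
    else:
        observed = sut_adapter(history)  # observe the SUT
        assert observed == label, f"fail: expected {label}, got {observed}"
    state = nxt
print("test passed:", history)
```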