    Lifecycle and generational application of automated updates to MDA EIS applications

    EIS applications are complex, and upgrades present significant costs and issues that can lead user organisations to defer or abandon them, missing out on the upgrade's business benefits. Our ongoing development of temporal meta-data EIS applications [1] seeks to avoid or minimise the majority of these upgrade issues by standardising all update procedures into a set or stream of meta-data changes that is applied sequentially, implementing each individual meta-data change in order for all changes between the previous and current meta-data models. This update process removes the need for vendors to produce version-specific update programs and fully automates the end user's meta-data EIS application update process. Collision detection with third-party customisations to the meta-data EIS application, known as Variant Logic, is greatly simplified because any potential conflict is precisely identified in advance, reducing the compatibility effort for the customisations and ensuring their timely availability for inclusion in the streamlined meta-data update. With the meta-data update process, the effort for major EIS updates can be drastically reduced, often from months down to days or less.
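    As a concrete illustration of the mechanism described above, the Python sketch below applies an ordered stream of meta-data changes and performs the advance collision check against Variant Logic customisations. The ModelChange and VariantLogic structures and the element-id conflict rule are illustrative assumptions, not the paper's actual data model.

```python
# Minimal sketch, assuming meta-data changes arrive as an ordered stream and
# customisations are keyed by the meta-data element they override.
from dataclasses import dataclass

@dataclass
class ModelChange:
    seq: int          # position in the vendor's update stream (hypothetical)
    element_id: str   # meta-data element this change redefines
    payload: dict     # the new definition for that element

@dataclass
class VariantLogic:
    owner: str        # third party that authored the customisation
    element_id: str   # meta-data element the customisation overrides

def detect_collisions(stream, variants):
    """Identify in advance every customisation a pending change would touch."""
    touched = {c.element_id for c in stream}
    return [v for v in variants if v.element_id in touched]

def apply_stream(model, stream):
    """Apply each meta-data change in sequence order; the ordered stream itself
    is the upgrade, so no version-specific update program is needed."""
    for change in sorted(stream, key=lambda c: c.seq):
        model[change.element_id] = change.payload
    return model

stream = [ModelChange(2, "order.form", {"fields": 12}),
          ModelChange(1, "order.table", {"columns": 9})]
variants = [VariantLogic("acme", "order.form")]
print(detect_collisions(stream, variants))  # flags acme's customisation early
upgraded = apply_stream({}, stream)
```

    Because the stream is ordered and declarative, the same apply routine serves every upgrade, which is what removes the need for version-specific update programs.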

    Temporal meta-model framework for Enterprise Information Systems (EIS) development

    This thesis has developed a Temporal Meta-Model Framework for semi-automated Enterprise System Development, which can help drastically reduce the time and cost to develop, deploy and maintain Enterprise Information Systems throughout their lifecycle. It proposes that analysis and requirements gathering can also accomplish the bulk of the design phase, with the results stored in a suitable model that can then be executed automatically given a set of specific runtime components.

    Testing of xtUML Models across Auto-Reflexive Software Architecture

    Applying MDA to software development enables synchronization between system models and the corresponding source files used to build the executable version of a software system. Because parts of the code are often modified manually without equivalent changes to the connected models, there is no guarantee that the output of the build process will be consistent with the relevant design and implementation models. The ability to generate source files from the models is a necessary, but not sufficient, condition for developing and modifying software systems synchronously with changes in all related models. A safer approach is to build the target application with an automated build process that nests consistency-verification steps for all critical models and related source files and that uses model compilers. This article describes a method and tools for extending the build process of the target system with special files specifying the dependencies between models and source files. Such dependencies represent the core of the critical knowledge, and this knowledge can be made an integral part of the proposed new software architecture.
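    As a rough sketch of the nested consistency-verification step described above, the check below fails the build when a generated source file is newer than the model it was derived from, a likely sign of a manual edit. The model_deps.json name, its layout, and the timestamp heuristic are assumptions; the article's own dependency-specification format is not reproduced here.

```python
# Hypothetical consistency gate for an automated build, assuming a dependency
# file that maps each generated source file to its originating model.
import json
import os

def verify_consistency(dependency_file="model_deps.json"):
    """Abort the build if any generated source is newer than its model."""
    with open(dependency_file) as f:
        deps = json.load(f)  # e.g. {"src/Order.java": "models/order.xtuml"}
    stale = [src for src, model in deps.items()
             if os.path.getmtime(src) > os.path.getmtime(model)]
    if stale:
        raise SystemExit(f"Build aborted; edited without model update: {stale}")

# Intended to run as a nested build step, before code generation/compilation.
```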

    Framework for the Integration of Mobile Device Features in PLM

    Currently, companies cover their business processes with stationary workstations, while mobile business applications play only a limited role. Companies can cover their overall business processes more time-efficiently and cost-effectively by integrating mobile users into workflows using mobile device features. The objective is a framework that can be used to model and control business applications for PLM processes using mobile device features, enabling an entirely new user experience.

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach in which software models, rather than other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite.

    The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML 2.x, it has new Petri-Net-like semantics. It has a wide application scope, including embedded, workflow and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of ADs in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is transforming a UML model into an executable formal model. The thesis proposes a three-step transformation methodology for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support.

    Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of ADs suit the modeling of both sequential and concurrent systems. The thesis presents a novel technique for generating test cases from ADs using a stochastic algorithm. To determine whether the generated test suite is adequate, two adequacy analysis techniques are proposed, based on structural coverage and on mutation. For structural coverage, two separate criteria are also proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspectives. Mutation analysis is a fault-based technique for determining whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model.

    Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases; it is shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based minimization technique is developed to address it, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed solution is extended to include multi-objective minimization. Because redundancy is contextual, different criteria and their combinations can significantly change the solution test suite; the last part of the thesis therefore investigates multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case-study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
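    The minimization step above is framed as a combinatorial optimization problem solved with evolutionary computation. As a baseline illustration of the problem's shape only (not of the thesis's evolutionary algorithm), the sketch below uses a simple greedy heuristic: repeatedly pick the test case covering the most not-yet-covered requirements. The coverage sets are hypothetical.

```python
# Greedy baseline for test suite minimization, assuming each test case is
# summarised by the set of coverage requirements it satisfies.
def greedy_minimize(coverage):
    """coverage: dict mapping test-case name -> set of covered requirements."""
    required = set().union(*coverage.values())
    selected, covered = [], set()
    while covered != required:
        # pick the test adding the most not-yet-covered requirements
        best = max(coverage, key=lambda t: len(coverage[t] - covered))
        selected.append(best)
        covered |= coverage[best]
    return selected

suite = {"t1": {"a", "b"}, "t2": {"b"}, "t3": {"c"}, "t4": {"a", "c"}}
print(greedy_minimize(suite))  # -> ['t1', 't3']; t2 and t4 are redundant
```

    Under a single criterion this greedy pass already exposes the redundancy; the multi-objective setting the thesis studies arises when several coverage or cost criteria must be balanced at once.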

    Proceedings of the 4th International Conference on Principles and Practices of Programming in Java

    This book contains the proceedings of the 4th International Conference on Principles and Practices of Programming in Java. The conference focuses on the different aspects of the Java programming language and its applications.

    Improving the transition of a successful tender from estimating to project management phases

    Research indicates that in Australia there is a gap in knowledge regarding the specific set of processes construction companies should employ for gathering and transferring tacit project knowledge between estimating and project management teams at tender handover. The project sponsor has identified that its current processes for capturing, codifying, and transferring project knowledge from estimating to project management teams are out of date and do not reflect the current practices of the estimating or construction teams, and has flagged this as an area for process updates and improvements. This project aimed to identify and design process improvements that would enable the effective and efficient management of knowledge, and its transfer between estimating and project management teams.

    An extensive literature review was undertaken to understand the relevant literature on the topic. It identified that there are proprietary software systems available to assist in managing project information, but that these have limitations, specifically for the capture or management of tacit knowledge. It also identified that effective knowledge management is critical to the success or failure of construction projects. Value stream mapping was identified as an appropriate lean construction tool to form the basis of the project methodology for improving the transition of a successful tender from the estimating to the project management phase. The five phases of value stream mapping are initial analysis, mapping the current state, mapping the future state, developing the action plan, and testing.

    Once the inefficiencies in the existing system were identified, potential improvements were proposed. A revised tender management procedure and flowchart were designed, working in conjunction with an integrated workflow solution (IWS). The proposed IWS was formatted as an Excel spreadsheet (named the Tender Knowledge Register, or TKR), with a tab for each phase of the tender process and prompts to assist the estimating team in identifying and codifying reusable project knowledge. All captured knowledge was filed electronically alongside the tender documentation, with a guideline provided for transferring all knowledge and information to the project management teams upon successful conversion of a tender to a project.

    The proposed solution was evaluated through an analysis of quantitative survey results from the key project stakeholders to determine the system's relevance both to the project sponsor and to the wider construction industry. Results were measured against the key performance indicators of increased efficiency and effectiveness (quality of information), and were generally positive. The estimating team considered the revised processes more relevant to their current practices than the existing processes, evident in the increase in the overall relevance rating from 31.1% to 71.1%. The time required to complete the processes was similar: an estimated 13 hours for the existing process versus 13.77 hours for the revised one, an increase that did not dramatically affect the estimating team's overall ratings of the system (average 5.94/10). The project teams rated the system higher overall, with an average rating of 6.47/10; it is proposed that this higher rating reflects the system being tailored to their knowledge and information requirements at project startup. The project teams considered the information included in the TKR very important: the overall average importance rating of captured information and knowledge, and of its presentation within the TKR, was 7.8/10.

    The scope of the project was limited to the knowledge and information transfer between estimating and project management teams, as focusing on more than one stage of the project lifecycle would have exceeded the scope and intent of this undergraduate research project. Response rates to the surveys were lower than expected, resulting in a higher margin of error (ranging from 13.56% to 22.89% at 95% confidence) in the accuracy of responses. It is recommended that the proposed systems undergo further testing by real-time application to tenders, which will aid in measuring the effectiveness, efficiency, and quality of the information and knowledge transfer. Further testing of the process (potentially with compulsory participation from stakeholders), combined with minor refinements, has the potential to produce a powerful knowledge management system that can be implemented to continuously improve knowledge transfer between estimating and project management teams. The system would fulfil the requirements of the sponsor and, with minor adjustments, of other organisations in the industry.
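    For context on the confidence figures quoted above, a survey margin of error at 95% confidence is conventionally computed as in the sketch below. The response count n and population size N are hypothetical, chosen only to show the calculation's shape; the report's actual sample sizes are not given here.

```python
# Conventional margin-of-error calculation with a finite-population
# correction; n and N below are hypothetical, not the report's figures.
import math

def margin_of_error(n, N, z=1.96, p=0.5):
    """95% margin of error for a proportion from n of N possible respondents."""
    se = math.sqrt(p * (1 - p) / n)        # worst case at p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction
    return z * se * fpc

print(f"{margin_of_error(n=12, N=20):.2%}")  # ~18.36%: small samples, wide margins
```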

    Multi-attribute tradespace exploration for survivability

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 235-249). By Matthew G. Richards.

    Survivability is the ability of a system to minimize the impact of a finite-duration disturbance on value delivery (i.e., stakeholder benefit at cost), achieved through (1) the reduction of the likelihood or magnitude of a disturbance, (2) the satisfaction of a minimally acceptable level of value delivery during and after a disturbance, and/or (3) a timely recovery. Traditionally specified as a requirement in military systems, survivability is an increasingly important consideration for all engineering systems given the proliferation of natural and artificial threats. Although survivability is an emergent system property that arises from interactions between a system and its environment, conventional approaches to survivability engineering are reductionist in nature. Furthermore, current methods neither accommodate dynamic threat environments nor facilitate stakeholder communication for conducting trade-offs among system lifecycle cost, mission utility, and operational survivability.

    Multi-Attribute Tradespace Exploration (MATE) for Survivability is introduced as a system analysis methodology to improve the generation and evaluation of survivable alternatives during conceptual design. MATE for Survivability applies decision theory to the parametric modeling of thousands of design alternatives across representative distributions of disturbance environments. To improve the generation of survivable alternatives, seventeen empirically-validated survivability design principles are introduced. The general set of design principles allows the consideration of structural and behavioral strategies for mitigating the impact of disturbances over the lifecycle of a given encounter. To improve the evaluation of survivability, value-based metrics are introduced for the assessment of survivability as a dynamic, continuous, and path-dependent system property. Two of these metrics, time-weighted average utility loss and threshold availability, are used to evaluate survivability based on the relationship between stochastic utility trajectories of system state and stakeholder expectations across nominal and perturbed environments. Finally, the survivability "tear(drop)" tradespace is introduced to enable the identification of inherently survivable architectures that efficiently balance performance metrics of cost, utility, and survivability. The internal validity and prescriptive value of the design principles, metrics, and tradespaces comprising MATE for Survivability are established through applications to the designs of an orbital transfer vehicle and a satellite radar system.
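    As a minimal sketch of the "time-weighted average utility loss" metric named above, the function below averages the shortfall of delivered utility relative to a nominal level over a sampled trajectory. The uniform time step and the trajectory values are assumptions for illustration; the thesis's stochastic, path-dependent formulation is richer than this.

```python
# Illustrative computation of time-weighted average utility loss over a
# discretised utility trajectory; values and time step are assumed.
def time_weighted_avg_utility_loss(trajectory, nominal, dt=1.0):
    """trajectory: delivered utility sampled at each time step (0..1 scale)."""
    total_time = dt * len(trajectory)
    loss = sum(max(nominal - u, 0.0) * dt for u in trajectory)
    return loss / total_time

# A disturbance at t=3 degrades utility, followed by a gradual recovery.
traj = [0.9, 0.9, 0.9, 0.3, 0.4, 0.6, 0.8, 0.9]
print(time_weighted_avg_utility_loss(traj, nominal=0.9))  # 0.1875
```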