
    Systematic Testing of Embedded Automotive Software - The Classification-Tree Method for Embedded Systems (CTM/ES)

    The software embedded in automotive control systems increasingly determines the functionality and properties of present-day motor vehicles. The development and test process for these systems and their embedded software is becoming the limiting factor. While these challenges are met on the development side by employing model-based specification, design, and implementation techniques [KCF+04], satisfactory solutions on the testing side are slow to arrive. There is considerable room for improvement, especially with regard to the systematic selection (test design) and the description of test scenarios. Thus, a main goal is to reduce these deficits by creating an efficient procedure for the selection and description of test scenarios for embedded automotive software and its integration into the model-based development process. The realization of this idea involves the combination of a classical software testing procedure with a technology, prevalent in the automotive industry, that is used for the description of time-dependent stimulus signals. The result of this combination is the classification-tree method for embedded systems, CTM/ES [Con04]. The classification-tree method for embedded systems complements model-based development by employing a novel approach to the systematic selection and description of test scenarios for the software embedded in control systems. CTM/ES allows for the graphical representation of time-variable test scenarios on different levels of abstraction: a problem-oriented, compact representation, adequate for a human tester and offering a high potential for reuse, is gradually transformed into a solution-oriented technical representation suited for stimulating the test object. The CTM/ES notation facilitates a consistent representation of test scenarios that may result from different test design techniques. The test design technique on which this method is primarily based is a data-oriented partitioning of the input domain into equivalence classes. Secondary test design techniques are, for instance, the testing of specific values (or value courses) and requirements-based testing. Domain-specific application pragmatics in the form of agendas support the methodical execution of individual test activities and the interaction of different test design techniques. The methodology description leads up to an effective test strategy for model-based testing, combining the classification-tree method for embedded systems with structural testing on the model level and accommodating the different forms of representation of the test object during model-based development. Systems which have been developed in a model-based way can be tested systematically and efficiently by means of the CTM/ES and the tools based thereon, such as the classification-tree editor for embedded systems CTE/ES [CTE/ES] and the model-based test environment MTest [LBE+04, MTest].
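
As a rough illustration of the equivalence-class idea underlying CTM/ES (not the CTE/ES or MTest tooling), the following Python sketch partitions two input signals into classes, combines one representative per classification, and renders the result as a piecewise-constant stimulus; the classifications, class boundaries, and timing are invented for the example.

```python
# Minimal sketch of classification-tree-style test design for time-dependent
# stimuli. All classifications, representative values, and hold times are
# hypothetical and do not come from the CTM/ES publications.
from itertools import product

# Classification tree as a nested dict: classification -> {class name: representative value}
classifications = {
    "vehicle_speed_kmh": {"standstill": 0.0, "urban": 45.0, "highway": 130.0},
    "pedal_position_pct": {"released": 0.0, "partial": 40.0, "full": 100.0},
}

def test_scenarios():
    """Combine one representative value per classification (cartesian product)."""
    names = list(classifications)
    for combo in product(*(classifications[n].items() for n in names)):
        yield {name: value for name, (_cls, value) in zip(names, combo)}

def as_stimuli(scenarios, hold_s=2.0):
    """Concatenate scenarios into piecewise-constant signals: {signal: [(t, value), ...]}."""
    stimuli = {name: [] for name in classifications}
    for i, scenario in enumerate(scenarios):
        for name, value in scenario.items():
            stimuli[name].append((i * hold_s, value))
    return stimuli

print(as_stimuli(list(test_scenarios())))
```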

    Towards a Model-Centric Software Testing Life Cycle for Early and Consistent Testing Activities

    The constant improvement of available computing power enables the accomplishment of more and more complex tasks. The resulting increase in the complexity of the hardware and software solutions that realize the desired functionality requires a constant improvement of the development methods used. On the one hand, the share of agile development practices as well as test-driven development has increased over the last decades. On the other hand, this trend results in the need to reduce complexity with suitable methods. At this point the concept of abstraction comes into play, which manifests itself in model-based approaches such as MDSD or MBT. The thesis is motivated by the fact that the earliest possible detection and elimination of faults has a significant influence on product costs. Therefore, a holistic approach is developed in the context of model-driven development which allows testing to be applied already in early phases and especially on the model artifacts, i.e. it provides a shift left of the testing activities. To comprehensively address the complexity problem, a model-centric software testing life cycle is developed that maps the process steps and artifacts of classical testing to the model level. The conceptual basis is first created by putting the available model artifacts of all domains into context. In particular, structural mappings are specified across the included domain-specific model artifacts to establish a sufficient basis for all the process steps of the life cycle. In addition, a flexible metamodel including operational semantics is developed, which enables experts to carry out an abstract test execution on the model level. Based on this, approaches for test case management, automated test case generation, evaluation of test cases, and quality verification of test cases are developed. In the context of test case management, a mechanism is realized that enables the selection, prioritization, and reduction of Test Model artifacts usable for test case generation, i.e. a targeted set of test cases is generated that satisfies quality criteria such as coverage at the model level. These quality requirements are verified by a mutation-based analysis of the identified test cases, which builds on the model basis. As the last step of the model-centric software testing life cycle, two approaches are presented that allow an abstract execution of the test cases in the model context through structural analysis and a form of model interpretation concerning data-flow information. All the approaches are placed in the context of related work and examined for their feasibility by means of a prototypical implementation within the Architecture And Analysis Framework. Subsequently, the described approaches and their concepts are evaluated qualitatively as well as quantitatively. Moreover, case studies show the practical applicability of the approach.
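
The mutation-based assessment of test-case quality mentioned above can be illustrated with a small, self-contained sketch; the toy behavioural model, the mutants, and the test cases below are hypothetical stand-ins for the thesis' model-level artifacts.

```python
# Minimal sketch of the mutation-analysis idea used to judge test-set quality
# (not the thesis' metamodel or tooling): mutate a toy behavioural model,
# re-run the test cases, and report the mutation score.

def model(speed, limit):
    """Toy behaviour under test: warn when speed exceeds the limit."""
    return speed > limit

# Hand-written mutants, each a small behavioural deviation of the model.
mutants = [
    lambda speed, limit: speed >= limit,   # boundary mutant
    lambda speed, limit: speed < limit,    # negated condition
    lambda speed, limit: False,            # constant output
]

# Test cases: (inputs, expected output derived from the original model).
test_cases = [((50, 60), False), ((70, 60), True), ((60, 60), False)]

def mutation_score(mutants, tests):
    """Fraction of mutants killed, i.e. detected by at least one failing test."""
    killed = sum(
        any(m(*inp) != expected for inp, expected in tests) for m in mutants
    )
    return killed / len(mutants)

print(f"mutation score: {mutation_score(mutants, test_cases):.2f}")  # 1.00 here
```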

    Kinetic research on heterogeneously catalysed processes: a questionnaire on the state-of-the-art in industry

    On the initiative of the Working Party 'Chemical Engineering in the Applications of Catalysis' of the European Federation of Chemical Engineering, an assessment of the issues in the determination and application of kinetic data within the European industry was performed. The basis of the analysis consisted of a questionnaire put together by researchers from Dow, DSM, Shell and Eindhoven University of Technology. The 24 companies that responded to the questionnaire can be classified into four groups: chemical, oil, engineering contractors and catalyst manufacturers. From the overall input it appears that there are three, equally important, utilisation areas for kinetic data: process development, process optimisation and catalyst development. There is a wide variety of kinetic data sources. Most of the respondents make use of test units which were primarily designed for development and optimisation. Avoiding transport limitations is not always taken care of, certainly in the case of short-range projects or for complex feedstocks. With respect to the modelling approaches, a common philosophy is 'as simple as possible'. Most of the respondents state that 'in principle' one should strive for intrinsic kinetics, but the majority nevertheless does not, for various reasons, separate all transport phenomena from the reaction kinetics. Kinetic models are mostly simple first- or nth-order or Langmuir-Hinshelwood type expressions. More complex kinetic models are scarcely used. Three areas were frequently identified as offering opportunities for improvement. Gathering of kinetic data is too costly and time-consuming. There is no systematic approach at all for the determination and application of kinetics in the case of unstable catalytic performance. Furthermore, the software available for the regression of kinetic data to rate equations based on mechanistic schemes, as well as software to model reactors, is insufficiently user friendly. The majority of the respondents state that the problems indicated should be solved by cooperation, e.g. between companies, between industry and academia, and between the catalysis and the chemical engineering communities. A workshop on the above topics was held in December 1996 with 15 companies and 6 academics attending. More information can be obtained from the secretariat of the Working Party.
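
As an illustration of the kind of rate expression and regression task the respondents refer to, the sketch below fits a single-site Langmuir-Hinshelwood rate equation to synthetic data with SciPy; the parameter values and the data are invented for the example.

```python
# Illustrative sketch only: a single-site Langmuir-Hinshelwood rate expression
# r = k*K*C / (1 + K*C), regressed to synthetic concentration/rate data.
import numpy as np
from scipy.optimize import curve_fit

def lh_rate(C, k, K):
    """Langmuir-Hinshelwood rate: surface reaction limited, one adsorbing species."""
    return k * K * C / (1.0 + K * C)

# Synthetic "measurements": generated from k = 2.0, K = 0.5 with a little noise.
C_data = np.linspace(0.1, 10.0, 20)
rng = np.random.default_rng(0)
r_data = lh_rate(C_data, 2.0, 0.5) * (1 + 0.02 * rng.standard_normal(C_data.size))

(k_fit, K_fit), _cov = curve_fit(lh_rate, C_data, r_data, p0=[1.0, 1.0])
print(f"fitted k = {k_fit:.3f}, K = {K_fit:.3f}")
```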

    Multi-dimensional scaling and MODELLER-based evolutionary algorithms for protein model refinement

    "December 2013.""A Thesis Presented to the Faculty of the Graduate School at the University of Missouri--Columbia In Partial Fulfillment of the Requirements for the Degree Master of Science."Thesis supervisor: Dr. Yi Shang.To computationally obtain an accurate prediction of the three-dimensional structure of a protein from its primary sequence is one of the most important problems in bioinformatics and has been actively researched for many years. Although a number of software packages have been developed and they sometimes perform well on template-based modeling, further improvement is needed for practical use. Model refinement is a step in the prediction process, in which improved structures are constructed based on a pool of initially generated models. Since the refinement category being added to the Critical Assessment of Structure Prediction (CASP) competition in 2008, CASP results show that it is a challenge for existing model refinement methods to improve model quality consistently. This project focuses on evolutionary algorithms for protein model refinement. Three new algorithms have been developed, in which multidimensional scaling (MDS), MODELLER, and a hybrid of both are used as crossover operators, respectively. The MDS-based method takes a purely geometrical approach and generates a child model by combining the contact maps of multiple parents. The MODELLER-based method takes a statistical and energy minimization approach and uses the remodeling module in MODELLER program to generate new models from multiple parents. The hybrid method first generates models using the MDS-based method and then run them through the MODELLER-based method, aiming at combining the strength of both. Promising IX results have been obtained in experiments using CASP datasets. The MDS-based method improved the best of a pool of predicted models in terms of the global distance test score (GDT-TS) in 9 out of 16 test targets. For instance, for target T0680, the GDT-TS of a refined model is 0.833, much better than 0.763, the value of the best model in the initial pool.Includes bibliographical references (pages 43-48)

    Model-based traceability

    Many organizations invest considerable cost and effort in building traceability matrices in order to comply with regulatory requirements or process improvement initiatives. Unfortunately, these matrices are frequently left unused, and project stakeholders continue to perform critical software engineering activities such as change impact analysis or requirements satisfaction assessment without the benefit of the established traces. A major reason for this is the lack of a process framework and associated tools to support the use of these trace matrices in a strategic way. In this position paper, we present a model-based approach designed to help organizations gain full benefit from the traces they develop and to allow project stakeholders to plan, generate, and execute trace strategies in a graphical modeling environment. The approach includes a standard notation for capturing strategic traceability decisions in the form of a graph, as well as notation for modeling reusable trace queries using augmented sequence diagrams. All of the model elements, including project-specific data, are represented using XML. The approach is demonstrated through examples from a traffic simulator project composed of requirements, UML class diagrams, code, test cases, and test case results.
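
A minimal sketch of how XML-represented trace links might be queried for change impact analysis is shown below; the element and attribute names are invented for illustration and do not reflect the paper's actual schema.

```python
# Hypothetical sketch of trace links captured in XML and queried for change
# impact analysis, in the spirit of the paper's XML-represented model elements.
import xml.etree.ElementTree as ET

TRACES = """
<traceModel project="traffic-simulator">
  <trace source="REQ-12" target="UC-03"  type="refines"/>
  <trace source="UC-03"  target="TC-117" type="verifiedBy"/>
  <trace source="REQ-12" target="ClassDiagram.Vehicle" type="realizedBy"/>
</traceModel>
"""

def impacted_by(root, artifact):
    """Follow trace links forward to find artifacts affected by a change."""
    hits, frontier = set(), {artifact}
    while frontier:
        nxt = {t.get("target") for t in root.iter("trace") if t.get("source") in frontier}
        frontier = nxt - hits
        hits |= nxt
    return hits

root = ET.fromstring(TRACES)
print(impacted_by(root, "REQ-12"))   # {'UC-03', 'TC-117', 'ClassDiagram.Vehicle'}
```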

    A Model-Driven Approach for Business Process Management

    Business Process Management is a common mechanism recommended by a large number of standards for the management of companies and organizations. In software companies this practice is increasingly accepted, and companies have to adopt it if they want to be competitive. However, the effective definition of these processes and, above all, their maintenance and execution are not always easy tasks. This paper presents an approach based on the Model-Driven paradigm for Business Process Management in software companies. This solution offers a suitable mechanism that was implemented successfully in different companies with a tool named NDTQ-Framework. Ministerio de Educación y Ciencia TIN2010-20057-C03-02. Junta de Andalucía TIC-578.

    Evaluating the Impact of Critical Factors in Agile Continuous Delivery Process: A System Dynamics Approach

    Continuous Delivery is aimed at the frequent delivery of good-quality software in a speedy, reliable and efficient fashion, with strong emphasis on automation and team collaboration. However, even with this new paradigm, repeatability of project outcome is still not guaranteed: project performance varies due to the various interacting and inter-related factors in the Continuous Delivery 'system'. This paper presents results from the investigation of various factors, in particular agile practices, on the quality of the developed software in the Continuous Delivery process. Results show that customer involvement and the cognitive ability of the QA have the most significant individual effects on the quality of software in Continuous Delivery.
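
A toy system-dynamics sketch of the kind of factor analysis described above is given next; the single defect stock, the factor weights, and the equations are hypothetical and are not the paper's model.

```python
# Toy system-dynamics sketch (not the paper's model): one "undetected defects"
# stock whose inflow and outflow are modulated by two of the studied factors.

def simulate(customer_involvement, qa_ability, sprints=20, dt=1.0):
    """Euler-integrate the defect stock over a number of sprints; return trajectory."""
    defects = 50.0                     # initial undetected defects (stock)
    history = [defects]
    for _ in range(sprints):
        injection = 10.0 * (1.0 - 0.5 * customer_involvement)   # inflow per sprint
        detection = 0.3 * qa_ability * defects                  # outflow per sprint
        defects = max(defects + dt * (injection - detection), 0.0)
        history.append(defects)
    return history

# Compare a low-involvement/low-ability team against a high one (0..1 scales).
print(simulate(0.2, 0.4)[-1])   # higher residual defect level
print(simulate(0.9, 0.9)[-1])   # lower residual defect level
```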