
    VIRTUE : integrating CFD ship design

    Novel ship concepts, increasing size and speed, and strong competition in the global maritime market require that a ship's hydrodynamic performance be studied at the highest level of sophistication. All hydrodynamic aspects need to be considered so as to optimize the trade-offs between resistance, propulsion (and cavitation), seakeeping, and manoeuvring. VIRTUE takes a holistic approach to hydrodynamic design and focuses on integrating advanced CFD tools in a software platform that can control and launch multi-objective hydrodynamic design projects. In this paper, current practice, future requirements, and a potential software integration platform are presented. The necessity of parametric modelling as a means of effectively generating and efficiently varying geometry, and the added value of advanced visualization, are discussed. An illustrative example, a container carrier investigation, is given as a test case, and the requirements and a proposed architecture for the platform are outlined.
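
    The paper does not detail VIRTUE's parametric models, but the idea of generating and varying geometry from a handful of parameters can be illustrated with the classical Wigley hull, whose half-breadths follow a closed-form expression. The Python sketch below (NumPy assumed) shows how sweeping a single beam-to-length ratio spawns hull variants for a CFD campaign; the hull form and the parameter sweep are illustrative, not taken from VIRTUE.

        import numpy as np

        def wigley_offsets(L, B, T, nx=21, nz=6):
            """Half-breadth offsets y(x, z) of a Wigley hull:
            y = (B/2) * (1 - (2x/L)^2) * (1 - (z/T)^2)."""
            x = np.linspace(-L / 2, L / 2, nx)   # stations along the length
            z = np.linspace(0.0, T, nz)          # waterlines down to the draught
            X, Z = np.meshgrid(x, z)
            Y = (B / 2) * (1 - (2 * X / L) ** 2) * (1 - (Z / T) ** 2)
            return X, Z, Y

        # Vary one parameter to generate a family of candidate designs.
        for b_over_l in (0.10, 0.12, 0.15):
            X, Z, Y = wigley_offsets(L=100.0, B=b_over_l * 100.0, T=6.0)
            print(f"B/L={b_over_l:.2f}: max half-breadth {Y.max():.2f} m")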

    IMITATOR II: A Tool for Solving the Good Parameters Problem in Timed Automata

    We present here Imitator II, a new version of Imitator, a tool implementing the "inverse method" for parametric timed automata: given a reference valuation of the parameters, it synthesizes a constraint such that, for any valuation satisfying this constraint, the system behaves the same as under the reference valuation in terms of traces, i.e., alternating sequences of locations and actions. Imitator II also implements the "behavioral cartography algorithm", allowing us to solve the following good parameters problem: find a set of valuations within a given bounded parametric domain for which the system behaves well. We present new features and optimizations of the tool, and give results of applications to various examples of asynchronous circuits and communication protocols. (Comment: In Proceedings INFINITY 2010, arXiv:1010.611)
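
    As a rough illustration of the behavioral cartography algorithm described above, the following Python sketch covers the integer points of a bounded parameter domain with "tiles" returned by inverse-method calls. The function inverse_method is a hypothetical placeholder, not Imitator II's actual interface: it is assumed to take a parameter valuation and return a predicate covering all valuations with the same trace set.

        from itertools import product

        def cartography(inverse_method, bounds):
            """Tile the integer points of a bounded parameter domain.

            bounds: dict mapping parameter name -> (lo, hi) integer range.
            Returns the list of synthesized tiles (constraints as predicates).
            """
            names = sorted(bounds)
            grid = product(*(range(lo, hi + 1) for lo, hi in (bounds[n] for n in names)))
            tiles = []
            for point in grid:
                valuation = dict(zip(names, point))
                if any(tile(valuation) for tile in tiles):
                    continue  # point already covered by an earlier tile
                tiles.append(inverse_method(valuation))
            return tiles

    Checking each resulting tile against the desired property then partitions the domain into "good" and "bad" parameter regions, which is how the good parameters problem is answered.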

    Energy efficiency parametric design tool in the framework of holistic ship design optimization

    Recent International Maritime Organization (IMO) decisions with respect to measures to reduce the emissions from maritime greenhouse gases (GHGs) suggest that the collaboration of all major stakeholders of shipbuilding and ship operations is required to address this complex techno-economic and highly political problem efficiently. This eventually calls for the development of proper design, operational knowledge, and assessment tools for the energy-efficient design and operation of ships, as suggested by the Second IMO GHG Study (2009). This type of coordination of the efforts of many maritime stakeholders, with often conflicting professional interests but a common ultimate aim of optimal ship design and operation solutions, has been addressed within a methodology developed in the EU-funded Logistics-Based (LOGBASED) Design Project (2004–2007). Based on the knowledge base developed within this project, a new parametric design software tool (PDT) has been developed by the National Technical University of Athens, Ship Design Laboratory (NTUA-SDL), for implementing an energy efficiency design and management procedure. The PDT is an integral part of an earlier developed holistic ship design optimization approach by NTUA-SDL that addresses the multi-objective ship design optimization problem. It provides Pareto-optimum solutions and a complete mapping of the design space in a comprehensive way for the final assessment and decision by all the involved stakeholders. The application of the tool to the design of a large oil tanker and alternatively to container ships is elaborated in this paper.
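
    As a small illustration of the Pareto-optimum solutions the tool reports, the following Python sketch filters a set of evaluated candidate designs down to the non-dominated ones. The two objectives and their values are hypothetical placeholders; the actual objective set used by NTUA-SDL is richer.

        def dominates(a, b):
            """a dominates b if it is no worse in every objective and
            strictly better in at least one (all objectives minimized)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(designs):
            """Keep only the non-dominated objective vectors."""
            return [d for d in designs
                    if not any(dominates(other, d) for other in designs if other != d)]

        # e.g. (required freight rate, emissions index) -- illustrative numbers
        candidates = [(22.0, 4.1), (20.5, 4.8), (21.0, 4.0), (23.0, 3.9), (22.5, 4.5)]
        print(pareto_front(candidates))  # the cost-vs-emissions trade-off curve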

    Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing

    Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported, where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: Thirteen factors that affect maintenance are observed, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive economic ROI is reached after automation.
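
    The paper's cost model is based on previous work and is not reproduced here, but its break-even logic can be sketched in a few lines of Python; all figures below are hypothetical placeholders. The second call also illustrates the concluding observation: the less manual testing a company currently does, the smaller the per-cycle saving and the longer the road to positive ROI.

        def cycles_to_positive_roi(dev_cost, manual_cost, maint_cost):
            """Number of test cycles before automation pays off.

            dev_cost:    one-off cost of developing the automated suite (hours)
            manual_cost: cost of one manual test cycle (hours)
            maint_cost:  maintenance cost of the automated suite per cycle (hours)
            """
            saving = manual_cost - maint_cost  # hours saved per cycle
            if saving <= 0:
                return float("inf")            # automation never pays off
            return dev_cost / saving

        print(cycles_to_positive_roi(dev_cost=400, manual_cost=40, maint_cost=10))  # ~13 cycles
        print(cycles_to_positive_roi(dev_cost=400, manual_cost=15, maint_cost=10))  # 80 cycles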

    Reachability in Parametric Interval Markov Chains using Constraints

    Parametric Interval Markov Chains (pIMCs) are a specification formalism that extends Markov Chains (MCs) and Interval Markov Chains (IMCs) by taking into account imprecision in the transition probability values: transitions in pIMCs are labeled with parametric intervals of probabilities. In this work, we study the difference between pIMCs and other Markov chain abstraction models and investigate the two usual semantics for IMCs: once-and-for-all and at-every-step. In particular, we prove that both semantics agree on the maximal/minimal reachability probabilities of a given IMC. We then investigate solutions to several parameter synthesis problems in the context of pIMCs -- consistency, qualitative reachability and quantitative reachability -- that rely on constraint encodings. Finally, we propose a prototype implementation of our constraint encodings, with promising results.
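
    To give a flavour of such constraint encodings (the paper's encodings cover full pIMCs under both semantics; this is only the core idea), the following Python sketch uses the z3-solver package to ask a quantitative reachability question on a tiny three-state IMC: can the probability of reaching the goal state be at least 0.6? The model and threshold are illustrative.

        from z3 import Real, Solver, Q, sat

        p_goal, p_sink = Real("p_goal"), Real("p_sink")
        x0 = Real("x0")  # probability of eventually reaching goal from the initial state

        s = Solver()
        s.add(p_goal >= Q(3, 10), p_goal <= Q(7, 10))  # interval [0.3, 0.7]
        s.add(p_sink >= Q(3, 10), p_sink <= Q(7, 10))  # interval [0.3, 0.7]
        s.add(p_goal + p_sink == 1)                    # outgoing probabilities sum to 1
        s.add(x0 == p_goal)                            # goal is absorbing; sink contributes 0
        s.add(x0 >= Q(6, 10))                          # the quantitative reachability query

        if s.check() == sat:
            print("witness valuation:", s.model())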

    Evaluating probabilistic forecasts with scoringRules

    Probabilistic forecasts in the form of probability distributions over future events have become popular in several fields, including meteorology, hydrology, economics, and demography. In typical applications, many alternative statistical models and data sources can be used to produce probabilistic forecasts. Hence, evaluating and selecting among competing methods is an important task. The scoringRules package for R provides functionality for comparative evaluation of probabilistic models based on proper scoring rules, covering a wide range of situations in applied work. This paper discusses implementation and usage details, presents case studies from meteorology and economics, and points to the relevant background literature.
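
    scoringRules itself is an R package, but the kind of computation it provides can be illustrated in Python with the closed-form continuous ranked probability score (CRPS) of a Gaussian predictive distribution, a formula the package exposes as crps_norm (lower scores are better). SciPy is assumed below.

        import numpy as np
        from scipy.stats import norm

        def crps_gaussian(y, mu, sigma):
            """CRPS of the forecast N(mu, sigma^2) at observation(s) y."""
            z = (y - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1)
                            + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

        # Compare two competing forecasters on the same observations:
        obs = np.array([0.2, -0.4, 1.1])
        print(crps_gaussian(obs, mu=0.0, sigma=1.0).mean())  # sharp and calibrated
        print(crps_gaussian(obs, mu=0.0, sigma=3.0).mean())  # overdispersed: higher score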

    On Sound Relative Error Bounds for Floating-Point Arithmetic

    State-of-the-art static analysis tools for verifying finite-precision code compute worst-case absolute error bounds on numerical errors. These are, however, often not a good estimate of accuracy, as they do not take into account the magnitude of the computed values. Relative errors, which measure errors relative to the value's magnitude, are thus preferable. While today's tools do report relative error bounds, these are merely computed via absolute errors and thus not necessarily tight or more informative. Furthermore, whenever the computed value is close to zero on part of the domain, the tools do not report any relative error estimate at all. Surprisingly, the quality of relative error bounds computed by today's tools has not been systematically studied or reported to date. In this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be used, extended, and combined for the computation of relative errors. Our experiments on a standard benchmark set show that computing relative errors directly, as opposed to via absolute errors, is often beneficial and can provide error estimates up to six orders of magnitude tighter, i.e., more accurate. We also show that interval subdivision, another commonly used technique to reduce over-approximations, has less benefit when computing relative errors directly, but it can help to alleviate the effects of the inherent issue of relative error estimates close to zero.
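
    The issue with deriving relative errors from absolute ones can be sketched in a few lines of Python: given a sound range [lo, hi] for the true value and a worst-case absolute error, the via-absolute relative bound divides by the smallest magnitude in the range, so it degrades near zero and disappears entirely once the range straddles it. The numbers are illustrative.

        def relative_via_absolute(lo, hi, abs_err):
            """Relative-error bound derived from an absolute bound."""
            if lo <= 0 <= hi:
                return float("inf")        # range straddles zero: no finite bound
            denom = min(abs(lo), abs(hi))  # worst case: smallest magnitude in range
            return abs_err / denom

        print(relative_via_absolute(1.0, 2.0, 1e-12))   # tight: 1e-12
        print(relative_via_absolute(1e-9, 2.0, 1e-12))  # loose: 1e-3, even though
                                                        # most of the range is far from 0
        print(relative_via_absolute(-1.0, 1.0, 1e-12))  # inf: no estimate reported

    Subdividing [lo, hi] and evaluating the bound piecewise can recover finite estimates on the sub-ranges away from zero, which is how the paper uses interval subdivision to mitigate the close-to-zero problem.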

    Functional programming languages for verification tools: experiences with ML and Haskell

    We compare Haskell with ML as programming languages for verification tools, based on our experience developing TRUTH in Haskell and the Edinburgh Concurrency Workbench (CWB) in ML. We discuss not only technical language features but also the "worlds" of the languages, for example, the availability of tools and libraries.