
    A Model-Derivation Framework for Software Analysis

    Model-based verification allows one to express behavioral correctness conditions, such as the validity of execution states, bounds on variables, or timing, at a high level of abstraction, and to confirm that they are satisfied by a software system. However, this requires expressive models, which are difficult and cumbersome to create and maintain by hand. This paper presents a framework that automatically derives behavioral models from real-sized Java programs. Our framework builds on the EMF/ECore technology and provides a tool that creates an initial model from Java bytecode, as well as a series of transformations that simplify the model and eventually output a timed-automata model that can be processed by a model checker such as UPPAAL. The framework has the following properties: (1) consistency of models with software, (2) extensibility of the model-derivation process, (3) scalability, and (4) expressiveness of models. We report several case studies to validate how our framework satisfies these properties.
    Comment: In Proceedings MARS 2017, arXiv:1703.0581

    The application of Bayesian change point detection in UAV fuel systems

    A significant amount of research has been undertaken in statistics to develop and implement change point detection techniques for different industrial applications. One successful change point detection technique is the Bayesian approach, because of its ability to cope with uncertainties in the recorded data. The Bayesian Change Point (BCP) detection technique can handle the uncertainty in estimating the number and location of change points thanks to its probabilistic foundation. In this paper we apply the BCP detection technique to a laboratory-based fuel rig system to detect the change in the pre-valve pressure signal caused by a failure in the valve. The laboratory test-bed represents an Unmanned Aerial Vehicle (UAV) fuel system and its associated electrical power supply, control system and sensing capabilities. It is specifically designed to replicate a number of component degradation faults with high accuracy and repeatability, so that it can produce benchmark datasets to demonstrate and assess the efficiency of the BCP algorithm. Simulation shows satisfactory results for the proposed BCP approach. However, the computational complexity and the high sensitivity to the prior distribution on the number and location of the change points are the main disadvantages of the BCP approach.
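    The core idea of Bayesian change point detection can be illustrated with a deliberately simplified sketch: a single change in the mean of a Gaussian signal with known noise level and a uniform prior over change locations. (The BCP technique discussed above also infers the number of change points; this toy version does not, and the "pressure" trace below is synthetic.)

```python
import math

def change_point_posterior(signal, sigma=1.0):
    """Posterior over the location of a single change in mean,
    assuming Gaussian noise with known sigma and a uniform prior
    over change locations (a deliberately simplified sketch)."""
    n = len(signal)
    log_post = []
    for tau in range(1, n):            # change between samples tau-1 and tau
        left, right = signal[:tau], signal[tau:]
        ll = 0.0
        for seg in (left, right):
            m = sum(seg) / len(seg)    # plug-in estimate of the segment mean
            ll += -sum((v - m) ** 2 for v in seg) / (2 * sigma ** 2)
        log_post.append(ll)
    # normalize in a numerically stable way
    mx = max(log_post)
    w = [math.exp(v - mx) for v in log_post]
    s = sum(w)
    return [v / s for v in w]          # entry tau-1 = p(change at tau | data)

# Synthetic "pre-valve pressure" trace: the mean drops after sample 50
signal = [5.0] * 50 + [3.0] * 50
post = change_point_posterior(signal, sigma=0.5)
best = post.index(max(post)) + 1
print(best)  # → 50
```

    Real sensor data would of course be noisy, and the posterior would then spread probability mass over several neighboring locations rather than concentrating on one index.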

    Fractal geometry of nature (bone) may inspire medical devices shape

    Medical devices, such as orthopaedic prostheses and dental implants, have for years been designed on the basis of mechanical, clinical and biological indications. This sequence is the commonly accepted cognitive and research process: adapting the device to the surrounding environment (the host tissue). Inverting this traditional logical approach, we started from bone microarchitecture analysis. Here we show that a single geometric rule seems to underlie different morphologic and functional aspects of human jaw bone tissue: the fractal properties of white trabeculae in low-quality bone are similar to the fractal properties of black spaces in high-quality bone, and vice versa. These data inspired the fractal bone quality classification and were the starting point for reverse engineering to design specific dental implant threads. We introduce a new philosophy: decoding bone and, with these data, encoding devices. In the future, the method will be extended to the analysis of other human or animal tissues in order to design medical devices and biomaterials with a microarchitecture driven by nature.
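    Fractal properties of a binary micrograph (trabeculae vs. marrow spaces) are commonly quantified with a box-counting dimension estimate. A minimal sketch, not the authors' actual analysis pipeline, on a trivially simple test image:

```python
import math

def box_count(img, box):
    """Count boxes of side `box` that contain at least one foreground pixel."""
    n = len(img)
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(img[x][y]
                   for x in range(i, min(i + box, n))
                   for y in range(j, min(j + box, n))):
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s) over box sizes s."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A fully filled 16x16 patch is 2-dimensional, so the estimate should be ~2
solid = [[1] * 16 for _ in range(16)]
print(round(fractal_dimension(solid), 2))  # → 2.0
```

    On a real radiograph one would threshold the image first and check that the log-log relationship is actually linear over the chosen box sizes before trusting the slope.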

    Computation of the Marcum Q-function

    Methods and an algorithm for computing the generalized Marcum $Q$-function ($Q_{\mu}(x,y)$) and the complementary function ($P_{\mu}(x,y)$) are described. These functions appear in problems of different technical and scientific areas such as, for example, radar detection and communications, statistics and probability theory, where they are called the non-central chi-square or the non-central gamma cumulative distribution functions. The algorithm for computing the Marcum functions combines different methods of evaluation in different regions: series expansions, integral representations, asymptotic expansions, and use of three-term homogeneous recurrence relations. A relative accuracy close to $10^{-12}$ can be obtained in the parameter region $(x,y,\mu) \in [0,A]\times[0,A]\times[1,A]$, $A=200$, while for larger parameters the accuracy decreases (close to $10^{-11}$ for $A=1000$ and close to $5\times 10^{-11}$ for $A=10000$).
    Comment: Accepted for publication in ACM Trans. Math. Soft
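    One of the series expansions involved writes the Marcum function as a Poisson-weighted mixture of regularized incomplete gamma functions, $Q_{\mu}(x,y)=\sum_{n\ge 0} e^{-x}x^n/n!\;Q(\mu+n,y)$. The sketch below implements only this series, with the incomplete gamma itself computed from its power series, so it is adequate only for small and moderate arguments; it is nothing like the full multi-method algorithm the abstract describes:

```python
import math

def reg_gamma_p(a, y, tol=1e-16):
    """Regularized lower incomplete gamma P(a, y) via its power series
    (fine for the moderate arguments used in this sketch)."""
    if y <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    k = a
    while term > total * tol:
        k += 1.0
        term *= y / k
        total += term
    return total * math.exp(a * math.log(y) - y - math.lgamma(a))

def marcum_q(mu, x, y, terms=500):
    """Q_mu(x, y) = sum_n e^{-x} x^n / n! * Q(mu + n, y),
    where Q(a, y) = 1 - P(a, y) is the regularized upper gamma."""
    w = math.exp(-x)               # Poisson weight e^{-x} x^n / n! at n = 0
    q = 0.0
    for n in range(terms):
        q += w * (1.0 - reg_gamma_p(mu + n, y))
        w *= x / (n + 1)
    return q

# For x = 0 the series collapses to Q(mu, y); with mu = 1 that is e^{-y}
q0 = marcum_q(1.0, 0.0, 1.0)
print(q0)
```

    Exactly because this naive series degrades for large parameters (underflowing Poisson weights, cancellation in 1 - P), the published algorithm switches between series, integral representations, asymptotics and recurrences depending on the region.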

    Empirical Evaluation of Test Coverage for Functional Programs

    The correlation between test coverage and test effectiveness is important to justify the use of coverage in practice. Existing results on imperative programs mostly show that test coverage predicts effectiveness. However, since functional programs are usually structurally different from imperative ones, it is unclear whether the same result holds and whether coverage can be used as a predictor of effectiveness for functional programs. In this paper we report the first empirical study of the correlation between test coverage and test effectiveness on functional programs. We consider four types of coverage: as input coverage, statement/branch coverage and expression coverage; as oracle coverage, assertion count and checked coverage. We also consider two types of effectiveness: raw effectiveness and normalized effectiveness. Our results are twofold. (1) In general, the findings on imperative programs still hold on functional programs, warranting the use of coverage in practice. (2) For specific coverage criteria, the results may be unexpected or differ from the imperative ones, calling for further studies on functional programs.
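    The kind of correlation such a study measures can be sketched as follows; every number below is invented for illustration, and "raw" vs. "normalized" effectiveness is taken in the usual mutation-testing sense (killed mutants over all mutants vs. killed mutants over covered mutants):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical test suites over a program with 100 mutants:
# (expression coverage, mutants killed, mutants covered)
suites = [(0.40, 12, 40), (0.55, 20, 52), (0.70, 31, 66), (0.85, 45, 80)]
coverage   = [c for c, _, _ in suites]
raw        = [k / 100 for _, k, _ in suites]    # raw: killed / all mutants
normalized = [k / cov for _, k, cov in suites]  # normalized: killed / covered
print(round(pearson(coverage, raw), 3), round(pearson(coverage, normalized), 3))
```

    A study like the one above would compute such correlations across many sampled suites and coverage criteria, and the interesting cases are precisely those where the two effectiveness measures disagree.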

    Information technology of developing test kits based on software requirements

    The article presents an advanced information technology for developing test kits based on software requirements using regulated cascading decision charts, which improves the completeness with which the designed test kits cover the software requirements as well as the accuracy of the tests themselves.

    An exponential lower bound for Individualization-Refinement algorithms for Graph Isomorphism

    The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs, and indeed the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view no general lower bounds concerning the worst-case complexity of these tools are known. In fact, it is an open question whether individualization-refinement algorithms can achieve upper bounds on the running time similar to the more theoretical techniques based on a group-theoretic approach. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Contrary to a previous construction of Miyazaki, which only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector or adding various heuristic invariants to the algorithm. Furthermore, our graphs also provide exponential lower bounds when the $k$-dimensional Weisfeiler-Leman algorithm is used to replace the standard color refinement operator, and the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm.
    Comment: 21 pages
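    The standard color refinement operator mentioned in the abstract (1-dimensional Weisfeiler-Leman) can be sketched in a few lines: vertices are repeatedly recolored by the multiset of their neighbors' colors until the partition stabilizes.

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman: refine vertex colors by the
    sorted multiset of neighbor colors until the partition stabilizes."""
    colors = {v: 0 for v in adj}
    while True:
        signature = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                     for v in adj}
        # relabel the distinct signatures with small integers
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {v: relabel[signature[v]] for v in adj}
        if new == colors:       # partition is stable
            return colors
        colors = new

# A path on 4 vertices: the two endpoints end up in one color class,
# the two inner vertices in another
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors_path = color_refinement(path)
print(colors_path)
```

    Individualization-refinement solvers interleave this operator with branching on individual vertices; the paper's lower-bound graphs are built so that no choice of branching vertex avoids an exponential search tree.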

    Fast and Precise Symbolic Analysis of Concurrency Bugs in Device Drivers

    © 2015 IEEE. Concurrency errors, such as data races, make device drivers notoriously hard to develop and debug without automated tool support. We present Whoop, a new automated approach that statically analyzes drivers for data races. Whoop is empowered by symbolic pairwise lockset analysis, a novel analysis that can soundly detect all potential races in a driver. Our analysis avoids reasoning about thread interleavings and thus scales well. Exploiting the race-freedom guarantees provided by Whoop, we achieve a sound partial-order reduction that significantly accelerates Corral, an industrial-strength bug-finder for concurrent programs. Using the combination of Whoop and Corral, we analyzed 16 drivers from the Linux 4.0 kernel, achieving 1.5-20× speedups over standalone Corral.
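    The classic lockset intuition behind a pairwise race check can be sketched as follows: two accesses from different threads to the same location potentially race if at least one is a write and the sets of locks held at the two accesses are disjoint. Whoop's symbolic analysis is far more involved, and the driver events below are invented for illustration.

```python
from typing import NamedTuple

class Access(NamedTuple):
    thread: str
    location: str
    is_write: bool
    locks: frozenset     # locks held when the access happens

def potential_races(accesses):
    """Pairwise lockset check: same location, different threads,
    at least one write, and disjoint locksets."""
    races = []
    for i, a in enumerate(accesses):
        for b in accesses[i + 1:]:
            if (a.thread != b.thread
                    and a.location == b.location
                    and (a.is_write or b.is_write)
                    and not (a.locks & b.locks)):
                races.append((a, b))
    return races

log = [
    Access("irq_handler", "dev->count", True,  frozenset({"dev_lock"})),
    Access("ioctl",       "dev->count", True,  frozenset({"dev_lock"})),  # protected
    Access("ioctl",       "dev->buf",   True,  frozenset()),
    Access("read",        "dev->buf",   False, frozenset()),              # unprotected
]
races = potential_races(log)
print(len(races))  # → 1
```

    A purely lockset-based check like this is sound but imprecise (it ignores other synchronization such as flags or happens-before ordering), which is why Whoop pairs it with a symbolic encoding and hands the surviving candidates to a precise bug-finder like Corral.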