
    Designing Secure Ethereum Smart Contracts: A Finite State Machine Based Approach

    The adoption of blockchain-based distributed computation platforms is growing fast. Some of these platforms, such as Ethereum, support smart contracts, which are envisioned to have novel applications in a broad range of areas, including finance and the Internet of Things. However, a significant number of smart contracts deployed in practice suffer from security vulnerabilities, which enable malicious users to steal assets from a contract or to cause damage. Vulnerabilities are a serious issue since contracts may handle financial assets of considerable value, and contract bugs are non-fixable by design. To help developers create more secure smart contracts, we introduce FSolidM, a framework rooted in rigorous semantics for designing contracts as finite state machines (FSMs). We present a tool for creating FSMs on an easy-to-use graphical interface and for automatically generating Ethereum contracts. Further, we introduce a set of design patterns, which we implement as plugins that developers can easily add to their contracts to enhance security and functionality.
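    As an illustration of the FSM view of a contract (a minimal Python sketch with hypothetical state and action names, not FSolidM's generated Solidity code): every action is a guarded transition, so a call arriving in the wrong state is rejected instead of silently corrupting contract storage.

```python
from enum import Enum, auto

class State(Enum):
    ACCEPTING_BIDS = auto()  # hypothetical states for an auction-like contract
    REVEALING = auto()
    FINISHED = auto()

class AuctionFSM:
    """Contract modeled as a finite state machine: actions are only
    valid as entries of an explicit transition table."""
    # (current state, action) -> next state
    TRANSITIONS = {
        (State.ACCEPTING_BIDS, "close"): State.REVEALING,
        (State.REVEALING, "finish"): State.FINISHED,
    }

    def __init__(self):
        self.state = State.ACCEPTING_BIDS

    def fire(self, action):
        key = (self.state, action)
        if key not in self.TRANSITIONS:
            # in a real contract this would revert the transaction
            raise ValueError(f"action {action!r} not enabled in {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

    The security benefit is that the set of reachable states and enabled actions is explicit and auditable, which is what makes the design patterns pluggable.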

    New-Physics Effects on Triple-Product Correlations in Lambda_b Decays

    We adopt an effective-Lagrangian approach to compute the new-physics contributions to T-violating triple-product correlations in charmless Lambda_b decays. We use factorization and work to leading order in the heavy-quark expansion. We find that the standard-model (SM) predictions for such correlations can be significantly modified. For example, triple products which are expected to vanish in the SM can be enormous (~50%) in the presence of new physics. By measuring triple products in a variety of Lambda_b decays, one can diagnose which new-physics operators are or are not present. Our general results can be applied to any specific model of new physics by simply calculating which operators appear in that model. Comment: 20 pages, LaTeX, no figures. Added a paragraph (+ references) discussing nonfactorizable effects. Conclusions unchanged.
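    Concretely, a T-odd triple product and its associated asymmetry have the generic form (a textbook definition, not specific to this paper):

```latex
T \equiv \vec{v}_1 \cdot \left( \vec{v}_2 \times \vec{v}_3 \right), \qquad
A_T \equiv \frac{\Gamma(T > 0) - \Gamma(T < 0)}{\Gamma(T > 0) + \Gamma(T < 0)},
```

    where the $\vec{v}_i$ are momenta or spins of the particles in the decay. Since $T$ is odd under time reversal, a nonzero $A_T$ (once strong-phase effects are disentangled by comparing with the CP-conjugate mode) signals T violation.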

    D-Finder 2: Towards Efficient Correctness of Incremental Design


    Nucleation and Growth of the Zn-Fe Alloy from a Chloride Electrolyte

    In this study, the kinetics of Zn-Fe codeposition in an acidic chloride solution was investigated using cyclic voltammetry. Anomalous codeposition is detected, and this mechanism depends on the Zn(II)/Fe(II) concentration ratio in the electrolytic bath. The study of the early stages of electrodeposition showed that Zn-Fe follows a theoretical response of instantaneous nucleation that evolves into progressive nucleation, according to the model of Scharifker and Hills. The morphology and structure of the coatings are discussed using characterization techniques. Dense, uniform, and single-phased Zn-Fe coatings could be obtained with a Zn-Fe ratio of 1/3.
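    The Scharifker-Hills diagnostic compares the measured dimensionless current transient against two limiting curves, both normalized to pass through (1, 1) at the current maximum. A sketch of the standard formulas:

```python
import math

def sh_instantaneous(x):
    """Dimensionless transient (I/I_m)^2 as a function of x = t/t_m
    for instantaneous nucleation (Scharifker-Hills model)."""
    return (1.9542 / x) * (1.0 - math.exp(-1.2564 * x)) ** 2

def sh_progressive(x):
    """Same quantity for progressive nucleation."""
    return (1.2254 / x) * (1.0 - math.exp(-2.3367 * x ** 2)) ** 2
```

    Plotting experimental (I/I_m)^2 versus t/t_m against these two curves is how one reads off which nucleation regime the deposit follows, and where it crosses over from one to the other.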

    A theory of normed simulations

    In existing simulation proof techniques, a single step in a lower-level specification may be simulated by an extended execution fragment in a higher-level one. As a result, it is cumbersome to mechanize these techniques using general-purpose theorem provers. Moreover, it is undecidable whether a given relation is a simulation, even if tautology checking is decidable for the underlying specification logic. This paper introduces various types of normed simulations. In a normed simulation, each step in a lower-level specification can be simulated by at most one step in the higher-level one, for any related pair of states. In earlier work we demonstrated that normed simulations are quite useful as a vehicle for the formalization of refinement proofs via theorem provers. Here we show that normed simulations also have pleasant theoretical properties: (1) under some reasonable assumptions, it is decidable whether a given relation is a normed forward simulation, provided tautology checking is decidable for the underlying logic; (2) at the semantic level, normed forward and backward simulations together form a complete proof method for establishing behavior inclusion, provided that the higher-level specification has finite invisible nondeterminism. Comment: 31 pages, 10 figures.
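    For finite systems the flavor of check involved is easy to see: whether a relation is a simulation reduces to finitely many local one-step conditions. A minimal sketch, simplified to the case where each low-level step is matched by exactly one high-level step (the norm function of the actual definition is omitted):

```python
def is_step_simulation(low, high, rel):
    """Check a simplified one-step forward simulation: for every
    related pair (s, t) and every transition s --a--> s' of the
    lower-level system, the higher-level system must offer a
    transition t --a--> t' with (s', t') again related.
    `low`/`high` map a state to a list of (label, successor) pairs;
    `rel` is a set of (low_state, high_state) pairs."""
    for (s, t) in rel:
        for (label, s2) in low.get(s, []):
            if not any(label == m and (s2, t2) in rel
                       for (m, t2) in high.get(t, [])):
                return False
    return True
```

    Because every check is local to one related pair and one step, such a condition is straightforward to discharge in a theorem prover, which is the mechanization advantage the abstract describes.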

    An Abstract Framework for Deadlock Prevention in BIP

    We present a sound but incomplete criterion for checking deadlock freedom of finite-state systems expressed in BIP: a component-based framework for the construction of complex distributed systems. Since deciding deadlock freedom for finite-state concurrent systems is PSPACE-complete, our criterion gives up completeness in return for tractability of evaluation. Our criterion can be evaluated by model-checking subsystems of the overall large system. The size of these subsystems depends only on the local topology of direct interaction between components, and not on the number of components in the overall system. We present two experiments in which our method compares favorably with existing approaches. For example, in verifying deadlock freedom of the dining philosophers, our method shows a linear increase in computation time with the number of philosophers, whereas other methods (even those that use abstraction) show a super-linear increase due to state explosion.
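    For contrast, the brute-force alternative that the subsystem criterion avoids is an explicit search of the global state space for a reachable state with no enabled transition; its cost is what explodes with the number of components. A minimal sketch of that global check:

```python
from collections import deque

def find_deadlock(initial, successors):
    """Explicit-state breadth-first search for a reachable global
    state with no outgoing transition (a deadlock). `successors`
    maps a state to a list of successor states. Returns a
    deadlocked state, or None if every reachable state can move.
    This is the whole-system check whose state space grows
    exponentially in the number of components."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        succ = successors(s)
        if not succ:
            return s
        for n in succ:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return None
```

    The paper's criterion instead runs checks of this kind only on small subsystems whose size is bounded by the local interaction topology, trading completeness for tractability.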

    Rigorous System Design: The BIP Approach

    Rigorous system design requires the use of a single powerful component framework allowing the representation of the designed system at different levels of detail, from application software to its implementation. This is essential for ensuring overall coherency and correctness. The paper introduces a rigorous design flow based on the BIP (Behavior, Interaction, Priority) component framework. This design flow relies on several tool-supported, source-to-source transformations that allow high-level application software to be progressively and correctly transformed into efficient implementations for specific platforms.

    Optical performance of the JWST MIRI flight model: characterization of the point spread function at high-resolution

    The Mid Infra Red Instrument (MIRI) is one of the four instruments onboard the James Webb Space Telescope (JWST), providing imaging, coronagraphy and spectroscopy over the 5-28 micron band. To verify the optical performance of the instrument, extensive tests were performed at CEA on the flight model (FM) of the Mid-InfraRed IMager (MIRIM) at cryogenic temperatures and in the infrared. This paper reports on the point spread function (PSF) measurements at 5.6 microns, the shortest operating wavelength for imaging. At 5.6 microns the PSF is not Nyquist-sampled, so we use an original technique that combines a microscanning measurement strategy with a deconvolution algorithm to obtain an over-resolved MIRIM PSF. The microscanning consists of a sub-pixel scan of a point source on the focal plane. A data inversion method is used to reconstruct PSF images that are over-resolved by a factor of 7 compared to the native resolution of MIRI. We show that the FWHMs of the high-resolution PSFs were 5-10% wider than those obtained with Zemax simulations. The main cause was identified as an out-of-specification tilt of the M4 mirror. After correction, two additional test campaigns were carried out, and we show that the shape of the PSF conforms to expectations. The FWHM of the PSFs is 0.18-0.20 arcsec, in agreement with simulations. Over the whole field of view, 56.1-59.2% of the total encircled energy (normalized to a 5 arcsec radius) is contained within the first dark Airy ring. At longer wavelengths (7.7-25.5 microns), this percentage is 57-68%. MIRIM is thus compliant with the optical quality requirements. This characterization of the MIRIM PSF, as well as the deconvolution method presented here, is of particular importance, not only for the verification of the optical quality and the MIRI calibration, but also for scientific applications. Comment: 13 pages, submitted to SPIE Proceedings vol. 7731, Space Telescopes and Instrumentation 2010: Optical, Infrared, and Millimeter Wave.
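    The quoted 0.18-0.20 arcsec FWHM can be sanity-checked against the diffraction limit (a back-of-the-envelope sketch assuming the 6.5 m JWST aperture and the ideal Airy-pattern relation FWHM ≈ 1.028 λ/D, not the paper's measurement pipeline):

```python
import math

def airy_fwhm_arcsec(wavelength_m, diameter_m):
    """Full width at half maximum of an ideal Airy pattern,
    FWHM ~= 1.028 * lambda / D, converted from radians to arcsec."""
    fwhm_rad = 1.028 * wavelength_m / diameter_m
    return math.degrees(fwhm_rad) * 3600.0
```

    With lambda = 5.6 microns and D = 6.5 m this gives about 0.18 arcsec, so the measured PSF widths sit right at the diffraction limit, consistent with the claim of compliance with the optical quality requirements.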

    Coordination of Dynamic Software Components with JavaBIP

    JavaBIP allows the coordination of software components by clearly separating the functional and coordination aspects of the system behavior. JavaBIP implements the principles of the BIP component framework, which is rooted in rigorous operational semantics. Recent work on both BIP and JavaBIP supports the coordination of static components defined prior to system deployment, i.e., the architecture of the coordinated system is fixed in terms of its component instances. Nevertheless, modern systems often make use of components that can register and deregister dynamically during system execution. In this paper, we present an extension of JavaBIP that can handle this type of dynamicity. We use first-order interaction logic to define synchronization constraints based on component types. Additionally, we use directed graphs with edge coloring to model dependencies among components that determine the validity of an online system. We present the software architecture of our implementation, and provide and discuss performance evaluation results. Comment: Technical report that accompanies the paper accepted at the 14th International Conference on Formal Aspects of Component Software.
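    The idea of type-based synchronization constraints can be illustrated with a toy registry (a hypothetical API, not JavaBIP's actual interface): a synchronization defined over component types stays valid only while every required type has at least one registered instance.

```python
class Registry:
    """Toy model of coordination with dynamic (de)registration:
    constraints are stated over component *types*, so individual
    instances may come and go at run time."""

    def __init__(self):
        self.instances = {}  # type name -> set of live instance ids

    def register(self, ctype, cid):
        self.instances.setdefault(ctype, set()).add(cid)

    def deregister(self, ctype, cid):
        self.instances.get(ctype, set()).discard(cid)

    def can_synchronize(self, required_types):
        # a type-level synchronization is enabled only if every
        # required type currently has at least one instance
        return all(self.instances.get(t) for t in required_types)
```

    Deciding whether the online system remains valid after a deregistration is where the paper's edge-colored dependency graphs come in; this sketch only shows the instance-counting part of the constraint.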

    A Theory Agenda for Component-Based Design

    The aim of the paper is to present a theory agenda for component-based design based on results that motivated the development of the BIP component framework, to identify open problems, and to discuss further research directions. The focus is on proposing a semantically sound, general theoretical framework for modelling component-based systems and their properties, both behavioural and architectural, as well as for achieving correctness by using scalable, specific techniques. We discuss the problem of composing components by proposing the concept of glue as a set of stateless composition operators defined by a certain type of operational semantics rules. We provide an overview of results about glue expressiveness and minimality. We show how interactions and the associated transfer of data can be described by using connectors and, in particular, how dynamic connectors can be defined as an extension of static connectors. We present two approaches for achieving correctness for component-based systems. One is by compositional inference of global properties of a composite component from properties of its constituents and interaction constraints implied by composition operators. The other is by using and composing architectures that enforce specific coordination properties. Finally, we discuss recent results on architecture specification by studying two types of logics: 1) interaction logics for the specification of sets of allowed interactions; 2) configuration logics for the characterisation of architecture styles.