    A Short Counterexample Property for Safety and Liveness Verification of Fault-tolerant Distributed Algorithms

    Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques. Comment: 16 pages, 11 pages appendix.
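    A minimal sketch of the kind of threshold guard referred to above may help fix ideas (illustrative Python only; this is not ByMC input, and the process structure, message names, and helper are hypothetical): a process records acknowledgments and takes its guarded step only once a strict majority of its n peers have acknowledged.

        # Minimal sketch of a threshold-guarded step (illustrative only; not ByMC syntax).
        # Assumes n processes, of which at most f may be faulty, with n > 2f (hypothetical setup).

        def majority_threshold(n: int) -> int:
            """Smallest number of acknowledgments that constitutes a strict majority."""
            return n // 2 + 1

        class Process:
            def __init__(self, n: int):
                self.n = n
                self.acks = set()      # identifiers of peers that have acknowledged
                self.decided = False

            def on_ack(self, sender: int) -> None:
                """Record an acknowledgment; fire the guarded transition once the threshold is met."""
                self.acks.add(sender)
                if not self.decided and len(self.acks) >= majority_threshold(self.n):
                    self.decided = True   # threshold guard satisfied: take the protected step

        # Example: with n = 5, the guard fires after the 3rd distinct acknowledgment.
        p = Process(n=5)
        for peer in (1, 2, 3):
            p.on_ack(peer)
        assert p.decided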

    Towards a Formal Verification Methodology for Collective Robotic Systems

    We introduce a UML-based notation for graphically modeling systems' security aspects in a simple and intuitive way, and a model-driven process that transforms graphical specifications of access control policies into XACML. These XACML policies are then translated into FACPL, a policy language with a formal semantics, and the resulting policies are evaluated by means of a Java-based software tool.
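    To give a flavor of what evaluating an attribute-based access control policy amounts to (a deliberately simplified stand-in, not FACPL or XACML syntax and not the tool described above; the attribute names, rules, and combining behaviour are hypothetical), consider the following Python sketch.

        # Toy attribute-based access control evaluation (illustrative only; not FACPL/XACML).

        from typing import Dict

        def evaluate(request: Dict[str, str]) -> str:
            """Return 'permit', 'deny', or 'not-applicable' for an attribute-based request."""
            rules = [
                # (target predicate, effect) -- first applicable rule wins (hypothetical combining rule)
                (lambda r: r.get("role") == "doctor" and r.get("action") == "read", "permit"),
                (lambda r: r.get("action") == "delete", "deny"),
            ]
            for target, effect in rules:
                if target(request):
                    return effect
            return "not-applicable"

        print(evaluate({"role": "doctor", "action": "read"}))   # permit
        print(evaluate({"role": "nurse", "action": "delete"}))  # deny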

    Radio Recombination Lines at Decametre Wavelengths: Prospects for the Future

    This paper considers the suitability of a number of emerging and future instruments for the study of radio recombination lines (RRLs) at frequencies below 200 MHz. These lines arise only in low-density regions of the ionized interstellar medium, and they may represent a frequency-dependent foreground for next-generation experiments trying to detect H I signals from the Epoch of Reionization and Dark Ages ("21-cm cosmology"). We summarize existing decametre-wavelength observations of RRLs, which have detected only carbon RRLs. We then show that, for an interferometric array, the primary instrumental factor limiting detection and study of the RRLs is the areal filling factor of the array. We consider the Long Wavelength Array (LWA-1), the LOw Frequency ARray (LOFAR), the low-frequency component of the Square Kilometre Array (SKA-lo), and a future Lunar Radio Array (LRA), all of which will operate at decametre wavelengths. These arrays offer digital signal processing, which should produce more stable and better defined spectral bandpasses; larger frequency tuning ranges; and better angular resolution than the previous generation of instruments used for RRL observations. Detecting Galactic carbon RRLs, with optical depths at the level of 10^-3, appears feasible for all of these arrays, with integration times of no more than 100 hr. The SKA-lo and LRA, and the LWA-1 and LOFAR at the lowest frequencies, should have a high enough filling factor to detect lines with much lower optical depths, of order 10^-4, in a few hundred hours. The amount of RRL-hosting gas present in the Galaxy at the high Galactic latitudes likely to be targeted in 21-cm cosmology studies is currently unknown. If present, however, the spectral fluctuations from RRLs could be comparable to or exceed the anticipated H I signals. Comment: 9 pages; Astron. & Astrophys., in press.
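    For a rough sense of why the filling factor and integration time dominate the detectability estimates above, the following Python sketch applies the standard radiometer equation under crude assumptions (sky-noise-limited system, line seen as a fractional depression of the beam-filling Galactic continuum, sensitivity diluted by the areal filling factor f); it is not the paper's calculation, and all numbers below are hypothetical.

        # Crude sensitivity estimate for a decametre RRL observation (illustrative only;
        # not the paper's derivation). Assumes a sky-noise-limited system, a line that
        # appears as a fractional depression of optical depth tau in the beam-filling
        # Galactic continuum, and an effective signal reduced by the areal filling
        # factor f of the array. The channel width, f, and integration time are made up.

        from math import sqrt

        def min_detectable_tau(filling_factor: float, bandwidth_hz: float,
                               integration_s: float, snr: float = 5.0) -> float:
            """Smallest line optical depth detectable at the requested S/N."""
            return snr / (filling_factor * sqrt(bandwidth_hz * integration_s))

        # Example: f = 0.2, 10 kHz channel, 100 hours of integration.
        tau = min_detectable_tau(filling_factor=0.2, bandwidth_hz=1e4,
                                 integration_s=100 * 3600)
        print(f"tau_min ~ {tau:.1e}")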

    Flexible and Robust Privacy-Preserving Implicit Authentication

    Implicit authentication consists of a server authenticating a user based on the user's usage profile, instead of/in addition to relying on something the user explicitly knows (passwords, private keys, etc.). While implicit authentication makes identity theft by third parties more difficult, it requires the server to learn and store the user's usage profile. Recently, the first privacy-preserving implicit authentication system was presented, in which the server does not learn the user's profile. It uses an ad hoc two-party computation protocol to compare the user's freshly sampled features against the user's encrypted stored profile. The protocol requires storing the usage profile and comparing against it using two different cryptosystems, one of them order-preserving; furthermore, features must be numerical. We present here a simpler protocol based on set intersection that has the advantages of: i) requiring only one cryptosystem; ii) not leaking the relative order of fresh feature samples; iii) being able to deal with any type of features (numerical or non-numerical). Keywords: Privacy-preserving implicit authentication, privacy-preserving set intersection, implicit authentication, active authentication, transparent authentication, risk mitigation, data brokers. Comment: IFIP SEC 2015 - Intl. Information Security and Privacy Conference, May 26-28, 2015, IFIP AICT, Springer, to appear.
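    As a deliberately simplified, non-cryptographic illustration of the set-intersection idea (salted hashing stands in here for the actual privacy-preserving protocol, which is not reproduced), the Python sketch below compares freshly sampled features against a stored profile; the feature strings, salt handling, and acceptance threshold are hypothetical.

        # Toy set-intersection matching for implicit authentication (illustrative only;
        # plain salted hashing is NOT privacy-preserving and does not reproduce the paper's protocol).

        import hashlib

        def blind(features: set, salt: bytes) -> set:
            """Map each feature to a salted hash so raw values are not compared directly."""
            return {hashlib.sha256(salt + str(f).encode()).hexdigest() for f in features}

        def authenticate(fresh: set, profile: set, salt: bytes, threshold: float = 0.6) -> bool:
            """Accept if enough of the fresh features intersect the stored profile."""
            blinded_fresh, blinded_profile = blind(fresh, salt), blind(profile, salt)
            overlap = len(blinded_fresh & blinded_profile) / max(len(blinded_fresh), 1)
            return overlap >= threshold

        salt = b"per-user-salt"                                   # hypothetical
        profile = {"wifi:home", "app:mail", "cell:1234", "hour:8"}
        fresh = {"wifi:home", "app:mail", "hour:9"}
        print(authenticate(fresh, profile, salt))                 # True: 2 of 3 features match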

    Equivalence of switching linear systems by bisimulation

    A general notion of hybrid bisimulation is proposed for the class of switching linear systems. Connections between the notions of bisimulation-based equivalence, state-space equivalence, and algebraic and input–output equivalence are investigated. An algebraic characterization of hybrid bisimulation and an algorithmic procedure converging in a finite number of steps to the maximal hybrid bisimulation are derived. Hybrid state space reduction is performed by hybrid bisimulation between the hybrid system and itself. By specializing the results obtained for bisimulation, characterizations of simulation and abstraction are also derived. Connections between observability, bisimulation-based reduction and simulation-based abstraction are studied.
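    For intuition about the fixed-point computation behind a maximal bisimulation, the following Python sketch performs naive partition refinement on a plain finite labelled transition system (illustrative only; the paper's algorithm operates on switching linear systems, and the example transition system is made up).

        # Naive partition refinement computing bisimulation classes of a finite LTS
        # (illustrative only; not the algebraic procedure for switching linear systems).

        def refine(states, transitions):
            """transitions: dict state -> set of (label, successor). Returns dict state -> block id."""
            # Start from the trivial partition (all states equivalent) and refine to a fixed point.
            partition = {s: 0 for s in states}
            changed = True
            while changed:
                # A state's signature: the set of (label, block of successor) pairs it can realize.
                signature = {s: frozenset((lbl, partition[t]) for lbl, t in transitions.get(s, set()))
                             for s in states}
                blocks, new_partition = {}, {}
                for s in states:
                    new_partition[s] = blocks.setdefault(signature[s], len(blocks))
                changed = new_partition != partition
                partition = new_partition
            return partition

        states = {"a", "b", "c"}
        transitions = {"a": {("tick", "c")}, "b": {("tick", "c")}, "c": set()}
        print(refine(states, transitions))   # 'a' and 'b' end up in the same block, 'c' in its own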

    Overcharging a Black Hole and Cosmic Censorship

    We show that, contrary to a widespread belief, one can overcharge a near-extremal Reissner-Nordstrom black hole by throwing in a charged particle, as long as the backreaction effects may be considered negligible. Furthermore, we find that we can make the particle's classical radius, mass, and charge, as well as the relative size of the backreaction terms, arbitrarily small, by adjusting the parameters corresponding to the particle appropriately. This suggests that the question of cosmic censorship is still not wholly resolved even in this simple scenario. We contrast this with attempting to overcharge a black hole with a charged imploding shell, where we find that cosmic censorship is upheld. We also briefly comment on a number of possible extensions. Comment: 26 pages, 3 figures, LaTeX.
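    In the test-particle picture usually invoked for this kind of argument, overcharging amounts to satisfying two inequalities at once; the LaTeX sketch below states them in standard geometrized-units notation as a hedged reconstruction, not a quotation of the paper's analysis.

        % Sketch of the usual test-particle conditions (geometrized units; our notation,
        % not necessarily the paper's). Background: near-extremal Reissner-Nordstrom
        % black hole of mass M and charge Q < M, outer horizon r_+ = M + \sqrt{M^2 - Q^2};
        % radially infalling particle of conserved energy E and charge q.
        E \;\ge\; \frac{qQ}{r_+} \quad \text{(necessary for the particle to cross the outer horizon)},
        \qquad
        M + E \;<\; Q + q \quad \text{(final parameters would violate } Q \le M \text{)}.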

    Relation between the luminosity of young stellar objects and their circumstellar environment

    We present a new model-independent method of comparison of NIR visibility data of YSOs. The method is based on scaling the measured baseline with the YSO's distance and luminosity, which removes the dependence of visibility on these two variables. We use this method to compare all available NIR visibility data and demonstrate that it distinguishes YSOs of luminosity <1000 L_sun (low-L) from YSOs of >1000 L_sun (high-L). This confirms earlier suggestions, based on fits of image models to the visibility data, for the difference between the NIR sizes of these two luminosity groups. When plotted against the "scaled" baseline, the visibility creates the following data clusters: low-L Herbig Ae/Be stars, T Tauri stars, and high-L Herbig Be stars. The T Tau cluster is similar to the low-L Herbig Ae/Be cluster, which has ~7 times smaller "scaled" baselines than the high-L Herbig Be cluster. We model the shape and size of the clusters with different image models and find that low-L Herbig stars are best explained by the uniform brightness ring and the halo model, T Tauri stars by the halo model, and high-L Herbig stars by the accretion disk model. However, the plausibility of each model is not well established. Therefore, we try to build a descriptive model of the circumstellar environment consistent with various observed properties of YSOs. We argue that low-L YSOs have optically thick disks with an optically thin inner dust sublimation cavity and an optically thin dusty outflow above the inner disk regions. High-L YSOs have optically thick accretion disks with high accretion rates enabling gas to dominate the NIR emission over dust. Although observations would favor such a description of YSOs, the required dust distribution is not supported by our current understanding of dust dynamics. Comment: 20 pages, 12 figures, accepted for publication in the Astrophysical Journal.
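    The baseline scaling mentioned above is typically motivated as follows (a plausible reconstruction rather than the paper's exact normalization): if the characteristic NIR size is set by dust sublimation and so grows roughly as the square root of the stellar luminosity, then multiplying the baseline by the square root of the luminosity and dividing by the distance removes the explicit dependence on both quantities.

        % Plausible form of the luminosity/distance scaling (the paper's exact
        % normalization may differ). R_sub: dust sublimation radius; d: distance;
        % B: projected baseline; V: visibility.
        R_{\mathrm{sub}} \propto \sqrt{L_*}, \qquad
        \theta \simeq \frac{R_{\mathrm{sub}}}{d} \propto \frac{\sqrt{L_*}}{d}, \qquad
        V = V\!\left(\frac{B\,\theta}{\lambda}\right)
        \;\Longrightarrow\;
        B_{\mathrm{scaled}} \propto \frac{B\,\sqrt{L_*}}{d}.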

    A Logical Verification Methodology for Service-Oriented Computing

    We introduce a logical verification methodology for checking behavioural properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed to express in an effective way distinctive aspects of services, such as acceptance of a request, provision of a response, and correlation among service requests and responses. Our approach allows service properties to be expressed in such a way that they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulae over service specifications. We demonstrate the feasibility and effectiveness of our methodology by means of the specification and analysis of a case study in the automotive domain.
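    As a concrete flavour of such a property (written in generic branching-time notation, not in actual SocL syntax), a correlation requirement like "every accepted request is eventually followed by a matching response" could be expressed along the following lines.

        % Illustrative only; SocL's concrete action-based syntax and modalities differ.
        \mathrm{AG}\,\bigl(\mathit{accept\_request}(id) \;\rightarrow\; \mathrm{AF}\ \mathit{response}(id)\bigr)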

    Surface Screening Charge and Effective Charge

    The charge on an atom at a metallic surface in an electric field is defined as the field-derivative of the force on the atom, and this is consistent with definitions of effective charge and screening charge. This charge can be found from the shift in the potential outside the surface when the atoms are moved. This is used to study forces and screening on surface atoms of Ag(001) c(2×2)–Xe as a function of external field. It is found that at low positive (outward) fields, the Xe, with a negative effective charge of −0.093|e|, is pushed into the surface. At a field of 2.3 V Å⁻¹ the charge changes sign, and for fields greater than 4.1 V Å⁻¹ the Xe experiences an outward force. Field desorption and the Eigler switch are discussed in terms of these results. Comment: 4 pages, 1 figure, RevTeX (accepted by PRL).
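    Written out, the definition used above identifies the effective charge with the response of the force on the atom to the applied field; the sign change reported for Xe then corresponds to this derivative passing through zero (the notation below is ours, with the numerical values taken from the abstract).

        % Effective (screening) charge as the field-derivative of the force on the atom.
        q^{*}(E) \;=\; \frac{\partial F_{z}}{\partial E},
        \qquad
        q^{*}(E \to 0^{+}) \approx -0.093\,|e|,
        \qquad
        q^{*}(E) = 0 \ \text{at} \ E \approx 2.3\ \mathrm{V}\,\text{\AA}^{-1}.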

    The Evolution of Circumstellar Disks in Ophiuchus Binaries

    Four Ophiuchus binaries, two Class I systems and two Class II systems, with separations of ~450-1100 AU, were observed with the Owens Valley Radio Observatory (OVRO) millimeter interferometer. In each system, the 3 mm continuum maps show dust emission at the location of the primary star, but no emission at the position of the secondary. This result is different from observations of less evolved Class 0 binaries, in which dust emission is detected from both sources. The nondetection of secondary disks is, however, similar to the dust distribution seen in wide Class II Taurus binaries. The combined OVRO results from the Ophiuchus and Taurus binaries suggest that secondary disk masses are significantly lower than primary disk masses by the Class II stage, with initial evidence that massive secondary disks are reduced by the Class I stage. Although some of the secondaries retain hot inner disk material, the early dissipation of massive outer disks may negatively impact planet formation around secondary stars. Masses for the circumprimary disks are within the range of masses measured for disks around single T Tauri stars and, in some cases, larger than the minimum-mass solar nebula. More massive primary disks are predicted by several formation models and are broadly consistent with the observations. Combining the 3 mm data with previous 1.3 mm observations, the dust opacity power-law index for each primary disk is estimated. The opacity index values are all less than the value for interstellar dust, possibly indicating grain growth within the circumprimary disks.
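    The two-frequency estimate mentioned in the closing sentences can be sketched as follows, assuming optically thin emission in the Rayleigh-Jeans regime so that F_nu ~ nu^(2+beta); this is a generic approximation, not necessarily the paper's fitting procedure, and the example flux values are made up.

        # Dust opacity index beta from fluxes at two millimetre wavelengths
        # (assumes optically thin emission in the Rayleigh-Jeans regime, F_nu ~ nu^(2+beta);
        # illustrative only, with hypothetical flux values).

        from math import log

        def opacity_index(flux1_mjy: float, nu1_ghz: float,
                          flux2_mjy: float, nu2_ghz: float) -> float:
            """Spectral index alpha = dlnF/dlnnu between the two bands, then beta = alpha - 2."""
            alpha = log(flux1_mjy / flux2_mjy) / log(nu1_ghz / nu2_ghz)
            return alpha - 2.0

        # Example: 110 mJy at 230 GHz (1.3 mm) and 12 mJy at 100 GHz (3 mm) -- hypothetical.
        print(f"beta ~ {opacity_index(110, 230, 12, 100):.2f}")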