
    Rate-Based Transition Systems for Stochastic Process Calculi

    A variant of Rate Transition Systems (RTS), proposed by Klin and Sassone, is introduced and used as the basic model for defining the stochastic behaviour of processes. The transition relation used in our variant associates with each process, for each action, the set of possible futures paired with a measure indicating their rates. We show how RTS can be used to provide the operational semantics of stochastic extensions of classical formalisms, namely CSP and CCS. We also show that our semantics for stochastic CCS guarantees associativity of parallel composition. Similarly, in contrast with the original definition by Priami, we argue that a semantics for stochastic π-calculus can be provided that guarantees associativity of parallel composition.
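
    As a rough illustration of the shape of the idea (not the paper's formalization), a rate transition relation can be modelled as a map from a process and an action to a rate-weighted set of continuations; the "apparent rate" of an action is then the total mass of that set. The class and method names below are illustrative assumptions.

    ```python
    # Minimal sketch of a rate transition relation, assuming a toy
    # representation of processes as hashable terms. Each (process,
    # action) pair maps to continuations weighted by rates.

    from collections import defaultdict

    class RTS:
        def __init__(self):
            # (process, action) -> {continuation: rate}
            self.trans = defaultdict(dict)

        def add(self, proc, action, cont, rate):
            # Rates accumulate when the same continuation is
            # reachable in several ways.
            cur = self.trans[(proc, action)].get(cont, 0.0)
            self.trans[(proc, action)][cont] = cur + rate

        def apparent_rate(self, proc, action):
            # Total rate with which `proc` can perform `action`.
            return sum(self.trans[(proc, action)].values())

    rts = RTS()
    rts.add("P", "a", "P1", 2.0)
    rts.add("P", "a", "P2", 3.0)
    assert rts.apparent_rate("P", "a") == 5.0
    ```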

    Recovering Grammar Relationships for the Java Language Specification

    Grammar convergence is a method that helps discover relationships between different grammars of the same language or of different language versions. The key element of the method is the operational, transformation-based representation of those relationships. The input grammars are transformed until they are structurally equal. The transformations are composed from primitive operators; properties of these operators and of the composed chains provide quantitative and qualitative insight into the relationships between the grammars at hand. We describe a refined method for grammar convergence, and we use it in a major study, where we recover the relationships between all the grammars that occur in the different versions of the Java Language Specification (JLS). The relationships are represented as grammar transformation chains that capture all accidental or intended differences between the JLS grammars. This method is mechanized and driven by nominal and structural differences between pairs of grammars that are subject to asymmetric, binary convergence steps. We present the underlying operator suite for grammar transformation in detail, and we illustrate the suite with many examples of transformations on the JLS grammars. We also describe the extraction effort that was needed to make the JLS grammars amenable to automated processing. We include substantial metadata about the convergence process for the JLS so that the effort becomes reproducible and transparent.
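
    A toy sketch of the convergence idea: represent a grammar as a map from nonterminals to productions and apply primitive operators until structural equality holds. The `rename` operator below follows common grammar-transformation vocabulary; the paper's actual operator suite is far richer.

    ```python
    # Toy sketch of grammar convergence: grammars as dicts mapping a
    # nonterminal to a set of productions (tuples of symbols). One
    # illustrative primitive operator, `rename`, is applied until the
    # two grammars become structurally equal.

    def rename(grammar, old, new):
        """Rename nonterminal `old` to `new` everywhere."""
        out = {}
        for lhs, prods in grammar.items():
            lhs2 = new if lhs == old else lhs
            out[lhs2] = {tuple(new if s == old else s for s in p)
                         for p in prods}
        return out

    g1 = {"Stmt": {("Expr", ";")}, "Expr": {("id",)}}
    g2 = {"Stmt": {("Exp", ";")}, "Exp": {("id",)}}

    # One primitive step suffices here; real chains are long, and the
    # recorded chain itself documents how the grammars differ.
    g2 = rename(g2, "Exp", "Expr")
    assert g1 == g2  # structural equality reached
    ```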

    Certainly Unsupervisable States

    This paper proposes an abstraction method for compositional synthesis. Synthesis is a method to automatically compute a control program or supervisor that restricts the behaviour of a given system to ensure safety and liveness. Compositional synthesis uses repeated abstraction and simplification to combat the state-space explosion problem for large systems. The abstraction method proposed in this paper finds and removes the so-called certainly unsupervisable states. By removing these states at an early stage, the final state space can be reduced substantially. The paper describes an algorithm with cubic time complexity that computes the largest possible set of removable states. A practical example demonstrates the feasibility of the method for solving real-world problems.
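
    A simplified fixpoint sketch, not the paper's algorithm: a state is treated as unsupervisable here if a forbidden state is reachable from it via uncontrollable events alone, which no supervisor can prevent. The paper's cubic-time algorithm computes a larger, more refined set; this only conveys the backward-closure flavour.

    ```python
    # Simplified sketch: backward closure of forbidden states under
    # uncontrollable transitions. Any state in the result reaches a
    # forbidden state no matter what a supervisor disables, so it is
    # safe to treat it as unsupervisable (a coarse approximation of
    # the paper's notion).

    def uncontrollable_closure(trans, uncontrollable, forbidden):
        # trans: set of (source, event, target) triples
        bad = set(forbidden)
        changed = True
        while changed:
            changed = False
            for (src, ev, dst) in trans:
                if ev in uncontrollable and dst in bad and src not in bad:
                    bad.add(src)
                    changed = True
        return bad

    trans = {("q0", "c", "q1"), ("q1", "u", "q2")}
    # q1 is doomed (event u cannot be disabled); q0 is not added,
    # since a supervisor may disable the controllable event c.
    print(uncontrollable_closure(trans, {"u"}, {"q2"}))
    ```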

    SOAP: Efficient Feature Selection of Numeric Attributes

    Attribute selection techniques for supervised learning, used in the preprocessing phase to emphasize the most relevant attributes, make classification models simpler and easier to understand. Depending on the method applied (its starting point, search organization, evaluation strategy, and stopping criterion), there is an added cost to the classification algorithm, which is normally compensated, to a greater or lesser extent, by the attribute reduction in the classification model. The algorithm presented here (SOAP: Selection of Attributes by Projection) has some interesting characteristics: a lower computational cost (O(mn log n), for m attributes and n examples in the data set) than other typical algorithms, owing to the absence of distance and statistical calculations, and no need for transformation. The performance of SOAP is analysed in two ways: percentage of reduction and classification. SOAP has been compared to CFS [6] and ReliefF [11]. The results are generated by C4.5 and 1NN before and after the application of the algorithms.
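
    A plausible sketch consistent with the stated O(mn log n) bound: project the data onto each attribute by sorting, and score the attribute by how well class labels separate along the projection. The label-change count used below is an illustrative score, not necessarily the paper's exact measure.

    ```python
    # Hedged sketch of projection-based attribute scoring, matching
    # the O(mn log n) bound (one sort per attribute, no distance or
    # statistical computations). Score = number of label changes
    # along the sorted projection; fewer changes = more relevant.

    def projection_scores(X, y):
        # X: list of n examples, each a list of m numeric attributes
        m = len(X[0])
        scores = []
        for j in range(m):
            order = sorted(range(len(X)), key=lambda i: X[i][j])  # O(n log n)
            changes = sum(1 for a, b in zip(order, order[1:])
                          if y[a] != y[b])
            scores.append(changes)
        return scores  # rank attributes ascending by score

    X = [[1.0, 9.0], [2.0, 1.0], [3.0, 8.0], [4.0, 2.0]]
    y = [0, 0, 1, 1]
    print(projection_scores(X, y))  # [1, 2]: attribute 0 separates best
    ```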

    Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms

    We describe a global optimization technique using 'basin-hopping', in which the potential energy surface is transformed into a collection of interpenetrating staircases. This method has been designed to exploit the features which recent work suggests must be present in an energy landscape for efficient relaxation to the global minimum. The transformation associates any point in configuration space with the local minimum obtained by a geometry optimization started from that point, effectively removing transition state regions from the problem. However, unlike other methods based upon hypersurface deformation, this transformation does not change the global minimum. The lowest known structures are located for all Lennard-Jones clusters of up to 110 atoms, including a number that have never been found before in unbiased searches.
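
    The transformed surface has a direct algorithmic reading: perturb the coordinates, locally minimize, and accept or reject the minimized energy with a Metropolis criterion. Below is a generic sketch for a small Lennard-Jones cluster using numpy/scipy; the step size, temperature, atom count, and iteration budget are arbitrary illustrative choices, not the paper's settings.

    ```python
    # Generic basin-hopping sketch for a small Lennard-Jones cluster.
    # Each step: random perturbation -> local minimization -> Metropolis
    # accept/reject on the *minimized* energy.

    import numpy as np
    from scipy.optimize import minimize

    def lj_energy(x):
        # x: flat array of 3N coordinates; pairwise 4(r^-12 - r^-6)
        # in reduced units.
        pos = x.reshape(-1, 3)
        e = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                e += 4.0 * (r2 ** -6 - r2 ** -3)
        return e

    rng = np.random.default_rng(0)
    n_atoms, kT, step = 7, 0.8, 0.4
    x = minimize(lj_energy, rng.uniform(-1.0, 1.0, 3 * n_atoms)).x
    e = lj_energy(x)
    best_e = e

    for _ in range(200):
        trial = minimize(lj_energy, x + rng.normal(0.0, step, x.shape)).x
        e_trial = lj_energy(trial)
        if e_trial < e or rng.random() < np.exp((e - e_trial) / kT):
            x, e = trial, e_trial          # Metropolis acceptance
            best_e = min(best_e, e)        # track global best
    print("lowest energy found:", best_e)  # LJ7 global minimum ~ -16.505
    ```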

    Theoretical study of finite temperature spectroscopy in van der Waals clusters. I. Probing phase changes in CaAr_n

    The photoabsorption spectra of calcium-doped argon clusters CaAr_n are investigated at thermal equilibrium using a variety of theoretical and numerical tools. The influence of temperature on the absorption spectra is estimated using the quantum superposition method for a variety of cluster sizes in the range 6 <= n <= 146. At the harmonic level of approximation, the absorption intensity is calculated through an extension of the Gaussian theory by Wadi and Pollak [J. Chem. Phys. 110, 11890 (1999)]. This theory is tested on simple, few-atom systems in both the classical and quantum regimes, for which highly accurate Monte Carlo data can be obtained. By incorporating quantum anharmonic corrections to the partition functions and to the respective weights of the isomers, we show that the superposition method can correctly describe the finite-temperature spectroscopic properties of CaAr_n systems. The use of the absorption spectrum as a possible probe of isomerization or phase changes in the argon cluster is discussed in the light of finite-size effects.
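
    The superposition idea can be summarized as a weighted sum over local minima (isomers); in the harmonic approximation the isomer weights take the familiar form below. The notation (n_alpha for the degeneracy of isomer alpha, omega_{alpha,i} for its normal-mode frequencies) follows common usage and is ours, not necessarily the paper's.

    ```latex
    % Harmonic superposition sketch (our notation): the partition
    % function is a sum over isomers (local minima) alpha, and the
    % finite-temperature spectrum is the correspondingly weighted
    % average of per-isomer spectra.
    \begin{align*}
      Z(T) &\approx \sum_{\alpha} z_{\alpha}(T), \qquad
      z_{\alpha}(T) = n_{\alpha}\, e^{-\beta E_{\alpha}}
        \prod_{i=1}^{3N-6} \frac{1}{\beta \hbar \omega_{\alpha,i}}
        \quad (\text{classical harmonic}) \\
      I(\omega, T) &\approx \sum_{\alpha} p_{\alpha}(T)\, I_{\alpha}(\omega),
      \qquad p_{\alpha}(T) = \frac{z_{\alpha}(T)}{Z(T)},
    \end{align*}
    % with quantum and anharmonic corrections modifying z_alpha, as
    % the abstract describes.
    ```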

    The Fine Structure Lines of Hydrogen in HII Regions

    The 2s_{1/2} state of hydrogen is metastable and overpopulated in HII regions. In addition, the 2p states may be pumped by ambient Lyman-alpha radiation. Fine structure transitions between these states may be observable in HII regions at 1.1 GHz (2s_{1/2}-2p_{1/2}) and/or 9.9 GHz (2s_{1/2}-2p_{3/2}), although the details of absorption versus emission are determined by the relative populations of the 2s and 2p states. The n=2 level populations are solved with a parameterization that allows for Lyman-alpha pumping of the 2p states. The density of Lyman-alpha photons is set by their creation rate, easily determined from the recombination rate, and their removal rate. Here we suggest that the dominant removal mechanism of Lyman-alpha radiation in HII regions is absorption by dust. This circumvents the need to solve the Lyman-alpha transfer problem, and provides an upper limit to the rate at which the 2p states are populated by Lyman-alpha photons. In virtually all cases of interest, the 2p states are predominantly populated by recombination, rather than Lyman-alpha pumping. We then solve the radiative transfer problem for the fine structure lines in the presence of free-free radiation. In the likely absence of Lyman-alpha pumping, the 2s_{1/2}-2p_{1/2} lines will appear in stimulated emission and the 2s_{1/2}-2p_{3/2} lines in absorption. Searching for the 9.9 GHz lines in high emission measure HII regions offers the best prospects for detection. The lines are predicted to be weak; in the best cases, line-to-continuum ratios of several tenths of a percent might be expected, with line strengths of tens to a hundred mK with the Green Bank Telescope.
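
    The dust-absorption argument amounts to a simple equilibrium: Lyman-alpha photons are created by a fixed fraction of recombinations and destroyed by dust, so their steady-state density follows from balancing the two rates. The notation below (f for the Lyman-alpha-producing fraction of case-B recombinations, sigma_d and n_d for dust cross-section and density) is ours, a back-of-envelope restatement rather than the paper's derivation.

    ```latex
    % Steady-state Ly-alpha photon density from creation/destruction
    % balance (our notation; a back-of-envelope restatement):
    %   creation  ~ fraction f of case-B recombinations,
    %   destruction ~ absorption by dust at rate n_d sigma_d c.
    \begin{equation*}
      f\,\alpha_B\, n_e n_p \;=\; n_{\mathrm{Ly}\alpha}\, n_d \sigma_d c
      \quad\Longrightarrow\quad
      n_{\mathrm{Ly}\alpha} \;=\; \frac{f\,\alpha_B\, n_e n_p}{n_d \sigma_d c}.
    \end{equation*}
    % Because dust removal is fast, this bounds from above the photon
    % density available to pump the 2p states, as the abstract notes.
    ```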

    Classification of time series by shapelet transformation

    Time-series classification (TSC) problems present a specific challenge for classification algorithms: how to measure similarity between series. A shapelet is a time-series subsequence that allows for TSC based on local, phase-independent similarity in shape. Shapelet-based classification uses the similarity between a shapelet and a series as a discriminatory feature. One benefit of the shapelet approach is that shapelets are comprehensible, and can offer insight into the problem domain. The original shapelet-based classifier embeds the shapelet-discovery algorithm in a decision tree, and uses information gain to assess the quality of candidates, finding a new shapelet at each node of the tree through an enumerative search. Subsequent research has focused mainly on techniques to speed up the search. We examine how best to use the shapelet primitive to construct classifiers. We propose a single-scan shapelet algorithm that finds the best k shapelets, which are used to produce a transformed dataset, where each of the k features represents the distance between a time series and a shapelet. The primary advantages over the embedded approach are that the transformed data can be used in conjunction with any classifier, and that there is no recursive search for shapelets. We demonstrate that the transformed data, in conjunction with more complex classifiers, gives greater accuracy than the embedded shapelet tree. We also evaluate three similarity measures that produce equivalent results to information gain in less time. Finally, we show that by conducting post-transform clustering of shapelets, we can enhance the interpretability of the transformed data. We conduct our experiments on 29 datasets: 17 from the UCR repository, and 12 that we provide ourselves.
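
    The transform itself is easy to state: for each of the k selected shapelets, the feature value of a series is the minimum Euclidean distance between the shapelet and any same-length subsequence of the series. The sketch below shows only this transformation step (the shapelet discovery and quality-ranking stage is omitted); function names are illustrative.

    ```python
    # Sketch of the shapelet *transform* step: given k already
    # selected shapelets, each series is mapped to a k-dimensional
    # vector of subsequence distances. Discovery and ranking
    # (information gain etc.) are omitted here.

    import numpy as np

    def sdist(shapelet, series):
        # Minimum Euclidean distance between the shapelet and any
        # same-length subsequence of the series (phase-independent).
        L = len(shapelet)
        return min(
            float(np.linalg.norm(series[i:i + L] - shapelet))
            for i in range(len(series) - L + 1)
        )

    def shapelet_transform(shapelets, dataset):
        # dataset: (n_series, series_len) array -> (n_series, k)
        return np.array([[sdist(s, ts) for s in shapelets]
                         for ts in dataset])

    data = np.array([[0, 0, 1, 2, 1, 0, 0],
                     [0, 1, 0, 0, 0, 2, 4]], dtype=float)
    shapelets = [np.array([1.0, 2.0, 1.0])]
    print(shapelet_transform(shapelets, data))
    # The resulting feature matrix can feed any standard classifier.
    ```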

    The Role of Deontic Logic in the Specification of Information Systems

    In this paper we discuss the role that deontic logic plays in the specification of information systems, either because constraints on the systems directly concern norms or, even more importantly, because system constraints are considered ideal but violable (so-called 'soft' constraints). To overcome the traditional problems with deontic logic (the so-called paradoxes), we first state the importance of distinguishing between ought-to-be and ought-to-do constraints, and next focus on the most severe paradox, the so-called Chisholm paradox, involving contrary-to-duty norms. We present a multi-modal extension of standard deontic logic (SDL) to represent the ought-to-be version of the Chisholm set properly. For the ought-to-do variant we employ a reduction to dynamic logic, and show how the Chisholm set can be treated adequately in this setting. Finally, we discuss a way of integrating both ought-to-be and ought-to-do reasoning, enabling one to draw conclusions from ought-to-be constraints to ought-to-do ones, and show by example the use(fulness) of this approach.
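
    For readers unfamiliar with the paradox: in standard deontic logic the four Chisholm sentences, which seem mutually consistent and logically independent, yield an outright contradiction, which is what motivates the multi-modal and dynamic-logic treatments above. A standard rendering:

    ```latex
    % The Chisholm set in SDL notation (standard rendering):
    %   g = "Jones goes to help", t = "Jones tells them he is coming"
    \begin{align*}
      &(1)\; O\,g               && \text{Jones ought to go} \\
      &(2)\; O(g \to t)         && \text{it ought to be that if he goes, he tells} \\
      &(3)\; \neg g \to O\neg t && \text{if he does not go, he ought not tell} \\
      &(4)\; \neg g             && \text{in fact, he does not go}
    \end{align*}
    % From (1),(2) and the K-axiom O(g -> t) -> (Og -> Ot): derive Ot.
    % From (3),(4) by modus ponens: derive O(not t). With the D-axiom
    % (no conflicting obligations), Ot and O(not t) are inconsistent.
    ```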
