
    Service composition in stochastic settings

    With the growth of the Internet of Things and online Web services, more services with more capabilities are available to us. The ability to generate new, more useful services from existing ones has been the focus of much research for over a decade. The goal is, given a specification of the behavior of the target service, to build a controller, known as an orchestrator, that uses existing services to satisfy the requirements of the target service. The model of services and requirements used in most work is that of a finite state machine. This implies that the specification can either be satisfied or not, with no middle ground. This is a major drawback, since often an exact solution cannot be obtained. In this paper we study a simple stochastic model for service composition: we annotate the target service with probabilities describing the likelihood of requesting each action in a state, and with rewards for being able to execute actions. We show how to solve the resulting problem by solving a certain Markov Decision Process (MDP) derived from the service and requirement specifications. The solution to this MDP induces an orchestrator that coincides with the exact solution if a composition exists. Otherwise it provides an approximate solution that maximizes the expected sum of values of the user requests that can be serviced. The model studied, although simple, sheds light on composition in stochastic settings, and indeed we discuss several possible extensions.
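
    As a rough illustration of the kind of computation involved (the state space, function names and discount factor below are our assumptions, not the paper's construction), a composition MDP of this sort can be solved by standard value iteration, with the greedy policy acting as the orchestrator:

        # Minimal sketch, assuming a composition MDP whose states pair the target
        # specification with the states of the available services, and whose actions
        # delegate the requested operation to one of the services.
        def value_iteration(states, actions, transition, reward, gamma=0.95, eps=1e-6):
            """transition(s, a) -> list of (probability, next_state); reward(s, a) -> float."""
            V = {s: 0.0 for s in states}
            while True:
                delta = 0.0
                for s in states:
                    best = max(
                        reward(s, a) + gamma * sum(p * V[t] for p, t in transition(s, a))
                        for a in actions(s)
                    )
                    delta = max(delta, abs(best - V[s]))
                    V[s] = best
                if delta < eps:
                    break
            # The orchestrator: in each state, delegate to the action of maximal expected value.
            policy = {
                s: max(actions(s), key=lambda a: reward(s, a)
                       + gamma * sum(p * V[t] for p, t in transition(s, a)))
                for s in states
            }
            return V, policy

    With gamma close to 1, the greedy policy approximates the orchestrator that maximizes the expected sum of values of serviced requests described above.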

    Ordered Level Planarity, Geodesic Planarity and Bi-Monotonicity

    We introduce and study the problem Ordered Level Planarity, which asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a y-monotone curve. This can be interpreted as a variant of Level Planarity in which the vertices on each level appear in a prescribed total order. We establish a complexity dichotomy with respect to both the maximum degree and the level-width, that is, the maximum number of vertices that share a level. Our study of Ordered Level Planarity is motivated by connections to several other graph drawing problems. Geodesic Planarity asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a polygonal path composed of line segments with two adjacent directions from a given set $S$ of directions symmetric with respect to the origin. Our results on Ordered Level Planarity imply NP-hardness for any $S$ with $|S| \ge 4$ even if the given graph is a matching. Katz, Krug, Rutter and Wolff claimed that for matchings Manhattan Geodesic Planarity, the case where $S$ contains precisely the horizontal and vertical directions, can be solved in polynomial time [GD'09]. Our results imply that this is incorrect unless P = NP. Our reduction extends to settle the complexity of the Bi-Monotonicity problem, which was proposed by Fulek, Pelsmajer, Schaefer and Štefankovič. Ordered Level Planarity turns out to be a special case of T-Level Planarity, Clustered Level Planarity and Constrained Level Planarity. Thus, our results strengthen previous hardness results. In particular, our reduction to Clustered Level Planarity generates instances with only two non-trivial clusters. This answers a question posed by Angelini, Da Lozzo, Di Battista, Frati and Roselli. Comment: Appears in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017)
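
    To make the setting concrete (the encoding below is ours and implements only a necessary condition, not a decision procedure), an Ordered Level Planarity instance can be stored as a leveling plus a per-level order, and any pair of edges sharing both endpoint levels already forces a crossing when their end orders are inverted:

        # Hypothetical encoding of an Ordered Level Planarity instance with a
        # *necessary* (not sufficient) check: two y-monotone edges that start on the
        # same level and end on the same level must cross if their end orders flip.
        def forced_crossing(level, order, edges):
            """level[v]: level index of v; order[v]: prescribed position of v within its level;
            edges: list of pairs (u, v) with level[u] < level[v]."""
            for i, (u1, v1) in enumerate(edges):
                for u2, v2 in edges[i + 1:]:
                    if level[u1] == level[u2] and level[v1] == level[v2]:
                        if (order[u1] - order[u2]) * (order[v1] - order[v2]) < 0:
                            return True
            return False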

    A critical comparison of different definitions of topological charge on the lattice

    A detailed comparison is made between the field-theoretic and geometric definitions of topological charge density on the lattice. Their renormalizations with respect to the continuum are analysed. The definition of the topological susceptibility, as used in chiral Ward identities, is reviewed. After performing the subtractions it requires, the different lattice methods yield results in agreement with each other. The methods based on cooling and on counting fermionic zero modes are also discussed. Comment: 12 pages (LaTeX file) + 7 (postscript) figures. Revised version. Submitted to Phys. Rev.
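
    For orientation, the quantities being compared are (in standard notation from the general lattice literature, up to conventions for factors of the coupling, and not quoted from the paper) the continuum topological charge and its field-theoretic lattice discretization,

        Q = \frac{1}{32\pi^2} \int d^4x \, \epsilon_{\mu\nu\rho\sigma}
            \mathrm{Tr}\!\left[ F_{\mu\nu}(x) F_{\rho\sigma}(x) \right],
        \qquad
        q_L(x) = -\frac{1}{2^4 \cdot 32\pi^2} \sum_{\mu\nu\rho\sigma = \pm 1}^{\pm 4}
            \tilde\epsilon_{\mu\nu\rho\sigma} \mathrm{Tr}\!\left[ U_{\mu\nu}(x)\, U_{\rho\sigma}(x) \right],

    where the lattice density is related to the continuum one by a multiplicative renormalization, q_L(x) ≃ a^4 Z(β) q(x) + O(a^6), so that the susceptibility built from it requires both Z(β) and an additive subtraction before it can be matched to the chiral Ward identities mentioned above.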

    Dense coding with multipartite quantum states

    We consider generalisations of the dense coding protocol with an arbitrary number of senders and either one or two receivers, sharing a multiparty quantum state, and using a noiseless channel. For the case of a single receiver, the capacity of such information transfer is found exactly. It is shown that the capacity is not enhanced by allowing the senders to perform joint operations. We provide a nontrivial upper bound on the capacity in the case of two receivers. We also give a classification of the set of all multiparty states in terms of their usefulness for dense coding. We provide examples for each of these classes, and discuss some of their properties. Comment: 14 pages, 1 figure, RevTeX
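
    For orientation, the single-receiver capacity established in this line of work takes the form (our paraphrase of the standard statement, not a quotation from the paper)

        C = \log d_{A_1 \cdots A_N} + \max\{\, 0, \; S(\rho_B) - S(\rho_{A_1 \cdots A_N B}) \,\},

    where d_{A_1 \cdots A_N} is the total dimension of the senders' systems and S denotes the von Neumann entropy; a shared state is useful for dense coding exactly when the bracketed term is positive, which is the type of criterion behind the classification of multiparty states mentioned above.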

    Financial instability from local market measures

    We study the emergence of instabilities in a stylized model of a financial market, when different market actors calculate prices according to different (local) market measures. We derive typical properties for ensembles of large random markets using techniques borrowed from the statistical mechanics of disordered systems. We show that, depending on the number of financial instruments available and on the heterogeneity of local measures, the market moves from an arbitrage-free phase to an unstable one, where the complexity of the market - as measured by the diversity of financial instruments - increases, and arbitrage opportunities arise. A sharp transition separates the two phases. Focusing on two different classes of local measures inspired by real market strategies, we are able to analytically compute the critical lines, corroborating our findings with numerical simulations. Comment: 17 pages, 4 figures
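
    As a loose numerical companion (this toy setup is ours and is much simpler than the paper's ensemble of heterogeneous local measures), one can probe the arbitrage-free phase of a random market by checking whether a strictly positive state-price vector reproduces the quoted prices, which by the fundamental theorem of asset pricing excludes arbitrage:

        # Minimal sketch, assuming a one-period market with payoff matrix A
        # (instruments x states) and price vector p; existence of q >= eps > 0 with
        # A @ q = p is a sufficient condition for the absence of arbitrage.
        import numpy as np
        from scipy.optimize import linprog

        def admits_positive_pricing_measure(A, p, eps=1e-9):
            n_states = A.shape[1]
            res = linprog(c=np.zeros(n_states), A_eq=A, b_eq=p,
                          bounds=[(eps, None)] * n_states, method="highs")
            return res.success

        rng = np.random.default_rng(0)
        stable = 0
        for _ in range(200):
            A = rng.normal(size=(4, 8))     # 4 instruments, 8 future states
            p = rng.normal(size=4)          # quoted prices
            stable += admits_positive_pricing_measure(A, p)
        print(f"{stable}/200 random markets admit a positive pricing measure")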

    Dual superconductivity in the SU(2) pure gauge vacuum: a lattice study

    We investigate the dual superconductivity hypothesis in pure SU(2) lattice gauge theory. We focus on the dual Meissner effect by analyzing the distribution of the color fields due to a static quark-antiquark pair. We find evidence of the dual Meissner effect both in the maximally Abelian gauge and without gauge fixing. We measure the London penetration length. Our results suggest that the London penetration length is a physical gauge-invariant quantity. We put forward a simple relation between the penetration length and the square root of the string tension. We find that our estimate is quite close to the extrapolated continuum limit available in the literature. A remarkable consequence of our study is that an effective Abelian theory can account for the long range properties of the SU(2) confining vacuum. Comment: 38 pages, uuencoded compressed (using GNU's gzip) tar file containing 1 LaTeX2e file (to be processed 3 times) + 16 encapsulated Postscript figures. A full Postscript version of this paper is available at http://www.ba.infn.it/disk$gruppo_4/cosmai/www/papers/195-95.P
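
    Schematically (our paraphrase of the usual procedure, not the paper's exact fitting function), the penetration length is extracted from the transverse profile of the longitudinal chromoelectric field between the static sources, which in a London-type picture falls off as

        E_l(x_t) \propto K_0\!\left(\frac{x_t}{\lambda}\right) \sim e^{-x_t/\lambda} \quad (x_t \gg \lambda),

    so that the dimensionless combination \lambda \sqrt{\sigma} can be formed and compared with continuum extrapolations.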

    Continuous variable cloning via network of parametric gates

    We propose an experimental scheme for a cloning machine for continuous quantum variables, realized through a network of parametric amplifiers working as input-output four-port gates. Comment: 4 pages, 2 figures. To appear in Phys. Rev. Lett.

    A Declarative Framework for Specifying and Enforcing Purpose-aware Policies

    Purpose is crucial for privacy protection, as it makes users confident that their personal data are processed as intended. Available proposals for the specification and enforcement of purpose-aware policies are unsatisfactory because of their ambiguous semantics of purposes and/or their lack of support for the run-time enforcement of policies. In this paper, we propose a declarative framework based on a first-order temporal logic that allows us to give a precise semantics to purpose-aware policies and to reuse algorithms for the design of a run-time monitor enforcing purpose-aware policies. We also show the complexity of the generation and use of the monitor, which, to the best of our knowledge, is the first such result in the literature on purpose-aware policies. Comment: Extended version of the paper accepted at the 11th International Workshop on Security and Trust Management (STM 2015)
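
    To give a feel for run-time enforcement (the event vocabulary and the policy below are our illustrative assumptions, not the paper's first-order temporal logic), a toy monitor can flag any use of a data item that is not covered by a previously granted, still-valid purpose:

        # Minimal sketch of a purpose-aware run-time monitor over an event trace.
        # Events are tuples (kind, user, data, purpose) with kind in {grant, revoke, use}.
        def monitor(trace):
            granted = set()
            for event in trace:
                kind, user, data, purpose = event
                if kind == "grant":
                    granted.add((user, data, purpose))
                elif kind == "revoke":
                    granted.discard((user, data, purpose))
                elif kind == "use" and (user, data, purpose) not in granted:
                    yield event  # use without a matching, still-valid purpose

        violations = list(monitor([
            ("grant", "alice", "email", "billing"),
            ("use",   "alice", "email", "billing"),
            ("use",   "alice", "email", "marketing"),   # flagged: never granted
        ]))
        print(violations)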

    Effect of the surface chemical composition and of added metal cation concentration on the stability of metal nanoparticles synthesized by pulsed laser ablation in water

    Metal nanoparticles (NPs) made of gold, silver, and platinum have been synthesized by means of pulsed laser ablation in liquid aqueous solution. Independently of the metal, all NPs have an average diameter of 10 ± 5 nm. The ζ-potential values are -62 ± 7 mV for gold, -44 ± 2 mV for silver, and -58 ± 3 mV for platinum. XPS analysis demonstrates the absence of metal oxides in the case of gold and silver NPs. In the case of platinum NPs, 22% of the particle surface is ascribed to oxidized platinum species. This points to a marginal role of metal oxides in building the negative charge that stabilizes these colloidal suspensions. The investigation of the colloidal stability of gold NPs in the presence of metal cations shows that these NPs can be destabilized by trace amounts of selected metal ions. The case of Ag+ is paradigmatic, since it is able to reduce the NP ζ-potential and to induce coagulation at concentrations as low as 3 μM, whereas for K+ the critical coagulation concentration is around 8 mM. It is proposed that such a large difference in destabilization power between monovalent cations can be accounted for by the difference in their reduction potentials.