Graph models for reachability analysis of concurrent programs
Reachability analysis is an attractive technique for the analysis of concurrent programs because it is simple and relatively straightforward to automate, and it can be used in conjunction with model-checking procedures to check application-specific as well as general properties. Several techniques have been proposed, differing mainly in the model used; some propose flowgraph-based models, others Petri nets. This paper addresses the question: What essential difference does it make, if any, what sort of finite-state model we extract from program texts for purposes of reachability analysis? How do the models differ in expressive power, decision power, or accuracy? Since each is intended to model synchronization structure while abstracting away other features, one would expect them to be roughly equivalent. We confirm that there is no essential semantic difference between the best-known models proposed in the literature by providing algorithms for translation among these models. This implies that the choice of model rests on other factors, including convenience and efficiency. Since combinatorial explosion is the primary impediment to the application of reachability analysis, a particular concern in choosing a model is facilitating divide-and-conquer analysis of large programs. Recently, much interest in finite-state verification systems has centered on algebraic theories of concurrency. Yeh and Young have exploited algebraic structure to decompose reachability analysis based on a flowgraph model. The semantic equivalence of graph- and Petri net-based models suggests that one ought to be able to apply a similar strategy for decomposing Petri nets. We show this is indeed possible through an application of category theory.
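As context for the kind of analysis the paper compares across models, the following is a minimal sketch, not taken from the paper, of exhaustive reachability-graph construction for a place/transition Petri net; the marking representation and the toy mutual-exclusion net are illustrative assumptions.

```python
from collections import deque

def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce on output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability_graph(initial, transitions):
    """Breadth-first enumeration of all reachable markings (assumes a bounded net)."""
    start = tuple(sorted(initial.items()))
    seen, edges = {start}, []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                key = tuple(sorted(m2.items()))
                edges.append((tuple(sorted(m.items())), name, key))
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
    return seen, edges

# Toy two-process mutual-exclusion net (illustrative only).
transitions = {
    "acquire1": ({"idle1": 1, "lock": 1}, {"crit1": 1}),
    "release1": ({"crit1": 1}, {"idle1": 1, "lock": 1}),
    "acquire2": ({"idle2": 1, "lock": 1}, {"crit2": 1}),
    "release2": ({"crit2": 1}, {"idle2": 1, "lock": 1}),
}
initial = {"idle1": 1, "idle2": 1, "lock": 1, "crit1": 0, "crit2": 0}
markings, edges = reachability_graph(initial, transitions)
# A safety property such as mutual exclusion is checked over every reachable marking.
assert all(dict(m).get("crit1", 0) + dict(m).get("crit2", 0) <= 1 for m in markings)
```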
Simulation of instability at transition energy with a new impedance model for CERN PS
Instabilities driven by the transverse impedance have proven to be one of the limitations on the high-intensity reach of the CERN PS. For several years, a fast single-bunch vertical instability at transition energy has been observed with the high-intensity bunch serving the neutron Time-of-Flight facility (n-ToF). To better understand the instability mechanism, a dedicated measurement campaign took place. The results were compared with PyHEADTAIL macro-particle simulations based on the new impedance model developed for the PS. Instability thresholds and growth rates were studied for different longitudinal emittances and beam intensities.
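As an illustration of how a growth rate is typically extracted from such macro-particle simulations (a generic sketch, not the paper's analysis chain, and independent of the PyHEADTAIL API), one can fit an exponential envelope to the turn-by-turn vertical bunch centroid; the signal below is synthetic and all numbers are assumed.

```python
import numpy as np

# Synthetic turn-by-turn vertical centroid: a betatron oscillation with an
# exponentially growing envelope (growth rate and tune are illustrative values).
turns = np.arange(20000)
true_rate = 1.5e-4                      # assumed growth rate [1/turn]
tune = 0.24                             # assumed fractional vertical tune
y = 1e-6 * np.exp(true_rate * turns) * np.cos(2 * np.pi * tune * turns)

# Estimate the envelope as the peak amplitude in consecutive windows, then fit
# log(amplitude) versus turn number: the slope is the instability growth rate.
window = 200
n_win = len(y) // window
amps = np.abs(y[:n_win * window]).reshape(n_win, window).max(axis=1)
centers = (np.arange(n_win) + 0.5) * window
slope, intercept = np.polyfit(centers, np.log(amps), 1)
print(f"fitted growth rate: {slope:.2e} per turn (input was {true_rate:.2e})")
```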
Interpretation of AMS-02 electrons and positrons data
We perform a combined analysis of the recent AMS-02 data on electrons, positrons, electrons plus positrons, and the positron fraction, in a self-consistent framework where we model all the astrophysical components that can contribute to the observed fluxes over the whole energy range. The primary electron contribution is modeled as the sum of an average flux from distant sources and the fluxes from the local supernova remnants in the Green catalog. The secondary electron and positron fluxes originate from interactions of primary cosmic rays on the interstellar medium, for which we derive a novel determination using AMS-02 proton and helium data. Primary positrons and electrons from pulsar wind nebulae in the ATNF catalog are included and studied in terms of their most significant (though loosely known) properties and under different assumptions (average contribution from the whole catalog, a single dominant pulsar, a few dominant pulsars). We obtain remarkable agreement between our models and the AMS-02 data for all types of analysis, demonstrating that the whole body of AMS-02 leptonic data admits a self-consistent interpretation in terms of astrophysical contributions.
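A minimal sketch of the multi-component fitting idea described above (purely illustrative: the functional forms, parameter values, and mock data are placeholders, not the paper's propagation model) writes the positron flux as a secondary power law plus a pulsar-like term with an exponential cutoff and fits it to data.

```python
import numpy as np
from scipy.optimize import curve_fit

def positron_flux(E, C_sec, gamma_sec, C_psr, gamma_psr, E_cut):
    """Toy two-component positron flux (arbitrary units): a secondary power law
    plus a pulsar-like power law with an exponential energy cutoff."""
    secondary = C_sec * E**(-gamma_sec)
    pulsar = C_psr * E**(-gamma_psr) * np.exp(-E / E_cut)
    return secondary + pulsar

# Placeholder "data": energies in GeV and a mock flux with 10% uncertainties.
E = np.logspace(0.5, 2.7, 30)
flux_true = positron_flux(E, 2.0, 3.5, 0.05, 2.4, 600.0)
flux_err = 0.1 * flux_true
rng = np.random.default_rng(0)
flux_obs = flux_true + rng.normal(0.0, flux_err)

# Fit the component normalizations, spectral indices, and cutoff energy.
p0 = [1.0, 3.4, 0.1, 2.5, 500.0]
popt, pcov = curve_fit(positron_flux, E, flux_obs, p0=p0, sigma=flux_err)
print("best-fit parameters:", popt)
```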
Time and Geometric Quantization
In this paper we briefly review the functional version of the Koopman-von Neumann operatorial approach to classical mechanics. We then show that its quantization can be achieved by freezing to zero two Grassmannian partners of time. This method of quantization presents many similarities with the one known as Geometric Quantization.
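For context (standard textbook background, not material taken from the paper), the Koopman-von Neumann formulation promotes the classical Liouville equation to a Schrödinger-like evolution for a phase-space "wave function", generated by the Liouville operator built from the classical Hamiltonian H(q,p):

```latex
% Koopman-von Neumann evolution of a phase-space wave function psi(q,p,t);
% the generator is the Liouville operator of the classical Hamiltonian H(q,p).
i\,\frac{\partial}{\partial t}\,\psi(q,p,t) \;=\; \hat{L}\,\psi(q,p,t),
\qquad
\hat{L} \;=\; -\,i\left(\frac{\partial H}{\partial p}\,\frac{\partial}{\partial q}
\;-\;\frac{\partial H}{\partial q}\,\frac{\partial}{\partial p}\right).
```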
Flavored tetraquark spectroscopy
The recent confirmation of the charged charmonium-like resonance Z(4430) by the LHCb experiment strongly suggests the existence of QCD multi-quark bound states. Some preliminary results on hypothetical flavored tetraquark mesons are reported. Such states are particularly amenable to Lattice QCD studies because their interpolating operators do not overlap with those of ordinary hidden-charm mesons.
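As an illustration of why such operators cannot mix with ordinary hidden-charm mesons (a generic textbook-style example, not necessarily an operator used in this work), an open-flavor cc-ubar-dbar interpolator can be written in meson-meson form; its net charm is two, so it shares no flavor quantum numbers with a c-cbar state:

```latex
% Generic open-flavor (c c u-bar d-bar) tetraquark interpolator in meson-meson form;
% its net charm C = 2, so it cannot overlap with hidden-charm (c-bar c) mesons.
O_{4q}(x) \;=\; \big[\bar{u}(x)\,\gamma_5\,c(x)\big]\,\big[\bar{d}(x)\,\gamma_5\,c(x)\big].
```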
Backbone of credit relationships in the Japanese credit market
We detect the backbone of the weighted bipartite network of Japanese credit market relationships. The backbone is detected by adapting a general method used in the investigation of weighted networks. With this approach we obtain a backbone that is statistically validated against a null hypothesis of uniform diversification of loans for banks and firms. Our investigation is carried out year by year and covers more than thirty years, from 1980 to 2011. We relate some of our findings to economic events that characterized the Japanese credit market over this period. The study of the time evolution of the backbone allows us to detect changes in network size, in the fraction of credit explained, and in the attributes characterizing the banks and firms present in the backbone.
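A minimal sketch of this style of statistical validation (an illustration of the general idea, not the authors' exact procedure): under uniform diversification, the credit a bank extends to a given firm is treated as a binomial share of the bank's total lending, and a link is kept in the backbone only when its observed weight is unlikely under that null, after a multiple-comparison correction.

```python
import numpy as np
from scipy.stats import binom

def validated_backbone(weights, alpha=0.01):
    """weights[i, j]: credit (in discrete units) from bank i to firm j.
    Keep link (i, j) when its weight exceeds what is expected under a null of
    uniform diversification: bank i spreads its total credit over firms in
    proportion to each firm's overall share of borrowing."""
    bank_tot = weights.sum(axis=1, keepdims=True)      # total credit per bank
    firm_tot = weights.sum(axis=0, keepdims=True)      # total credit per firm
    p_null = firm_tot / weights.sum()                  # null share of each firm
    # One-sided p-value: probability of observing at least the measured weight.
    pvals = binom.sf(weights - 1, bank_tot, p_null)
    # Bonferroni correction over all tested bank-firm pairs (illustrative choice;
    # a false-discovery-rate correction is a common alternative).
    return pvals < alpha / weights.size

# Toy example: 3 banks x 4 firms, credit in integer units.
w = np.array([[10, 0, 1, 0],
              [2, 8, 2, 1],
              [0, 1, 0, 9]])
print(validated_backbone(w))
```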
Bank-firm credit network in Japan. An analysis of a bipartite network
We present an analysis of the credit market of Japan. The analysis is performed by investigating the bipartite network of banks and firms obtained by setting a link between a bank and a firm when a credit relationship is present in a given time window. Our investigation focuses on a community detection algorithm that identifies communities composed of both banks and firms. We show that the clusters obtained by working directly on the bipartite network carry information about the networked nature of the Japanese credit market. Our analysis is performed for each calendar year during the period from 1980 to 2011. Specifically, we obtain communities of banks and firms for each of the 32 investigated years, and we introduce a method to track the time evolution of these communities on a statistical basis. We then characterize communities by detecting the simultaneous over-expression of attributes of firms and banks. Specifically, we consider as attributes the economic sector and geographical location of firms and the type of banks. In our 32-year-long analysis we detect a persistence of the over-expression of attributes of clusters of banks and firms, together with a slow dynamics of change from some specific attributes to new ones. Our empirical observations show that the credit market in Japan is a networked market where the type of banks, the geographical location of firms and banks, and the economic sector of the firms play a role in shaping the credit relationships between banks and firms.
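A minimal sketch of attribute over-expression testing in the spirit described above (an illustration of the general statistical idea, not the authors' exact procedure): for each community and each attribute value, a one-sided hypergeometric test asks whether the attribute appears in the community more often than expected from its overall frequency.

```python
from collections import Counter
from scipy.stats import hypergeom

def overexpressed_attributes(community, population, attribute, alpha=0.01):
    """community, population: lists of node ids; attribute: dict node -> label.
    Return the labels over-expressed in the community (one-sided hypergeometric
    test with a Bonferroni correction over the tested labels)."""
    N = len(population)                       # population size
    n = len(community)                        # community size
    pop_counts = Counter(attribute[v] for v in population)
    com_counts = Counter(attribute[v] for v in community)
    labels = list(pop_counts)
    results = []
    for label in labels:
        K = pop_counts[label]                 # label count in the population
        k = com_counts.get(label, 0)          # label count in the community
        # P(X >= k) for X ~ Hypergeometric(N, K, n)
        pval = hypergeom.sf(k - 1, N, K, n)
        if pval < alpha / len(labels):
            results.append((label, k, K, pval))
    return results

# Toy example: firms tagged by economic sector, one candidate community.
population = list(range(100))
attribute = {v: ("manufacturing" if v < 30 else "services") for v in population}
community = list(range(25))                    # a cluster of mostly manufacturing firms
print(overexpressed_attributes(community, population, attribute))
```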
Detection of hidden structures on all scales in amorphous materials and complex physical systems: basic notions and applications to networks, lattice systems, and glasses
Recent decades have seen the discovery of numerous complex materials. At the root of the complexity underlying many of these materials lies a large number of possible contending atomic- and larger-scale configurations and the intricate correlations between their constituents. For a detailed understanding, there is a need for tools that enable the detection of pertinent structures on all spatial and temporal scales. Towards this end, we suggest a new method that invokes ideas from network analysis and information theory. Our method efficiently identifies basic unit cells and topological defects in systems with low disorder, and it can analyze general amorphous structures to identify candidate natural structures where a clear definition of order is lacking. This general, unbiased detection of physical structure does not require a guess as to which system properties should be deemed important, and it may constitute a natural point of departure for further analysis. The method applies to both static and dynamic systems.
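A minimal sketch in the spirit of the network-plus-information-theory approach (illustrative only, not the authors' algorithm, with neighbour cutoffs and the toy lattice as assumptions): map particles to a graph by linking near neighbours, detect communities at different interaction scales, and use the normalized mutual information between the resulting partitions to flag scales at which the detected structure is stable.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

def neighbour_graph(points, cutoff):
    """Link every pair of particles closer than `cutoff` (toy construction)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if d[i, j] < cutoff:
                G.add_edge(i, j)
    return G

def partition_labels(G):
    """Community detection via modularity maximization; return one label per node."""
    labels = np.zeros(G.number_of_nodes(), dtype=int)
    for c, community in enumerate(greedy_modularity_communities(G)):
        for v in community:
            labels[v] = c
    return labels

# Toy "material": a slightly perturbed square lattice of particles.
rng = np.random.default_rng(1)
grid = np.array([[x, y] for x in range(8) for y in range(8)], dtype=float)
points = grid + 0.05 * rng.normal(size=grid.shape)

# Compare the structure found at two interaction scales: a high normalized
# mutual information indicates a partition that is stable across scales and
# therefore a candidate "natural" structure.
labels_a = partition_labels(neighbour_graph(points, cutoff=1.2))
labels_b = partition_labels(neighbour_graph(points, cutoff=1.6))
print("NMI between scales:", normalized_mutual_info_score(labels_a, labels_b))
```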