Process Completing Sequences for Resource Allocation Systems with Synchronization
This paper considers the problem of establishing live resource allocation in workflows with synchronization stages. Establishing live resource allocation in this class of systems is challenging since deciding whether a given level of resource capacities is sufficient to complete a single process is NP-complete. In this paper, we develop two necessary conditions and one sufficient condition that provide quickly computable tests for the existence of process completing sequences. The necessary conditions are based on the sequence of completions of the n subprocesses that merge together at a synchronization. Although the worst case complexity is O(2^n), we expect the number of subprocesses combined at any synchronization to be sufficiently small that total computation time remains manageable. The sufficient condition uses a reduction scheme that computes a sufficient capacity level of each resource type to complete and merge all n subprocesses. The worst case complexity is O(n·m), where m is the number of synchronizations. Finally, the paper develops capacity bounds and polynomial methods for generating feasible resource allocation sequences for merging systems with single unit allocation. This method is based on single step look-ahead for deadly marked siphons and is O(n^2). Throughout the paper, we use a class of Petri nets called Generalized Augmented Marked Graphs to represent our resource allocation systems.
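The flavor of the sufficient condition can be illustrated with a minimal sketch. The stage-list representation below is an assumption for illustration only, not the paper's Generalized Augmented Marked Graph machinery: summing, over subprocesses, the peak per-stage demand for each resource type yields a conservative capacity level under which all subprocesses can advance to the synchronization concurrently.

```python
def sufficient_capacity(subprocesses):
    """Illustrative capacity bound (hypothetical representation, not the
    paper's exact reduction). Each subprocess is a list of stages; each
    stage is a dict mapping resource type -> units held at that stage.
    Returns, per resource type, the sum over subprocesses of the maximum
    units any single stage holds -- enough for every subprocess to reach
    the synchronization at the same time."""
    capacity = {}
    for stages in subprocesses:
        # Peak demand of this subprocess for each resource type.
        peak = {}
        for stage in stages:
            for res, units in stage.items():
                peak[res] = max(peak.get(res, 0), units)
        # Reserve the peak of every subprocess simultaneously.
        for res, units in peak.items():
            capacity[res] = capacity.get(res, 0) + units
    return capacity
```

For example, two subprocesses holding at most {A: 2, B: 1} and {B: 2} respectively can always merge under capacities {A: 2, B: 3}.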
Inhomogeneous percolation models for spreading phenomena in random graphs
Percolation theory has been largely used in the study of structural
properties of complex networks such as the robustness, with remarkable results.
Nevertheless, a purely topological description is not sufficient for a correct
characterization of networks behaviour in relation with physical flows and
spreading phenomena taking place on them. The functionality of real networks
also depends on the ability of the nodes and the edges in bearing and handling
loads of flows, energy, information and other physical quantities. We propose
to study these properties introducing a process of inhomogeneous percolation,
in which both the nodes and the edges spread out the flows with a given
probability.
Generating functions approach is exploited in order to get a generalization
of the Molloy-Reed Criterion for inhomogeneous joint site bond percolation in
correlated random graphs. A series of simple assumptions allows the analysis of
more realistic situations, for which a number of new results are presented. In
particular, for the site percolation with inhomogeneous edge transmission, we
obtain the explicit expressions of the percolation threshold for many
interesting cases, that are analyzed by means of simple examples and numerical
simulations. Some possible applications are discussed.
Comment: 28 pages, 11 figures
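In the homogeneous special case, the Molloy-Reed criterion mentioned above gives an explicit bond-percolation threshold for an uncorrelated random graph: a giant component survives uniform edge transmission with probability T when T(⟨k²⟩ − ⟨k⟩)/⟨k⟩ > 1. A small sketch of that standard special case (not the paper's full inhomogeneous, correlated generalization):

```python
def bond_percolation_threshold(pk):
    """Critical uniform bond-occupation probability T_c for an
    uncorrelated random graph with degree distribution pk (dict k -> P(k)),
    from the Molloy-Reed criterion: T_c = <k> / (<k^2> - <k>)."""
    k1 = sum(k * p for k, p in pk.items())          # <k>
    k2 = sum(k * k * p for k, p in pk.items())      # <k^2>
    return k1 / (k2 - k1)
```

For a 3-regular graph this gives T_c = 3/(9 − 3) = 0.5; heterogeneous degree distributions lower the threshold as ⟨k²⟩ grows.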
Redundancy Scheduling with Locally Stable Compatibility Graphs
Redundancy scheduling is a popular concept to improve performance in
parallel-server systems. In the baseline scenario any job can be handled
equally well by any server, and is replicated to a fixed number of servers
selected uniformly at random. Quite often however, there may be heterogeneity
in job characteristics or server capabilities, and jobs can only be replicated
to specific servers because of affinity relations or compatibility constraints.
In order to capture such situations, we consider a scenario where jobs of
various types are replicated to different subsets of servers as prescribed by a
general compatibility graph. We exploit a product-form stationary distribution
and weak local stability conditions to establish a state space collapse in
heavy traffic. In this limiting regime, the parallel-server system with
graph-based redundancy scheduling operates as a multi-class single-server
system, achieving full resource pooling and exhibiting strong insensitivity to
the underlying compatibility constraints.
Comment: 28 pages, 4 figures
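The benefit of replication in the baseline scenario can be seen in an idealized no-queueing sketch (a toy Monte-Carlo illustration, not the paper's heavy-traffic analysis): a job replicated to d idle exponential servers finishes when the first replica completes, so its service time is the minimum of d exponentials, with mean 1/(d·μ).

```python
import random

def mean_response_redundancy(d, mu=1.0, n_jobs=100_000, seed=42):
    """Monte-Carlo sketch of cancel-on-completion redundancy: each job is
    replicated to d idle exponential(mu) servers and departs when the first
    replica finishes, i.e. after the minimum of d exponential service
    times. Returns the empirical mean response, which approaches
    1 / (d * mu)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_jobs):
        total += min(rng.expovariate(mu) for _ in range(d))
    return total / n_jobs
```

Doubling the replication factor here halves the mean service time; the compatibility graph in the abstract constrains which servers each job type may draw its d replicas from.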
Graphical Markov models, unifying results and their interpretation
Graphical Markov models combine conditional independence constraints with
graphical representations of stepwise data generating processes. The models
started to be formulated about 40 years ago and vigorous development is
ongoing. Longitudinal observational studies as well as intervention studies are
best modeled via a subclass called regression graph models and, especially
traceable regressions. Regression graphs include two types of undirected graph
and directed acyclic graphs in ordered sequences of joint responses. Response
components may correspond to discrete or continuous random variables and may
depend exclusively on variables which have been generated earlier. These
aspects are essential when causal hypotheses are the motivation for the
planning of empirical studies.
To turn the graphs into useful tools for tracing developmental pathways and
for predicting structure in alternative models, the generated distributions
have to mimic some properties of joint Gaussian distributions. Here, relevant
results concerning these aspects are spelled out and illustrated by examples.
With regression graph models, it becomes feasible, for the first time, to
derive structural effects of (1) ignoring some of the variables, of (2)
selecting subpopulations via fixed levels of some other variables or of (3)
changing the order in which the variables might get generated. Thus, the most
important future applications of these models will aim at the best possible
integration of knowledge from related studies.
Comment: 34 pages, 11 figures, 1 table
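Structural effect (1), ignoring some of the variables, can be sketched on a toy stepwise Gaussian generating process (an assumed example, not one from the paper): in the chain U → V → Y with V = aU + e₁ and Y = bV + e₂, marginalizing over V leaves Y regressed on U with slope ab, the product of edge coefficients along the tracing path.

```python
import random

def marginal_slope(a, b, n=200_000, seed=7):
    """Simulate the linear Gaussian chain U -> V -> Y (V = a*U + e1,
    Y = b*V + e2, with independent standard-normal U, e1, e2), then
    regress Y on U while IGNORING V. The least-squares slope
    cov(U, Y) / var(U) converges to a * b: the structural effect of
    marginalizing over the intermediate variable."""
    rng = random.Random(seed)
    su = sy = suu = suy = 0.0
    for _ in range(n):
        u = rng.gauss(0, 1)
        v = a * u + rng.gauss(0, 1)
        y = b * v + rng.gauss(0, 1)
        su += u; sy += y; suu += u * u; suy += u * y
    cov = suy / n - (su / n) * (sy / n)
    var = suu / n - (su / n) ** 2
    return cov / var
```

With a = 0.5 and b = 0.8 the recovered slope is close to 0.4, mimicking the path-tracing rules of joint Gaussian distributions referred to above.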
Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models,
such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets,
which are all suitable to characterize the so-called information granules.
Modeling of the input data uncertainty is recognized as a crucial aspect in
information granulation. Moreover, the uncertainty is a well-studied concept in
many mathematical settings, such as those of probability theory, fuzzy set
theory, and possibility theory. This fact suggests that an appropriate
quantification of the uncertainty expressed by the information granule model
could be used to define an invariant property, to be exploited in practical
situations of information granulation. In this perspective, a procedure of
information granulation is effective if the uncertainty conveyed by the
synthesized information granule is in a monotonically increasing relation with
the uncertainty of the input data. In this paper, we present a data granulation
framework that elaborates over the principles of uncertainty introduced by
Klir. Being the uncertainty a mesoscopic descriptor of systems and data, it is
possible to apply such principles regardless of the input data type and the
specific mathematical setting adopted for the information granules. The
proposed framework is conceived (i) to offer a guideline for the synthesis of
information granules and (ii) to build a groundwork for quantitatively
comparing different data granulation procedures. To provide a
suitable case study, we introduce a new data granulation technique based on the
minimum sum of distances, which is designed to generate type-2 fuzzy sets. We
analyze the procedure by performing different experiments on two distinct data
types: feature vectors and labeled graphs. Results show that the uncertainty of
the input data is suitably conveyed by the generated type-2 fuzzy set models.
Comment: 16 pages, 9 figures, 52 references
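The core of a minimum-sum-of-distances (MinSOD) prototype can be sketched in a few lines: the granule is seeded by the set element minimizing the total distance to all others (the medoid), which works for any data type equipped with a distance, feature vectors and labeled graphs alike. This sketch covers only the prototype selection; the construction of the type-2 fuzzy set around it is not reproduced here.

```python
def minsod_prototype(items, dist):
    """Return the element of `items` minimizing the sum of distances to
    all other elements (the set medoid). `dist` is any symmetric distance
    function, so the same routine applies to feature vectors, labeled
    graphs, or other structured data."""
    return min(items, key=lambda x: sum(dist(x, y) for y in items))
```

For scalars under the absolute-value distance the MinSOD prototype coincides with the median, e.g. 3 for the set {1, 2, 3, 10, 11}.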