Adaptive Random Walks on the Class of Web Graph
We study random walk with adaptive move strategies on a class of directed
graphs with variable wiring diagram. The graphs are grown from the evolution
rules compatible with the dynamics of the world-wide Web [Tadi\'c, Physica A
{\bf 293}, 273 (2001)], and are characterized by a pair of power-law
distributions of out- and in-degree for each value of the parameter \beta,
which measures the degree of rewiring in the graph. The walker adapts its move
strategy according to locally available information both on out-degree of the
visited node and in-degree of target node. A standard random walk, on the other
hand, uses the out-degree only. We compute the distribution of connected
subgraphs visited by an ensemble of walkers, the average access time and
survival probability of the walks. We discuss these properties of the walk
dynamics relative to the changes in the global graph structure when the control
parameter is varied. For the value of \beta corresponding to the
world-wide Web, the access time of the walk to a given level of hierarchy on
the graph is much shorter compared to the standard random walk on the same
graph. By reducing the amount of rewiring towards the rigidity limit \beta \to
\beta_c \lesssim 0.1, corresponding to the range of naturally occurring
biochemical networks, the survival probabilities of the adaptive and standard
random walks become increasingly similar. The adaptive random walk can be used
as an efficient message-passing algorithm on this class of graphs for a large
degree of rewiring.

Comment: 8 pages, including 7 figures; to appear in Europ. Phys. Journal
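The move rule described in the abstract can be sketched as follows. An adaptive walker at node u weights each out-neighbor v by the in-degree of v, while a standard walker chooses uniformly among out-neighbors (using out-degree information only). The toy graph and the helper names are illustrative, not the paper's implementation.

```python
from collections import defaultdict

def in_degrees(graph):
    """Count incoming edges for every node of a directed graph
    given as {node: [out-neighbors]}."""
    indeg = defaultdict(int)
    for u in graph:
        indeg[u] += 0                 # ensure nodes with no in-edges appear
        for v in graph[u]:
            indeg[v] += 1
    return dict(indeg)

def adaptive_step_probs(graph, u):
    """Adaptive move: probability of stepping u -> v is proportional to
    the in-degree of the target v (a weighting matching the abstract's
    description, not necessarily the paper's exact rule)."""
    indeg = in_degrees(graph)
    weights = {v: indeg[v] for v in graph[u]}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

def standard_step_probs(graph, u):
    """Standard random walk: uniform over out-neighbors."""
    k = len(graph[u])
    return {v: 1.0 / k for v in graph[u]}

# Toy directed graph; resulting in-degrees are a:1, b:1, c:2.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(adaptive_step_probs(graph, "a"))  # b gets 1/3, c gets 2/3
print(standard_step_probs(graph, "a"))  # uniform: 1/2 each
```

The adaptive walker is thus biased towards well-connected targets, which is the mechanism behind the shorter access times reported for large rewiring.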
Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning
We consider the estimation of an i.i.d. (possibly non-Gaussian) vector
$\mathbf{x} \in \mathbb{R}^n$ from measurements $\mathbf{y} \in \mathbb{R}^m$
obtained by a general cascade model
consisting of a known linear transform followed by a probabilistic
componentwise (possibly nonlinear) measurement channel. A novel method, called
adaptive generalized approximate message passing (Adaptive GAMP), that enables
joint learning of the statistics of the prior and measurement channel along
with estimation of the unknown vector $\mathbf{x}$ is presented. The proposed
algorithm is a generalization of the recently developed EM-GAMP method that uses
expectation-maximization (EM) iterations where the posteriors in the E-steps
are computed via approximate message passing. The methodology can be applied to
a large class of learning problems including the learning of sparse priors in
compressed sensing or identification of linear-nonlinear cascade models in
dynamical systems and neural spiking processes. We prove that for large i.i.d.
Gaussian transform matrices the asymptotic componentwise behavior of the
adaptive GAMP algorithm is predicted by a simple set of scalar state evolution
equations. In addition, we show that when a certain maximum-likelihood
estimation can be performed in each step, the adaptive GAMP method can yield
asymptotically consistent parameter estimates, which implies that the algorithm
achieves a reconstruction quality equivalent to the oracle algorithm that knows
the correct parameter values. Remarkably, this result applies to essentially
arbitrary parametrizations of the unknown distributions, including ones that
are nonlinear and non-Gaussian. The adaptive GAMP methodology thus provides a
systematic, general and computationally efficient method applicable to a large
range of complex linear-nonlinear models with provable guarantees.

Comment: 14 pages, 3 figures
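The flavor of scalar state-evolution equations can be illustrated with the simplest non-adaptive case: plain AMP for an AWGN channel with a Gaussian prior, where the per-component effective noise variance tau obeys a one-dimensional recursion tau_{t+1} = sigma2 + mmse(tau_t)/delta. The constants (prior variance v, noise variance sigma2, sampling ratio delta) are illustrative, and this is a textbook AMP recursion, not the adaptive GAMP equations of the paper.

```python
def mmse_gaussian(v, tau):
    """MMSE of estimating x ~ N(0, v) from the noisy observation
    x + N(0, tau): the posterior variance v*tau/(v+tau)."""
    return v * tau / (v + tau)

def state_evolution(v=1.0, sigma2=0.1, delta=0.5, iters=100):
    """Iterate the scalar state-evolution map
    tau_{t+1} = sigma2 + (1/delta) * mmse(tau_t),
    starting from the uninformed point where mmse = v."""
    tau = sigma2 + v / delta
    history = [tau]
    for _ in range(iters):
        tau = sigma2 + mmse_gaussian(v, tau) / delta
        history.append(tau)
    return history

hist = state_evolution()
print(hist[-1])  # converges to the fixed point tau* ~= 1.1844
```

The fixed point of this one-dimensional map predicts the asymptotic per-component mean-squared error of the matching message-passing algorithm; the paper's result is that an analogous (more elaborate) scalar recursion governs adaptive GAMP, including its parameter-estimation steps.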
A Preference Model on Adaptive Affinity Propagation
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP). AP is a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs of AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine a value of the parameter "preference" p that results in an optimal clustering solution. To resolve this limitation, we propose a new model of the parameter "preference" p, i.e. it is modeled based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run. The MAAP procedure omits the adaptive p-scanning algorithm of the original Adaptive-AP (AAP) procedure. Experimental results on random non-partition and partition data sets show that (i) the proposed algorithm, MAAP-DDP, is slower than original AP for the random non-partition dataset, and (ii) for the random 4-partition dataset and the real datasets the proposed algorithm succeeds in identifying clusters according to the number of the datasets' true labels, with execution times comparable to those of original AP. Moreover, the MAAP-DDP algorithm proves more feasible and effective than the original AAP procedure.