22,830 research outputs found

    Filling of a Poisson trap by a population of random intermittent searchers

    We extend the continuum theory of random intermittent search processes to the case of $N$ independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant-velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to $n$ successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as $1/N$, we show that there exists a well-defined mean-field limit $N\rightarrow\infty$, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling $\lambda(t)$ depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with $n$ particles in terms of the waiting time density $f_n(t)$. The latter is determined by the integrated Poisson rate $\mu(t)=\int_0^t\lambda(s)\,ds$, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasi-steady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite $N$. We thus determine how the mean first passage time (MFPT) for filling the target depends on $N$ and $n$.
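    As a point of reference, the standard inhomogeneous-Poisson relations that this notation suggests (a sketch, not quoted from the paper) express the waiting time density for the $n$-th cargo delivery and the corresponding MFPT as

        $$f_n(t) = \lambda(t)\,e^{-\mu(t)}\,\frac{\mu(t)^{n-1}}{(n-1)!}, \qquad \mu(t)=\int_0^t\lambda(s)\,ds, \qquad T_n = \int_0^{\infty} t\,f_n(t)\,dt,$$

    so the efficiency of filling the trap is controlled entirely by how fast $\mu(t)$ grows, i.e. by the concentration of stationary searchers within the target domain.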

    Phase transitions induced by microscopic disorder: a study based on the order parameter expansion

    Based on the order parameter expansion, we present an approximate method which allows us to reduce large systems of coupled differential equations with diverse parameters to three equations: one for the global (mean-field) variable and two which describe the fluctuations around this mean value. With this tool we analyze phase transitions induced by microscopic disorder in three prototypical models of phase transitions which have been studied previously in the presence of thermal noise. We study how macroscopic order is induced or destroyed by time-independent local disorder, and we analyze the limits of the approximation by comparing the results with the numerical solutions of the self-consistency equation which arises from the property of self-averaging. Finally, we carry out a finite-size analysis of the numerical results and calculate the corresponding critical exponents.
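    As an illustration of the kind of reduction described here (a minimal sketch assuming a globally, diffusively coupled ensemble $\dot{x}_j=f(x_j,p_j)+\frac{K}{N}\sum_k(x_k-x_j)$ with quenched parameters $p_j$ of mean $P$ and variance $\sigma^2$; the notation is illustrative, not the paper's), writing $x_j=X+\delta_j$ and $p_j=P+\epsilon_j$ and truncating at second order in the fluctuations yields

        $$\dot{X} \approx f(X,P) + \tfrac{1}{2}\partial_x^2 f\,\Omega + \partial_x\partial_p f\,W + \tfrac{1}{2}\partial_p^2 f\,\sigma^2,$$
        $$\dot{\Omega} \approx 2(\partial_x f - K)\,\Omega + 2\,\partial_p f\,W, \qquad \dot{W} \approx (\partial_x f - K)\,W + \partial_p f\,\sigma^2,$$

    with derivatives evaluated at $(X,P)$, $\Omega=\langle\delta_j^2\rangle$ and $W=\langle\delta_j\epsilon_j\rangle$. Because the disorder is quenched, $\sigma^2$ is constant, so the dynamics close at three equations: the mean-field variable plus two fluctuation moments.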

    Fitting Prediction Rule Ensembles with R Package pre

    Prediction rule ensembles (PREs) are sparse collections of rules, offering highly interpretable regression and classification models. This paper presents the R package pre, which derives PREs through the methodology of Friedman and Popescu (2008). The implementation and functionality of package pre are described and illustrated through an application to a dataset on the prediction of depression. Furthermore, the accuracy and sparsity of PREs are compared with those of single trees, random forests and lasso regression on four benchmark datasets. Results indicate that pre derives ensembles with predictive accuracy comparable to that of random forests, while using a smaller number of variables for prediction.
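    To make the construction concrete without relying on the R package itself, the following Python sketch (with a hypothetical toy dataset and hand-written candidate rules) illustrates the core of a prediction rule ensemble: rules become binary indicator features and an L1-penalized fit retains only a sparse subset of them. In the Friedman and Popescu (2008) methodology the candidate rules are harvested from a boosted tree ensemble rather than specified by hand.

    ```python
    # Conceptual sketch of a prediction rule ensemble (not the R package `pre`).
    # Dataset, variable names and rule thresholds are hypothetical.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n = 500
    age = rng.uniform(18, 80, n)
    bmi = rng.uniform(17, 40, n)
    # Outcome driven by two rule-like effects plus noise.
    y = 2.0 * (age > 60) + 1.5 * ((bmi > 30) & (age > 40)) + rng.normal(0, 0.5, n)

    # Candidate rules as 0/1 indicator features (in a real PRE these come from trees).
    rules = {
        "age > 60": (age > 60).astype(float),
        "bmi > 30 & age > 40": ((bmi > 30) & (age > 40)).astype(float),
        "bmi > 25": (bmi > 25).astype(float),
        "age <= 40": (age <= 40).astype(float),
    }
    X = np.column_stack(list(rules.values()))

    # The lasso keeps only the rules that carry predictive signal,
    # giving a sparse, interpretable additive model of rules.
    fit = LassoCV(cv=5).fit(X, y)
    for name, coef in zip(rules, fit.coef_):
        print(f"{coef:+.2f} * I({name})")
    ```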

    Lecture notes on ridge regression

    The linear regression model cannot be fitted to high-dimensional data, as the high dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmenting the loss function with a penalty (i.e. a function of the regression coefficients). The ridge penalty is the sum of squared regression coefficients, giving rise to ridge regression. Here many aspects of ridge regression are reviewed, e.g. its moments, mean squared error, equivalence to constrained estimation, and relation to Bayesian regression. Its behaviour and use are illustrated in simulation and on omics data. Subsequently, ridge regression is generalized to allow for a more general penalty. The ridge penalization framework is then translated to logistic regression, and its properties are shown to carry over. As a contrast to ridge penalized estimation, the final chapter introduces its lasso counterpart.
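    For reference, the central identities the notes build on (stated in generic notation rather than the notes' own): with response $y$, design matrix $X$ and penalty parameter $\lambda>0$, the ridge estimator is

        $$\hat{\beta}(\lambda) = \arg\min_{\beta}\,\|y - X\beta\|_2^2 + \lambda\|\beta\|_2^2 = (X^\top X + \lambda I)^{-1}X^\top y,$$

    which is equivalent to minimizing $\|y - X\beta\|_2^2$ subject to $\|\beta\|_2^2 \le c$ for a suitable $c = c(\lambda)$, and which coincides with the posterior mean (and mode) of $\beta$ under Gaussian errors with variance $\sigma^2$ and the prior $\beta \sim \mathcal{N}(0, (\sigma^2/\lambda) I)$. Since $X^\top X + \lambda I$ is positive definite for any $\lambda > 0$, the estimator exists even when the number of covariates exceeds the sample size, which is how ridge regression resolves the non-identifiability noted above.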