
    1D analysis of 2D isotropic random walks

    Many stochastic systems in physics and biology are investigated by recording the two-dimensional (2D) positions of a moving test particle at regular time intervals. The resulting sample trajectories are then used to infer the properties of the underlying stochastic process. Often, it can be assumed a priori that the underlying discrete-time random walk model is independent of absolute position (homogeneity), direction (isotropy) and time (stationarity), as well as ergodic. In this article we first review some common statistical methods for analyzing 2D trajectories, based on quantities with built-in rotational invariance. We then discuss an alternative approach in which the two-dimensional trajectories are reduced to one dimension by projection onto an arbitrary axis and rotational averaging. Each step of the resulting 1D trajectory is further factorized into sign and magnitude. The statistical properties of the signs and magnitudes are mathematically related to those of the step lengths and turning angles of the original 2D trajectories, demonstrating that no essential information is lost by this data reduction. The resulting binary sequence of signs lends itself to a pattern-counting analysis, revealing temporal properties of the random process that are not easily deduced from conventional measures such as the velocity autocorrelation function. To illustrate this simplified 1D description, we apply it to a 2D random walk with restricted turning angles (RTA model), defined by a finite-variance distribution $p(L)$ of step lengths and a narrow turning angle distribution $p(\phi)$, assuming that the lengths and directions of the steps are independent.
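
    The reduction described above can be prototyped in a few lines. The following Python sketch simulates a 2D RTA-type walk (with an illustrative exponential step-length distribution and Gaussian turning angles, which are assumptions rather than the paper's choices), projects the trajectory onto an axis, and factorizes the 1D steps into sign and magnitude for a simple pattern count.

```python
import numpy as np

rng = np.random.default_rng(0)

def rta_walk(n_steps, sigma_phi=0.3, mean_len=1.0):
    """2D restricted-turning-angle (RTA) walk: independent step lengths and
    narrowly distributed turning angles (illustrative distribution choices)."""
    lengths = rng.exponential(mean_len, n_steps)   # finite-variance p(L)
    turns = rng.normal(0.0, sigma_phi, n_steps)    # narrow p(phi)
    headings = np.cumsum(turns)                    # absolute direction of each step
    steps = lengths[:, None] * np.column_stack([np.cos(headings), np.sin(headings)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])   # 2D positions

def project_to_1d(traj, angle=0.0):
    """Project the 2D trajectory onto an axis and factorize each 1D step
    into its sign and magnitude."""
    axis = np.array([np.cos(angle), np.sin(angle)])
    dx = np.diff(traj @ axis)
    return np.sign(dx), np.abs(dx)

signs, mags = project_to_1d(rta_walk(10_000), angle=0.7)
# pattern counting on the binary sign sequence, e.g. frequency of '++' pairs
print("P(++):", np.mean((signs[:-1] > 0) & (signs[1:] > 0)))
```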

    Principles of efficient chemotactic pursuit

    In chemotaxis, cells modulate their migration patterns in response to concentration gradients of a guiding substance. Immune cells are believed to use such chemotactic sensing for remotely detecting and homing in on pathogens. Considering that an immune cell may encounter a multitude of targets with vastly different migration properties, ranging from immobile to highly mobile, it is not clear which strategies of chemotactic pursuit are simultaneously efficient and versatile. We tackle this problem theoretically and define a tunable response function that maps temporal or spatial concentration gradients to migration behavior. The seven free parameters of this response function are optimized numerically with the objective of maximizing search efficiency against a wide spectrum of target cell properties. Finally, we reverse-engineer the best-performing parameter sets to uncover the principles of efficient chemotactic pursuit under different biologically realistic boundary conditions. Remarkably, the numerical optimization rediscovers chemotactic strategies that are well known in biological systems, such as the gradient-dependent swimming and tumbling modes of E. coli. Some of our results may also be useful for the design of chemotaxis experiments and for the development of algorithms that automatically detect and quantify goal-directed behavior in measured immune cell trajectories.
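
    As a rough illustration of a gradient-to-migration response function (not the paper's seven-parameter form), the Python sketch below implements a run-and-tumble rule in which the tumbling probability drops while the temporally sensed concentration increases; the concentration field, gain, rates and step sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([10.0, 0.0])

def concentration(pos):
    """Toy concentration field decaying with distance to the target."""
    return 1.0 / (1.0 + np.linalg.norm(pos - target))

def tumble_probability(dc, base_rate=0.2, gain=500.0):
    """Toy response function: tumbling becomes less likely while the sensed
    concentration is increasing (E. coli-like run-and-tumble logic)."""
    return base_rate / (1.0 + np.exp(gain * dc))

pos, heading = np.zeros(2), 0.0
c_prev = concentration(pos)
for _ in range(2000):
    pos = pos + 0.1 * np.array([np.cos(heading), np.sin(heading)])   # run
    c = concentration(pos)
    if rng.random() < tumble_probability(c - c_prev):
        heading = rng.uniform(0.0, 2.0 * np.pi)                      # tumble
    c_prev = c

print("final distance to target:", np.linalg.norm(pos - target))
```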

    Scaling properties of correlated random walks

    Many stochastic time series can be modelled by discrete random walks in which a step of random sign but constant length $\delta x$ is performed after each time interval $\delta t$. In correlated discrete-time random walks (CDTRWs), the probability $q$ that two successive steps have the same sign differs from 1/2. The resulting probability distribution $P(\Delta x,\Delta t)$ that a displacement $\Delta x$ is observed after a lag time $\Delta t$ is known analytically for arbitrary persistence parameters $q$. In this short note we show how a CDTRW with parameters $[\delta t, \delta x, q]$ can be mapped onto another CDTRW with rescaled parameters $[\delta t/s,\ \delta x\cdot g(q,s),\ q'(q,s)]$, for an arbitrary scaling parameter $s$, so that both walks have the same displacement distribution $P(\Delta x,\Delta t)$ on long time scales. The nonlinear scaling functions $g(q,s)$ and $q'(q,s)$ are derived explicitly. This scaling method can be used to model time series measured at discrete sample intervals $\delta t$ but actually corresponding to continuum processes with variations occurring on a much shorter time scale $\delta t/s$.
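
    The scaling functions $g(q,s)$ and $q'(q,s)$ themselves are given in the article; the Python sketch below only simulates a CDTRW and checks its long-time mean squared displacement against the standard result for correlated walks, $\mathrm{MSD}(\Delta t) \approx (\Delta t/\delta t)\,\delta x^2\,(1+c)/(1-c)$ with $c = 2q-1$. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def cdtrw(n_steps, q, dx=1.0):
    """Correlated discrete-time random walk: every step has length dx and
    repeats the sign of the previous step with probability q."""
    signs = np.empty(n_steps)
    signs[0] = rng.choice([-1.0, 1.0])
    same = rng.random(n_steps - 1) < q
    for i in range(1, n_steps):
        signs[i] = signs[i - 1] if same[i - 1] else -signs[i - 1]
    return np.cumsum(dx * signs)

q, dx, lag = 0.8, 1.0, 100
x = cdtrw(200_000, q, dx)
disp = x[lag:] - x[:-lag]
c = 2 * q - 1                                 # correlation of successive steps
msd_theory = lag * dx**2 * (1 + c) / (1 - c)  # standard long-time MSD of a correlated walk
print(f"empirical MSD at lag {lag}: {disp.var():.1f}, approx. theory: {msd_theory:.1f}")
```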

    Bayesian inference of time varying parameters in autoregressive processes

    In the autoregressive process of first order, AR(1), a homogeneous correlated time series $u_t$ is recursively constructed as $u_t = q\, u_{t-1} + \sigma\, \epsilon_t$, using random Gaussian deviates $\epsilon_t$ and fixed values for the correlation coefficient $q$ and the noise amplitude $\sigma$. To model temporally heterogeneous time series, the coefficients $q_t$ and $\sigma_t$ can be regarded as time-dependent variables themselves, leading to the time-varying autoregressive process TVAR(1). We assume here that the time series $u_t$ is known and attempt to infer the temporal evolution of the 'superstatistical' parameters $q_t$ and $\sigma_t$. We present a sequential Bayesian method of inference, which is conceptually related to the Hidden Markov model but takes into account the direct statistical dependence of successively measured variables $u_t$. The method requires almost no prior knowledge about the temporal dynamics of $q_t$ and $\sigma_t$ and can handle gradual and abrupt changes of these superparameters simultaneously. We compare our method with a maximum likelihood estimate based on a sliding window and show that it is superior for a wide range of window sizes.
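
    For reference, a minimal Python sketch of the sliding-window maximum likelihood baseline mentioned above is given here; the synthetic TVAR(1) series with one abrupt parameter change, the window size and all numerical values are illustrative assumptions, and the sequential Bayesian method itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic TVAR(1) series with one abrupt change of the superparameters
T = 2000
q_true = np.where(np.arange(T) < T // 2, 0.5, 0.9)
sig_true = np.where(np.arange(T) < T // 2, 1.0, 0.3)
u = np.zeros(T)
for t in range(1, T):
    u[t] = q_true[t] * u[t - 1] + sig_true[t] * rng.normal()

def sliding_ml(u, w=100):
    """Sliding-window maximum-likelihood estimates of q_t and sigma_t
    (the conditional ML estimator of a stationary AR(1) applied per window)."""
    q_hat = np.full(len(u), np.nan)
    s_hat = np.full(len(u), np.nan)
    for t in range(w, len(u)):
        seg, prev = u[t - w + 1:t + 1], u[t - w:t]
        q = np.dot(seg, prev) / np.dot(prev, prev)
        q_hat[t] = q
        s_hat[t] = np.sqrt(np.mean((seg - q * prev) ** 2))
    return q_hat, s_hat

q_hat, s_hat = sliding_ml(u)
print("estimates at the last time step:", q_hat[-1], s_hat[-1])
```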

    Inferring long-range interactions between immune and tumor cells -- pitfalls and (partial) solutions

    Upcoming immunotherapies for cancer treatment rely on the ability of the immune system to detect and eliminate tumors in the body. A highly simplified version of this process can be studied in a Petri dish: starting with a random distribution of immune and tumor cells, it can be observed in detail how individual immune cells migrate towards nearby tumor cells, establish contact, and attack. However, it remains unclear whether the immune cells find their targets by chance, or if they approach them 'on purpose', using remote sensing mechanisms such as chemotaxis. In this work, we present methods to infer the strength and range of long-range cell-cell interactions from time-lapse recorded cell trajectories, using a maximum likelihood method to fit the model parameters. First, we model the interactions as a distance-dependent 'force' that attracts immune cells towards their nearest tumor cell. While this approach correctly recovers the interaction parameters of simulated cells with constant migration properties, it detects spurious interactions in the case of independent cells that spontaneously change their migration behavior over time. We therefore use an alternative approach that models the interactions by distance-dependent probabilities for positive and negative turning angles of the migrating immune cell. We demonstrate that the latter approach finds the correct interaction parameters even with temporally switching cell migration.
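
    A minimal sketch of the first, force-based fitting approach is given below, assuming an exponentially decaying attraction strength $a\,e^{-d/r}$ and a Gaussian step model; this functional form, the parameter values and the use of scipy's Nelder-Mead optimizer are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# illustrative data: target-directed step components with drift a*exp(-d/r)
a_true, r_true, sigma = 0.5, 20.0, 1.0
d = rng.uniform(1.0, 100.0, 1000)      # distance to the nearest tumor cell
step_par = a_true * np.exp(-d / r_true) + sigma * rng.normal(size=d.size)

def neg_log_likelihood(params):
    """Negative Gaussian log-likelihood (up to constants) of the observed step
    components, given a distance-dependent attraction of strength a*exp(-d/r)."""
    a, r = params
    if r <= 0:
        return np.inf
    mu = a * np.exp(-d / r)
    return 0.5 * np.sum((step_par - mu) ** 2) / sigma**2

res = minimize(neg_log_likelihood, x0=[0.1, 10.0], method="Nelder-Mead")
print("fitted strength and range:", res.x)   # should approach (0.5, 20.0)
```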

    Detecting long-range attraction between migrating cells based on p-value distributions

    Immune cells have evolved to recognize and eliminate pathogens, and the efficiency of this process can be measured in a Petri dish. Yet, even if the cells are time-lapse recorded and tracked with high resolution, it is difficult to judge whether the immune cells find their targets by mere chance, or if they approach them in a goal-directed way, perhaps using remote sensing mechanisms such as chemotaxis. To answer this question, we assign to each step of an immune cell a 'p-value', the probability that a move, at least as target-directed as observed, can be explained by target-independent migration behavior. The resulting distribution of p-values is compared to the distribution of a reference system with randomized target positions. By using simulated data, based on various chemotactic search mechanisms, we demonstrate that our method can reliably distinguish between blind migration and target-directed 'hunting' behavior.
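
    The per-step p-value has a particularly simple form under an isotropic null model, where the angle between a step and the target direction is uniform. The Python sketch below computes these p-values for a toy trajectory with weak attraction and compares them against a single randomized target position using a Kolmogorov-Smirnov test; the trajectory model, noise levels, single-target reference and choice of test are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

def step_p_values(pos, target):
    """p-value of each step under a blind-migration null model: for isotropic
    step directions the angle to the target is uniform on [0, pi], so
    p = angle / pi is the probability of a move at least as target-directed."""
    steps = np.diff(pos, axis=0)
    to_target = target - pos[:-1]
    cos_ang = np.sum(steps * to_target, axis=1) / (
        np.linalg.norm(steps, axis=1) * np.linalg.norm(to_target, axis=1))
    return np.arccos(np.clip(cos_ang, -1.0, 1.0)) / np.pi

# toy trajectory with weak attraction towards a target at the origin
target = np.zeros(2)
pos = np.zeros((501, 2))
pos[0] = [50.0, 30.0]
for t in range(500):
    direction = (target - pos[t]) / np.linalg.norm(target - pos[t])
    pos[t + 1] = pos[t] + 0.3 * direction + rng.normal(size=2)

p_obs = step_p_values(pos, target)
p_ref = step_p_values(pos, rng.uniform(-50.0, 50.0, size=2))   # randomized target
print(ks_2samp(p_obs, p_ref))   # a small KS p-value indicates target-directed motion
```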

    Adaptive stochastic resonance based on output autocorrelations

    Successful detection of weak signals is a universal challenge for numerous technical and biological systems and crucially limits signal transduction and transmission. Stochastic resonance (SR) has been identified as a potential solution to this problem: it enables non-linear systems to detect small, otherwise sub-threshold signals by means of added non-zero noise. This has been demonstrated within a wide range of systems in physical, technological and biological contexts. Based on its ubiquitous importance, numerous theoretical and technical approaches aim at an optimization of signal transduction based on SR. Several quantities, such as the mutual information, the signal-to-noise ratio, or the cross-correlation between input stimulus and resulting detector response, have been used to determine optimal noise intensities for SR. The fundamental shortcoming of all these measures is that knowledge of the signal to be detected is required to compute them. This dilemma prevents the use of adaptive SR procedures in any application where the signal to be detected is unknown. We here show that the autocorrelation function (AC) of the detector response fundamentally overcomes this drawback. For a simplified model system, the equivalence of the output AC with the measures mentioned above is proven analytically. In addition, we test our approach numerically for a variety of systems comprising different input signals and different types of detectors. The results indicate a strong similarity between the mutual information and the output AC in terms of the optimal noise intensity for SR. Hence, using the output AC to adaptively vary the amount of added noise in order to maximize information transmission via SR might be a fundamental processing principle in nature, in particular within neural systems, and could be implemented in future technical applications.
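
    A minimal sketch of the idea, assuming a simple threshold detector and a sub-threshold sinusoidal input: the output autocorrelation is evaluated for a range of noise amplitudes and the amplitude that maximizes it is selected. Evaluating the AC at a lag equal to the signal period is a simplification for this demo, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

def detector_response(signal, noise_amp, threshold=1.0):
    """Threshold detector: outputs 1 whenever signal plus noise exceeds the threshold."""
    return (signal + noise_amp * rng.normal(size=signal.size) > threshold).astype(float)

def output_ac(y, lag):
    """Autocorrelation of the detector output at a given lag
    (computed from the output alone, without knowledge of the input)."""
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.dot(y[:-lag], y[lag:]) / denom if denom > 0 else 0.0

period = 100
t = np.arange(50_000)
signal = 0.5 * np.sin(2 * np.pi * t / period)   # sub-threshold input (amplitude < threshold)

noise_levels = np.linspace(0.05, 2.0, 40)
ac = [output_ac(detector_response(signal, na), lag=period) for na in noise_levels]
print("noise level maximizing the output AC:", noise_levels[int(np.argmax(ac))])
```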

    Stochastic resonance in three-neuron motifs

    Stochastic resonance is a non-linear phenomenon, in which the sensitivity of signal detectors can be enhanced by adding random noise to the detector input. Here, we demonstrate that noise can also improve the information flux in recurrent neural networks. In particular, we show for the case of three-neuron motifs that the mutual information between successive network states can be maximized by adding a suitable amount of noise to the neuron inputs. This striking result suggests that noise in the brain may not be a problem that needs to be suppressed, but rather a resource that is dynamically regulated in order to optimize information processing.
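
    The following Python sketch illustrates the kind of measurement involved, assuming binary threshold neurons with additive Gaussian input noise and one arbitrary ternary weight matrix: it estimates the mutual information between successive three-neuron network states from a long simulated run, for several noise amplitudes. The update rule, weights and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_motif(W, noise_amp, T=50_000):
    """Binary three-neuron motif: a neuron fires if its weighted input
    plus Gaussian noise exceeds zero (illustrative update rule)."""
    states = np.zeros((T, 3), dtype=int)
    states[0] = rng.integers(0, 2, 3)
    for t in range(1, T):
        states[t] = (W @ states[t - 1] + noise_amp * rng.normal(size=3) > 0).astype(int)
    return states

def mutual_information(states):
    """Mutual information (in bits) between successive network states."""
    codes = states @ np.array([1, 2, 4])            # encode each 3-bit state as 0..7
    joint = np.zeros((8, 8))
    np.add.at(joint, (codes[:-1], codes[1:]), 1)
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

W = np.array([[0, 1, -1], [1, 0, 1], [-1, 1, 0]])   # one arbitrary ternary-weight motif
for noise in [0.1, 0.5, 1.0, 2.0]:
    print(noise, mutual_information(simulate_motif(W, noise)))
```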

    Analysis of structure and dynamics in three-neuron motifs

    In neural networks with identical neurons, the matrix of connection weights completely describes the network structure and thereby determines how it processes information. However, due to the non-linearity of these systems, it is not clear whether similar microscopic connection structures also imply similar functional properties, or whether a network is affected more by macroscopic structural quantities, such as the ratio of excitatory to inhibitory connections (balance) or the fraction of non-zero connections (density). To clarify these questions, we focus on motifs of three binary neurons with discrete ternary connection strengths, an important class of network building blocks that can be analyzed exhaustively. We develop new, permutation-invariant metrics to quantify the structural and functional distance between two given network motifs. We then use multidimensional scaling to identify and visualize clusters of motifs with similar structural and functional properties. Our comprehensive analysis reveals that the function of a neural network is only weakly correlated with its microscopic structure, but depends strongly on the balance of the connections.
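
    As an illustration of a permutation-invariant structural metric (a simple choice, not necessarily the metric used in the paper), the Python sketch below takes the minimum summed absolute weight difference over all relabelings of the three neurons.

```python
import numpy as np
from itertools import permutations

def structural_distance(W1, W2):
    """Permutation-invariant structural distance between two 3-neuron motifs:
    the minimum summed absolute weight difference over all relabelings of
    the neurons (an illustrative choice of metric)."""
    best = np.inf
    for perm in permutations(range(3)):
        P = np.eye(3)[list(perm)]
        best = min(best, np.abs(W1 - P @ W2 @ P.T).sum())   # relabel motif 2, then compare
    return best

# two motifs with ternary connection strengths in {-1, 0, +1}
W_a = np.array([[0, 1, 0], [-1, 0, 1], [0, 1, 0]])
W_b = np.array([[0, 0, -1], [1, 0, 1], [0, 1, 0]])
print(structural_distance(W_a, W_b))
```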

    Reconstructing fiber networks from confocal image stacks

    We present a numerically efficient method to reconstruct a disordered network of thin biopolymers, such as collagen gels, from three-dimensional (3D) image stacks recorded with a confocal microscope. Our method is based on a template matching algorithm that simultaneously performs a binarization and skeletonization of the network. The size and intensity pattern of the template is automatically adapted to the input data so that the method is scale invariant and generic. Furthermore, the template matching threshold is iteratively optimized to ensure that the final skeletonized network obeys a universal property of voxelized random line networks, namely that solid-phase voxels most likely have three solid-phase neighbors in a $3\times3$ neighborhood. This optimization criterion makes our method free of user-defined parameters and the output exceptionally robust against imaging noise.
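
    The neighbor-count criterion can be checked with a few lines of Python. The sketch below builds a toy voxelized line network, counts the solid-phase neighbors of every solid-phase voxel (here in the full 26-connectivity neighborhood, an assumption about the exact neighborhood definition) and reports the position of the histogram peak; in the method described above, the template matching threshold is tuned until this peak lies at three.

```python
import numpy as np
from scipy.ndimage import convolve

def neighbor_count_histogram(binary_stack):
    """Histogram of the number of solid-phase neighbors of every solid-phase
    voxel, counted in the full 26-connectivity (3x3x3) neighborhood."""
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0                                   # exclude the voxel itself
    counts = convolve(binary_stack.astype(int), kernel, mode="constant")
    return np.bincount(counts[binary_stack].ravel(), minlength=27)

# toy voxelized random line network: straight fibers in a 64^3 volume
rng = np.random.default_rng(8)
vol = np.zeros((64, 64, 64), dtype=bool)
for _ in range(30):
    p0, p1 = rng.integers(0, 64, 3), rng.integers(0, 64, 3)
    for s in np.linspace(0.0, 1.0, 200):
        vol[tuple(np.round(p0 + s * (p1 - p0)).astype(int))] = True

hist = neighbor_count_histogram(vol)
print("most likely number of solid-phase neighbors:", int(np.argmax(hist)))
```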