
    Patterns, causes, and consequences of marine larval dispersal

    Quantifying the probability of larval exchange among marine populations is key to predicting local population dynamics and optimizing networks of marine protected areas. The pattern of connectivity among populations can be described by the measurement of a dispersal kernel. However, a statistically robust, empirical dispersal kernel has been lacking for any marine species. Here, we use genetic parentage analysis to quantify a dispersal kernel for the reef fish Elacatinus lori, demonstrating that dispersal declines exponentially with distance. The spatial scale of dispersal is an order of magnitude less than previous estimates: the median dispersal distance is just 1.7 km and no dispersal events exceed 16.4 km despite intensive sampling out to 30 km from source. Overlaid on this strong pattern is subtle spatial variation, but neither pelagic larval duration nor direction is associated with the probability of successful dispersal. Given the strong relationship between distance and dispersal, we show that distance-driven logistic models have strong power to predict dispersal probabilities. Moreover, connectivity matrices generated from these models are congruent with empirical estimates of spatial genetic structure, suggesting that the pattern of dispersal we uncovered reflects long-term patterns of gene flow. These results challenge assumptions regarding the spatial scale and presumed predictors of marine population connectivity. We conclude that if marine reserve networks aim to connect whole communities of fishes and conserve biodiversity broadly, then reserves that are close in space (<10 km) will accommodate those members of the community that are short-distance dispersers.

We thank Diana Acosta, Alben David, Kevin David, Alissa Rickborn, and Derek Scolaro for assistance with field work; Eliana Bondra for assistance with molecular work; and Peter Carlson for assistance with otolith work.
We are grateful to Noel Anderson, David Lindo, Claire Paris, Robert Warner, Colleen Webb, and two anonymous reviewers for comments on this manuscript. This work was supported by National Science Foundation (NSF) Grant OCE-1260424, and C.C.D. was supported by NSF Graduate Research Fellowship DGE-1247312. All work was approved by Belize Fisheries and the Boston University Institutional Animal Care and Use Committee.
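
The exponential decline of dispersal with distance described above can be sketched numerically. This is an illustrative model only: the kernel scale below is a hypothetical choice, set so the relative probability halves at the reported 1.7 km median rather than fitted to the study's parentage data.

```python
import numpy as np

def exponential_kernel(d, scale):
    """Relative probability of a larva settling distance d (km) from its source."""
    return np.exp(-d / scale)

# Hypothetical scale: makes the kernel equal 0.5 at the 1.7 km median distance.
scale_km = 1.7 / np.log(2)

# Distances echoing the figures quoted in the abstract (km).
distances_km = np.array([0.0, 1.7, 5.0, 10.0, 16.4, 30.0])
probs = exponential_kernel(distances_km, scale_km)
# probs decays monotonically; by 30 km the relative probability is negligible,
# consistent with no observed dispersal events beyond 16.4 km.
```

Under this parameterization the kernel at 16.4 km is already below 0.2% of its value at the source, which illustrates why reserves spaced within about 10 km would still be connected for short-distance dispersers.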

    Integral curves of noisy vector fields and statistical problems in diffusion tensor imaging: nonparametric kernel estimation and hypotheses testing

    Let $v$ be a vector field in a bounded open set $G\subset\mathbb{R}^d$. Suppose that $v$ is observed with a random noise at random points $X_i,\ i=1,\dots,n$, that are independent and uniformly distributed in $G$. The problem is to estimate the integral curve of the differential equation $\frac{dx(t)}{dt}=v(x(t)),\ t\geq 0,\ x(0)=x_0\in G$, starting at a given point $x(0)=x_0\in G$, and to develop statistical tests for the hypothesis that the integral curve reaches a specified set $\Gamma\subset G$. We develop an estimation procedure based on a Nadaraya--Watson type kernel regression estimator, show the asymptotic normality of the estimated integral curve, and derive differential and integral equations for the mean and covariance function of the limit Gaussian process. This provides a method of tracking not only the integral curve, but also the covariance matrix of its estimate. We also study the asymptotic distribution of the squared minimal distance from the integral curve to a smooth enough surface $\Gamma\subset G$. Building upon this, we develop testing procedures for the hypothesis that the integral curve reaches $\Gamma$. Problems of this nature are of interest in diffusion tensor imaging, a brain imaging technique based on measuring the diffusion tensor at discrete locations in the cerebral white matter, where the diffusion of water molecules is typically anisotropic. The diffusion tensor data are used to estimate the dominant orientations of the diffusion and to track white matter fibers from the initial location following these orientations.
Our approach brings more rigorous statistical tools to the analysis of this problem, providing, in particular, hypothesis testing procedures that might be useful in the study of axonal connectivity of the white matter.

Comment: Published at http://dx.doi.org/10.1214/009053607000000073 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
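
The core tracking idea, estimating the field with a Nadaraya–Watson average and then integrating the resulting ODE, can be sketched as follows. The rotational test field, Gaussian kernel bandwidth, and Euler step size here are illustrative assumptions, not the paper's choices, and the simple Euler scheme stands in for whatever integrator one would use in practice.

```python
import numpy as np

def nw_estimate(x, points, values, h=0.3):
    """Nadaraya-Watson estimate of the field at x: kernel-weighted average
    of the noisy observations values[i] made at locations points[i]."""
    w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * h ** 2))
    return w @ values / w.sum()

def track_curve(x0, points, values, step=0.02, n_steps=100):
    """Forward-Euler integration of dx/dt = v_hat(x) starting at x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(xs[-1] + step * nw_estimate(xs[-1], points, values))
    return np.array(xs)

rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(2000, 2))           # uniform design points in G
vals = np.stack([-pts[:, 1], pts[:, 0]], axis=1)   # true field v(x, y) = (-y, x)
vals += rng.normal(scale=0.1, size=vals.shape)     # additive observation noise

curve = track_curve([1.0, 0.0], pts, vals)
# For this rotational field the true integral curve is the unit circle,
# so the estimated curve should stay close to radius 1.
```

In the diffusion tensor imaging setting, `vals` would be the dominant diffusion orientations at the measurement voxels and the tracked curve a candidate white matter fiber; the paper's contribution is the asymptotic theory (normality and covariance tracking) that this sketch omits.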

    Learning and comparing functional connectomes across subjects

    Functional connectomes capture brain interactions via synchronized fluctuations in the functional magnetic resonance imaging signal. If measured during rest, they map the intrinsic functional architecture of the brain. With task-driven experiments they represent integration mechanisms between specialized brain areas. Analyzing their variability across subjects and conditions can reveal markers of brain pathologies and mechanisms underlying cognition. Methods of estimating functional connectomes from the imaging signal have undergone rapid developments, and the literature is full of diverse strategies for comparing them. This review aims to clarify links across functional-connectivity methods as well as to expose the different steps of a group study of functional connectomes.
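
The simplest estimator the review covers is a correlation-based connectome: a region-by-region matrix of Pearson correlations between time series. A minimal sketch on synthetic signals (the coupling structure below is invented for illustration, not real fMRI data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_timepoints, n_regions = 200, 4

# Synthetic "regional" signals: regions 0 and 1 share a common fluctuation,
# mimicking two functionally connected areas; regions 2 and 3 are independent.
shared = rng.normal(size=(n_timepoints, 1))
signals = rng.normal(size=(n_timepoints, n_regions))
signals[:, :2] += 2 * shared

# The functional connectome as an n_regions x n_regions correlation matrix.
connectome = np.corrcoef(signals, rowvar=False)
# Coupled regions (0, 1) show a strong off-diagonal correlation;
# unrelated regions (2, 3) correlate near zero.
```

Group comparison then operates on these matrices, e.g. by vectorizing the off-diagonal entries per subject. More robust alternatives discussed in this literature, such as partial correlations or shrinkage covariance estimators, replace the plain `corrcoef` step.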

    Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.

Comment: 35 pages, 12 figures, extends and expands upon arXiv:1301.353
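
The "generalization accuracy as a function of representational complexity" idea can be illustrated with a stand-in: kernel ridge regression whose regularization strength serves as a complexity knob. This is not the paper's kernel analysis, just a hedged sketch of the curve it traces; the data, kernel width, and regularization grid are all invented for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(120, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=120)   # toy "decoding" target
Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]

K = rbf_kernel(Xtr, Xtr)
errors = []
for lam in [10.0, 1.0, 0.1, 0.01]:   # decreasing lam = increasing model complexity
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)  # kernel ridge fit
    pred = rbf_kernel(Xte, Xtr) @ alpha
    errors.append(float(np.mean((pred - yte) ** 2)))
# errors traces test error across the complexity sweep: heavy regularization
# underfits, and some intermediate or low lam does markedly better.
```

A representation is then judged by the whole error-versus-complexity curve rather than a single accuracy number, which is what lets the comparison correct for decoder complexity and training-set size.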

    Learning Laplacian Matrix in Smooth Graph Signal Representations

    The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such a property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real-world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.
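
The smoothness notion this abstract optimizes is the Laplacian quadratic form $x^\top L x$, which sums squared signal differences across edges. A minimal sketch on a toy graph (the 4-node adjacency and the two signals are illustrative assumptions, and this only evaluates the objective; the paper's contribution is learning $L$ itself):

```python
import numpy as np

# Adjacency of a small 4-node graph: edges (0,1), (0,2), (1,2), (2,3).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian L = D - W

def smoothness(x, L):
    """Laplacian quadratic form x^T L x = sum over edges of (x_i - x_j)^2."""
    return float(x @ L @ x)

smooth = np.array([1.0, 1.0, 1.0, 1.0])    # constant signal: zero variation
rough = np.array([1.0, -1.0, 1.0, -1.0])   # signs flip across most edges
# smoothness(smooth, L) is 0; smoothness(rough, L) is large by comparison,
# so a learned graph would place edges where the observed signals agree.
```

The learning problem inverts this: given many signals, find the $L$ (equivalently, the topology and weights) that makes their total quadratic variation small while avoiding the trivial empty graph.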

    Static and dynamic measures of human brain connectivity predict complementary aspects of human cognitive performance

    In cognitive network neuroscience, the connectivity and community structure of the brain network is related to cognition. Much of this research has focused on two measures of connectivity, modularity and flexibility, which frequently have been examined in isolation. Using resting-state fMRI data from 52 young adults, we investigate the relationship between modularity, flexibility, and performance on cognitive tasks. We show that flexibility and modularity are highly negatively correlated. However, we also demonstrate that flexibility and modularity make unique contributions to explaining task performance, with modularity predicting performance on simple tasks and flexibility predicting performance on complex tasks that require cognitive control and executive functioning. The theory and results presented here allow for stronger links between measures of brain network connectivity and cognitive processes.

Comment: 37 pages; 7 figures
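
Of the two measures discussed, modularity has the simpler static definition: Newman's $Q$ scores how much more intra-community weight a partition captures than a degree-matched random graph would. (Flexibility additionally requires time-resolved networks, so only modularity is sketched here.) The toy network and partition below are illustrative assumptions.

```python
import numpy as np

def modularity(W, labels):
    """Newman modularity: Q = (1/2m) * sum_ij (W_ij - k_i*k_j/2m) * [c_i == c_j],
    where k is the degree vector and 2m the total edge weight."""
    k = W.sum(axis=1)
    two_m = W.sum()
    same_community = labels[:, None] == labels[None, :]
    return float(((W - np.outer(k, k) / two_m) * same_community).sum() / two_m)

# Toy network: two dense triangles joined by a single bridge edge (2, 3).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0

labels = np.array([0, 0, 0, 1, 1, 1])  # partition along the bridge
q = modularity(W, labels)
# Splitting the two triangles yields clearly positive modularity (5/14 here).
```

In the study's setting, $W$ would be a subject's functional connectivity matrix and $Q$ the degree to which resting-state activity segregates into well-separated modules, the quantity found to predict simple-task performance.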