
    Theory of continuum percolation I. General formalism

    The theoretical basis of continuum percolation has changed greatly since its beginning as little more than an analogy with lattice systems. Nevertheless, there is as yet no comprehensive theory of this field. A basis for such a theory is provided here with the introduction of the Potts fluid, a system of interacting $s$-state spins which are free to move in the continuum. In the $s \to 1$ limit, the Potts magnetization, susceptibility and correlation functions are directly related to the percolation probability, the mean cluster size and the pair-connectedness, respectively. Through the Hamiltonian formulation of the Potts fluid, the standard methods of statistical mechanics can therefore be used in the continuum percolation problem. Comment: 26 pages, LaTeX
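
    Schematically, the $s \to 1$ correspondence stated above can be summarized as follows (a sketch of the mapping only; normalization factors are omitted and the precise definitions are those of the paper):

```latex
\[
  P(\rho) \;\propto\; \lim_{s \to 1} M(\rho, s), \qquad
  S(\rho) \;\propto\; \lim_{s \to 1} \chi(\rho, s), \qquad
  g^{\dagger}(\vec{x}, \vec{x}\,') \;\propto\; \lim_{s \to 1} G(\vec{x}, \vec{x}\,'; s),
\]
```

    where $M$ is the Potts magnetization, $\chi$ the susceptibility, $G$ the spin correlation function, $P$ the percolation probability, $S$ the mean cluster size, and $g^{\dagger}$ the pair connectedness.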

    Geometry dependence of the clogging transition in tilted hoppers

    We report the effect of system geometry on the clogging of granular material flowing out of flat-bottomed hoppers with variable aperture size D. For such systems, there exists a critical aperture size Dc at which there is a divergence in the time for a flow to clog. To better understand the origins of Dc, we perturb the system by tilting the hopper by an angle θ and mapping out a clogging phase diagram as a function of θ and D. The clogging transition demarcates the boundary between the freely flowing (large D, small θ) and clogging (small D, large θ) regimes. We investigate how the system geometry affects Dc by mapping out this phase diagram for hoppers with either a circular hole or a narrow rectangular slit. Additionally, we vary the grain shape, investigating smooth spheres (glass beads), compact angular grains (beach sand), disk-like grains (lentils), and rod-like grains (rice). We find that the value of Dc grows with increasing θ, diverging at π − θr, where θr is the angle of repose. For circular apertures, the shape of the clogging transition is the same for all grain types. However, this is not the case for the narrow slit apertures, where the rate of growth of the critical hole size with tilt angle depends on the material.
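
    As an illustration of how such a critical aperture might be extracted in practice, the sketch below fits a diverging form to hypothetical mean clogging times; the data values and the assumed fit form t(D) = A/(Dc − D)^γ are illustrative only and are not taken from the paper.

```python
# Hypothetical post-processing sketch: locate the critical aperture Dc from
# measured mean clogging times by fitting a diverging power law.  Both the data
# and the fit form are illustrative assumptions, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

D = np.array([2.0, 2.5, 3.0, 3.5, 4.0])          # aperture sizes (in grain diameters)
t_clog = np.array([1.2, 2.0, 4.1, 11.0, 55.0])   # made-up mean times to clog

def diverging_time(D, A, Dc, gamma):
    # t(D) = A / (Dc - D)**gamma diverges as D approaches Dc from below
    return A / (Dc - D) ** gamma

popt, _ = curve_fit(
    diverging_time, D, t_clog,
    p0=(1.0, 5.0, 2.0),
    bounds=([0.0, 4.1, 0.1], [np.inf, 50.0, 10.0]),  # keep Dc above the largest measured D
)
print("fitted Dc ≈", popt[1])
```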

    Exact solution of a one-dimensional continuum percolation model

    I consider a one-dimensional system of particles which interact through a hard core of diameter $\sigma$ and can connect to each other if they are closer than a distance $d$. The mean cluster size increases as a function of the density $\rho$ until it diverges at some critical density, the percolation threshold. This system can be mapped onto an off-lattice generalization of the Potts model which I have called the Potts fluid, and in this way, the mean cluster size, pair connectedness and percolation probability can be calculated exactly. The mean cluster size is $S = 2 \exp[\rho (d - \sigma)/(1 - \rho \sigma)] - 1$ and diverges only at the close-packing density $\rho_{cp} = 1/\sigma$. This is confirmed by the behavior of the percolation probability. These results should help in judging the effectiveness of approximations or simulation methods before they are applied to higher dimensions. Comment: 21 pages, LaTeX
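
    The closed-form result above is easy to check numerically. The following minimal sketch (the parameter values σ = 1 and d = 1.5 are arbitrary choices for illustration) evaluates S(ρ) and shows the divergence as ρ approaches 1/σ:

```python
# Quick numerical check of the exact 1D result quoted above:
# S(rho) = 2*exp[rho*(d - sigma)/(1 - rho*sigma)] - 1,
# which diverges only as rho -> 1/sigma (close packing).
import math

sigma = 1.0   # hard-core diameter (illustrative value)
d = 1.5       # connectivity range, d > sigma (illustrative value)

def mean_cluster_size(rho):
    return 2.0 * math.exp(rho * (d - sigma) / (1.0 - rho * sigma)) - 1.0

for rho in (0.5, 0.9, 0.99, 0.999):
    print(f"rho = {rho:6.3f}   S = {mean_cluster_size(rho):.4g}")
```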

    Theory of continuum percolation II. Mean field theory

    I use a previously introduced mapping between the continuum percolation model and the Potts fluid to derive a mean field theory of continuum percolation systems. This is done by introducing a new variational principle, the basis of which has to be taken, for now, as heuristic. The critical exponents obtained are $\beta = 1$, $\gamma = 1$ and $\nu = 0.5$, which are identical with the mean field exponents of lattice percolation. The critical density in this approximation is $\rho_c = 1/\varepsilon$, where $\varepsilon = \int d\vec{x}\, p(\vec{x}) \left\{ \exp[-v(\vec{x})/kT] - 1 \right\}$, $p(\vec{x})$ is the binding probability of two particles separated by $\vec{x}$, and $v(\vec{x})$ is their interaction potential. Comment: 25 pages, LaTeX
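
    For reference, the exponents quoted here carry their standard percolation-theoretic meaning, defined through the behavior of the percolation probability $P$, the mean cluster size $S$, and the connectedness length $\xi$ near the threshold:

```latex
\[
  P(\rho) \sim (\rho - \rho_c)^{\beta} \quad (\rho \to \rho_c^{+}), \qquad
  S(\rho) \sim |\rho - \rho_c|^{-\gamma}, \qquad
  \xi(\rho) \sim |\rho - \rho_c|^{-\nu},
\]
```

    so the quoted values $\beta = 1$, $\gamma = 1$, $\nu = 0.5$ correspond to the classical mean-field behavior.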

    The OBO Foundry: Coordinated Evolution of Ontologies to Support Biomedical Data Integration

    The value of any kind of data is greatly enhanced when it exists in a form that allows it to be integrated with other data. One approach to integration is through the annotation of multiple bodies of data using common controlled vocabularies or ‘ontologies’. Unfortunately, the very success of this approach has led to a proliferation of ontologies, which itself creates obstacles to integration. The Open Biomedical Ontologies (OBO) consortium has set in train a strategy to overcome this problem. Existing OBO ontologies, including the Gene Ontology, are undergoing a process of coordinated reform, and new ontologies are being created on the basis of an evolving set of shared principles governing ontology development. The result is an expanding family of ontologies designed to be interoperable, logically well-formed, and to incorporate accurate representations of biological reality. We describe the OBO Foundry initiative and provide guidelines for those who might wish to become involved in the future.

    Theory of continuum percolation III. Low density expansion

    We use a previously introduced mapping between the continuum percolation model and the Potts fluid (a system of interacting s-state spins which are free to move in the continuum) to derive the low density expansion of the pair connectedness and the mean cluster size. We prove that, given an adequate identification of functions, the result is equivalent to the density expansion derived from a completely different point of view by Coniglio et al. [J. Phys. A 10, 1123 (1977)] to describe physical clustering in a gas. We then apply our expansion to a system of hypercubes with a hard core interaction. The calculated critical density is within approximately 5% of the results of simulations, and is thus much more precise than previous theoretical results which were based on integral equations. We suggest that this is because integral equations overly smooth out the partition function (i.e., they describe predominantly its analytical part), while our method instead targets the part which describes the phase transition (i.e., the singular part). Comment: 42 pages, RevTeX, includes 5 Encapsulated PostScript figures, submitted to Phys Rev
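
    As a reminder of what the leading term of such an expansion looks like (a standard continuum-percolation relation, not the paper's full result), the mean cluster size follows from the pair connectedness $P(\vec{r})$:

```latex
\[
  S = 1 + \rho \int d\vec{r}\; P(\vec{r}),
  \qquad
  P(\vec{r}) = p(\vec{r})\, e^{-v(\vec{r})/kT} + O(\rho),
\]
```

    with the higher orders in $\rho$ supplied by the diagrammatic expansion discussed above.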

    Generalized model for dynamic percolation

    We study the dynamics of a carrier, which performs a biased motion under the influence of an external field $E$, in an environment which is modeled by dynamic percolation and created by hard-core particles. The particles move randomly on a simple cubic lattice, constrained by hard-core exclusion, and they spontaneously annihilate and re-appear at some prescribed rates. Using decoupling of the third-order correlation functions into the product of the pairwise carrier-particle correlations, we determine the density profiles of the "environment" particles, as seen from the stationary moving carrier, and calculate its terminal velocity, $V_c$, as a function of the applied field and other system parameters. We find that for sufficiently small driving forces the force exerted on the carrier by the "environment" particles shows a viscous-like behavior. An analog of the Stokes formula for such dynamic percolative environments and the corresponding friction coefficient are derived. We show that the density profile of the environment particles is strongly inhomogeneous: in front of the stationary moving carrier the density is higher than the average density, $\rho_s$, and approaches the average value as an exponential function of the distance from the carrier. Past the carrier the local density is lower than $\rho_s$, and the relaxation towards $\rho_s$ may proceed differently depending on whether the particle number is or is not explicitly conserved. Comment: LaTeX, 32 pages, 4 ps-figures, submitted to PR
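
    The paper's route is the analytic decoupling described above; purely as an illustration of the model itself, the rough Monte Carlo sketch below (all rates, sizes and update rules are simplified assumptions) drives a single hard-core carrier through a fluctuating lattice-gas environment with annihilation and re-creation, and estimates its terminal velocity.

```python
# Rough Monte Carlo sketch of the model described above (NOT the paper's
# analytic decoupling scheme): one driven carrier on a periodic simple cubic
# lattice, surrounded by hard-core particles that hop randomly and are
# spontaneously annihilated / re-created.  All parameters are illustrative.
import random

L = 12              # linear size of the periodic cubic lattice
rho = 0.2           # target environment density
p_forward = 0.8     # probability the carrier attempts a +x step (models the field E)
p_kill = 0.08       # per-step probability of attempting an annihilation at a random site
p_create = 0.02     # per-step probability of attempting a creation at a random site
# kill/create rates are chosen so the mean density stays near
# rho = p_create / (p_create + p_kill) = 0.2
steps = 50_000

random.seed(0)
sites = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
occ = set(random.sample(sites, int(rho * L**3)))
carrier = (0, 0, 0)
occ.discard(carrier)

def hop(site, axis, d):
    s = list(site)
    s[axis] = (s[axis] + d) % L
    return tuple(s)

other_moves = [(0, -1), (1, +1), (1, -1), (2, +1), (2, -1)]
dx = 0
for _ in range(steps):
    # Carrier step: biased along +x, blocked by hard-core environment particles.
    axis, d = (0, +1) if random.random() < p_forward else random.choice(other_moves)
    target = hop(carrier, axis, d)
    if target not in occ:
        carrier = target
        if axis == 0:
            dx += d
    # One randomly chosen environment particle attempts a symmetric hop.
    if occ:
        p = random.choice(tuple(occ))
        q = hop(p, random.randrange(3), random.choice((-1, +1)))
        if q not in occ and q != carrier:
            occ.remove(p)
            occ.add(q)
    # Spontaneous annihilation and re-creation make the environment "dynamic".
    if random.random() < p_kill:
        occ.discard(tuple(random.randrange(L) for _ in range(3)))
    if random.random() < p_create:
        s = tuple(random.randrange(L) for _ in range(3))
        if s not in occ and s != carrier:
            occ.add(s)

print(f"estimated terminal velocity V_c ~ {dx / steps:.4f} lattice units per step")
```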

    The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience

    With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come online.

    A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework

    The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems make efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is ‘hidden’ from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data, and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semi-automated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduces the manual curation effort required to periodically check for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (i.e. reagents, software tools). Although the ultimate goal of automation was to reduce the workload of the curators, it has resulted in valuable analytic by-products that address accessibility, use and citation of resources and that can now be shared with resource owners and the larger scientific community.
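
    By way of illustration, the core of such periodic update checking can be reduced to fingerprinting registered resource pages and flagging changes for curators. The sketch below is a hypothetical, minimal version of that idea, not DISCO's actual implementation; the registry URL and file names are placeholders.

```python
# Minimal, hypothetical sketch of automated update checking via content hashing.
# This is NOT DISCO's implementation or API; URLs and file names are placeholders.
import hashlib
import json
import urllib.request

REGISTRY = {
    "example-resource": "https://example.org/resource/index.html",  # placeholder URL
}
STATE_FILE = "resource_hashes.json"

def fingerprint(url: str) -> str:
    """Download a resource page and return a stable hash of its contents."""
    with urllib.request.urlopen(url, timeout=30) as response:
        return hashlib.sha256(response.read()).hexdigest()

def check_for_updates() -> None:
    try:
        with open(STATE_FILE) as fh:
            previous = json.load(fh)
    except FileNotFoundError:
        previous = {}
    current = {name: fingerprint(url) for name, url in REGISTRY.items()}
    for name, digest in current.items():
        if previous.get(name) != digest:
            print(f"{name}: content changed, flag for curator review")
    with open(STATE_FILE, "w") as fh:
        json.dump(current, fh, indent=2)

if __name__ == "__main__":
    check_for_updates()
```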

    Disease Ontology: a backbone for disease semantic integration

    The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross-mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross-mapping resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open-source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings.
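
    As a small usage illustration (not part of the DO infrastructure itself), cross-references of the kind described above can be pulled programmatically from the DO's OBO release, here assuming the third-party obonet package, the standard OBO Library PURL, and an example DOID.

```python
# Illustrative sketch only: load the Disease Ontology OBO release with the
# third-party `obonet` package and list OMIM / MeSH cross-references for one
# term.  The PURL and the example DOID are assumptions for this example.
import obonet

url = "http://purl.obolibrary.org/obo/doid.obo"   # standard OBO Library PURL (assumed current)
graph = obonet.read_obo(url)                      # returns a networkx graph of DO terms

doid = "DOID:1612"                                # example identifier (assumed: breast cancer)
data = graph.nodes[doid]
print(data.get("name"))
for xref in data.get("xref", []):
    if xref.startswith(("OMIM:", "MESH:")):
        print("  cross-reference:", xref)
```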