3,253 research outputs found

    On the stability of homogeneous solutions to some aggregation models

    Vasculogenesis, i.e. the self-assembly of endothelial cells leading to capillary network formation, has been the object of many experimental investigations in recent years, due to its relevance in both physiological and pathological conditions. We performed a detailed linear stability analysis of two models of in vitro vasculogenesis, with the aim of checking their potential for structure formation starting from initial data representing a continuum cell monolayer. The first model turns out to be unstable at low cell densities, while pressure stabilizes it at high densities. The second model is instead stable at low cell densities. Detailed information about the instability regions and the structure of the critical wave numbers is obtained in several interesting limiting cases. We expect that, altogether, this information will be useful for further comparisons of the two models with experiments.
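
    A brief, model-independent sketch of the linear stability analysis mentioned above (the specific model equations of the paper are not reproduced here): one perturbs the homogeneous state and looks for growing Fourier modes.

```latex
% Generic ansatz: perturb the homogeneous density \rho_0 by a plane wave
\[
  \rho(\mathbf{x},t) \;=\; \rho_0 + \delta\rho\, e^{\,i\mathbf{k}\cdot\mathbf{x} + \lambda(k)\,t},
  \qquad |\delta\rho| \ll \rho_0 .
\]
% Inserting this into the linearized model yields a dispersion relation
% \lambda(k).  The monolayer is unstable (structures can form) whenever
% \operatorname{Re}\,\lambda(k) > 0 for some wave number k; the critical
% wave numbers are those that maximize \operatorname{Re}\,\lambda(k).
```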

    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics.

    Internal Diffusion-Limited Aggregation: Parallel Algorithms and Complexity

    The computational complexity of internal diffusion-limited aggregation (DLA) is examined from both a theoretical and a practical point of view. We show that for two or more dimensions, the problem of predicting the cluster from a given set of paths is complete for the complexity class CC, the subset of P characterized by circuits composed of comparator gates. CC-completeness is believed to imply that, in the worst case, growing a cluster of size n requires polynomial time in n even on a parallel computer. A parallel relaxation algorithm is presented that uses the fact that clusters are nearly spherical to guess the cluster from a given set of paths, and then corrects defects in the guessed cluster through a non-local annihilation process. The parallel running time of the relaxation algorithm for two-dimensional internal DLA is studied by simulating it on a serial computer. The numerical results are compatible with a running time that is either polylogarithmic in n or a small power of n. Thus the computational resources needed to grow large clusters are significantly less on average than the worst-case analysis would suggest. For a parallel machine with k processors, we show that random clusters in d dimensions can be generated in O((n/k + log k) n^{2/d}) steps. This is a significant speedup over explicit sequential simulation, which takes O(n^{1+2/d}) time on average. Finally, we show that in one dimension internal DLA can be predicted in O(log n) parallel time, and so is in the complexity class NC.
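
    As a rough illustration of the growth process analyzed above, here is a minimal serial sketch of internal DLA in two dimensions. It follows the standard definition only; it does not reproduce the paper's parallel relaxation algorithm or its path-based prediction problem.

```python
import random

def internal_dla(n, seed=0):
    """Sequential internal DLA in 2D: each particle random-walks from the
    origin through occupied sites and sticks at the first empty site it hits."""
    random.seed(seed)
    cluster = {(0, 0)}                      # seed site at the origin
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n - 1):
        x, y = 0, 0
        while (x, y) in cluster:            # walk until we leave the cluster
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))                 # occupy the first empty site found
    return cluster

# Example: grow a 500-site cluster; it should be roughly disc-shaped.
sites = internal_dla(500)
print(len(sites), "sites; max |x|+|y| =", max(abs(x) + abs(y) for x, y in sites))
```

    Each walk traverses a region of radius roughly n^{1/d}, which is where the average O(n^{1+2/d}) sequential cost quoted in the abstract comes from.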

    The Human version of Moore-Shannon's Theorem: The Design of Reliable Economic Systems

    Moore & Shannon's theorem is the cornerstone in reliability theory, but cannot be applied to human systems in its original form. A generalization to human systems would therefore be of considerable interest because the choice of organization structure can remedy reliability problems that notoriously plaque business operations, financial institutions, military intelligence and other human activities. Our main result is a proof that provides answers to the following three questions. Is it possible to design a reliable social organization from fallible human individuals? How many fallible human agents are required to build an economic system of a certain level of reliability? What is the best way to design an organization of two or more agents in order to minimize error? On the basis of constructive proofs, this paper provides answers to these questions and thus offers a method to analyze any form of decision making structure with respect to its reliability.Organizational design; reliability theory; decision making; project selection

    Time-Dependent Density Matrix Renormalization Group Algorithms for Nearly Exact Absorption and Fluorescence Spectra of Molecular Aggregates at Both Zero and Finite Temperature

    We implement and apply time-dependent density matrix renormalization group (TD-DMRG) algorithms at zero and finite temperature to compute the linear absorption and fluorescence spectra of molecular aggregates. Our implementation is within a matrix product state/operator framework with an explicit treatment of the excitonic and vibrational degrees of freedom, and uses the locality of the Hamiltonian in the zero-exciton space to improve the efficiency and accuracy of the calculations. We demonstrate the power of the method by calculations on several molecular aggregate models, comparing our results against those from multi-layer multiconfiguration time-dependent Hartree and n-particle approximations. We find that TD-DMRG provides an accurate and efficient route to calculate the spectra of molecular aggregates.
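
    For readers unfamiliar with how a time-domain propagation yields a spectrum: the linear absorption line shape is, up to prefactors, the Fourier transform of the dipole autocorrelation function. The sketch below shows only that generic post-processing step on synthetic data; the correlation function itself would come from a TD-DMRG (or any other) propagation, which is not implemented here.

```python
import numpy as np

def absorption_spectrum(corr, t, freqs, damping=0.01):
    """Turn a dipole autocorrelation function C(t) into an (unnormalized)
    absorption line shape via sigma(w) ~ Re \\int dt e^{i w t} C(t).
    `corr` is C(t) on the uniform time grid `t`; a small exponential
    damping broadens the stick spectrum into finite-width lines."""
    dt = t[1] - t[0]
    damped = corr * np.exp(-damping * t)
    return np.array([(np.exp(1j * w * t) * damped).sum().real * dt
                     for w in freqs])

# Synthetic two-transition example (excitation energies 1.0 and 1.3, arbitrary units).
t = np.arange(4096) * 0.1
corr = np.exp(-1j * 1.0 * t) + 0.5 * np.exp(-1j * 1.3 * t)
freqs = np.linspace(0.5, 1.8, 400)
spec = absorption_spectrum(corr, t, freqs)
print("strongest peak near", freqs[np.argmax(spec)])   # expected near 1.0
```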