
    A moment-matching method to study the variability of phenomena described by partial differential equations

    Many phenomena are modeled by deterministic differential equations, whereas the observation of these phenomena, in particular in the life sciences, exhibits an important variability. This paper addresses the following question: how can the model be adapted to reflect the observed variability? Given an adequate model, it is possible to account for this variability by allowing some parameters to adopt a stochastic behavior. Finding the parameters' probability density function that explains the observed variability is a difficult stochastic inverse problem, especially when the computational cost of the forward problem is high. In this paper, a non-parametric and non-intrusive procedure based on offline computations of the forward model is proposed. It infers the probability density function of the uncertain parameters from the matching of the statistical moments of observable degrees of freedom (DOFs) of the model. This inverse procedure is improved by incorporating an algorithm that selects a subset of the model DOFs, which both reduces the computational cost and increases the robustness of the procedure. This algorithm uses the pre-computed model outputs to build an approximation of the local sensitivities. The DOFs are selected so that the maximum information on the sensitivities is conserved. The proposed approach is illustrated with elliptic and parabolic PDEs. In the appendix, a nonlinear ODE is considered and the strategy is compared with two existing ones.
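    A minimal sketch of the moment-matching idea, with an invented toy forward model (the paper's forward problems are PDEs): the unknown parameter PDF is represented non-parametrically by nonnegative weights on precomputed parameter samples, and the weights are fitted so that the mean and variance of the observable DOFs match the observed ones. All names and numerical values below are our illustration, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Offline stage: evaluate the forward model once per parameter sample.
theta = np.linspace(0.5, 2.0, 40)                # candidate parameter values
forward = lambda th: np.array([th, th ** 2])     # toy observable DOFs u(theta)
U = np.stack([forward(th) for th in theta])      # shape (n_samples, n_dofs)

# "Observed" variability: data generated with a hidden parameter PDF.
theta_true = rng.normal(1.2, 0.15, 5000)
obs = np.stack([forward(th) for th in theta_true])
target = np.concatenate([obs.mean(axis=0), obs.var(axis=0)])

def moment_mismatch(w):
    mean = w @ U                                 # weighted output mean
    var = w @ (U - mean) ** 2                    # weighted output variance
    return np.sum((np.concatenate([mean, var]) - target) ** 2)

# The weights form a discrete PDF: nonnegative and summing to one.
n = theta.size
res = minimize(moment_mismatch, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("estimated parameter mean:", res.x @ theta)   # should be near 1.2
```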

    An Emergent Space for Distributed Data with Hidden Internal Order through Manifold Learning

    Manifold-learning techniques are routinely used in mining complex spatiotemporal data to extract useful, parsimonious data representations/parametrizations; these are, in turn, useful in nonlinear model identification tasks. We focus here on the case of time series data that can ultimately be modelled as a spatially distributed system (e.g. a partial differential equation, PDE), but where we do not know the space in which this PDE should be formulated. Hence, even the spatial coordinates for the distributed system themselves need to be identified, that is, to emerge from the data mining process. We will first validate this emergent space reconstruction for time series sampled without space labels in known PDEs; this brings up the issue of observability of physical space from temporal observation data, and the transition from spatially resolved to lumped (order-parameter-based) representations by tuning the scale of the data mining kernels. We will then present actual emergent space discovery illustrations. Our illustrative examples include chimera states (states of coexisting coherent and incoherent dynamics), and chaotic as well as quasiperiodic spatiotemporal dynamics, arising in partial differential equations and/or in heterogeneous networks. We also discuss how data-driven spatial coordinates can be extracted in ways invariant to the nature of the measuring instrument. Such gauge-invariant data mining can go beyond the fusion of heterogeneous observations of the same system, to the possible matching of apparently different systems.
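    A small sketch of the emergent-space step under strong simplifications: each unlabelled sensor contributes its time series as a feature vector, and the first nontrivial eigenvector of a diffusion-map (Gaussian-kernel) operator recovers an ordering that matches the hidden spatial coordinate. The toy field u(x, t) = exp(-xt) and all parameter values are our assumptions, not the paper's examples.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Toy spatiotemporal data u(x, t) = exp(-x t) at 64 locations; the
# spatial labels are then hidden by a random permutation, as if the
# sensors were delivered in arbitrary order.
x = np.linspace(0.1, 2.0, 64)
t = np.linspace(0.0, 5.0, 200)
perm = rng.permutation(x.size)
series = np.exp(-np.outer(x, t))[perm]       # shape (n_sensors, n_times)

# Diffusion-map step: Gaussian kernel on pairwise time-series
# distances, row-normalization, first nontrivial eigenvector.
d2 = np.sum((series[:, None, :] - series[None, :, :]) ** 2, axis=-1)
K = np.exp(-d2 / np.median(d2))
P = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(P)
phi1 = vecs[:, np.argsort(-vals.real)[1]].real   # emergent coordinate

# phi1 should order the sensors like the hidden x does (up to sign),
# so the rank correlation magnitude should be close to one.
rho, _ = spearmanr(phi1, x[perm])
print("rank correlation with hidden coordinate:", abs(rho))
```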

    Comparative Study of Homotopy Analysis and Renormalization Group Methods on Rayleigh and Van der Pol Equations

    A comparative study of the Homotopy Analysis Method and an improved Renormalization Group method is presented in the context of the Rayleigh and the Van der Pol equations. Efficient approximate formulae, as functions of the nonlinearity parameter ε, are derived for the amplitudes a(ε) of the limit cycles of both these oscillators. The improvement in the Renormalization Group analysis is achieved by invoking the idea of nonlinear time, which should have significance in a nonlinear system. Good approximate plots of the limit cycles of the oscillators concerned are also presented within this framework. Comment: 25 pages, 7 figures. Revised and upgraded: Differ Equ Dyn Syst (26 July 2015).
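    The claimed amplitudes are easy to sanity-check numerically. Below is a short sketch (not the paper's method) that integrates the Van der Pol oscillator with scipy and reads off the post-transient amplitude a(ε); classical perturbation theory gives a ≈ 2 for small ε, which the paper's formulae refine at larger ε.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, y, eps):
    # x'' - eps (1 - x^2) x' + x = 0, written as a first-order system
    x, v = y
    return [v, eps * (1.0 - x ** 2) * v - x]

for eps in [0.1, 0.5, 1.0, 2.0]:
    sol = solve_ivp(vdp, (0.0, 200.0), [0.5, 0.0], args=(eps,),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    x_tail = sol.sol(np.linspace(150.0, 200.0, 5000))[0]  # post-transient x(t)
    print(f"eps = {eps:3.1f}   amplitude a(eps) ~ {x_tail.max():.4f}")
```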

    Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins

    In this work we propose an uncertainty quantification methodology for the evolution of sedimentary basins undergoing mechanical and geochemical compaction processes, which we model as a coupled, time-dependent, non-linear, monodimensional (depth-only) system of PDEs with uncertain parameters. While in previous works (Formaggia et al., 2013; Porta et al., 2014) we assumed a simplified depositional history with only one material, in this work we consider multi-layered basins, in which each layer is characterized by a different material and hence by different properties. This setting requires several improvements with respect to our earlier works, concerning both the deterministic solver and the stochastic discretization. On the deterministic side, we replace the previous fixed-point iterative solver with a more efficient Newton solver at each step of the time discretization. On the stochastic side, the multi-layered structure gives rise to discontinuities in the dependence of the state variables on the uncertain parameters, which need appropriate treatment for surrogate modeling techniques, such as sparse grids, to be effective. To this end we propose a methodology that relies on a change of coordinate system to align the discontinuities of the target function within the random parameter space. The reference coordinate system is built by exploiting physical features of the problem at hand. In particular, we employ the locations of the material interfaces, which display a smooth dependence on the random parameters and are therefore amenable to sparse grid polynomial approximations. We showcase the capabilities of our numerical methodologies through two synthetic test cases. In particular, we show that our methodology reproduces with high accuracy the multi-modal probability density functions displayed by target state variables (e.g., porosity). Comment: 25 pages, 30 figures.
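    A minimal sketch of the discontinuity-alignment idea with an invented toy response: the output jumps across an interface y2 = s(y1) whose location s is smooth in the random parameter y1, so interpolating per side in the aligned coordinate z = y2 - s(y1) avoids any Gibbs-type error from the jump. The functions f and s are our stand-ins for the basin model, and plain grid interpolation stands in for sparse grids.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Invented toy response with a jump across the interface y2 = s(y1);
# the interface location s is smooth in the random parameter y1.
s = lambda y1: 0.3 * np.sin(2.0 * y1)
f = lambda y1, y2: np.where(y2 - s(y1) < 0.0, np.exp(y1), 2.0 + y2 ** 2)

# Build one smooth surrogate per side in the aligned coordinates
# (y1, z), z = y2 - s(y1): the jump now sits at z = 0 for every y1.
y1g = np.linspace(-1.0, 1.0, 11)
sides = {}
for name, zg in [("below", np.linspace(-1.6, -1e-9, 11)),
                 ("above", np.linspace(1e-9, 1.6, 11))]:
    Y1, Z = np.meshgrid(y1g, zg, indexing="ij")
    sides[name] = RegularGridInterpolator((y1g, zg), f(Y1, Z + s(Y1)),
                                          bounds_error=False, fill_value=None)

def surrogate(y1, y2):
    z = y2 - s(y1)
    pts = np.column_stack([y1, z])
    return np.where(z < 0.0, sides["below"](pts), sides["above"](pts))

# In aligned coordinates the error is pure interpolation error: no
# oscillation or smearing from the discontinuity remains.
rng = np.random.default_rng(2)
y = rng.uniform(-0.8, 0.8, size=(2000, 2))
err = np.abs(surrogate(y[:, 0], y[:, 1]) - f(y[:, 0], y[:, 1]))
print("max surrogate error:", err.max())
```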

    A network model of conviction-driven social segregation

    In order to measure, predict, and prevent social segregation, it is necessary to understand the factors that cause it. While space plays an essential role in most available descriptions, one outstanding question is whether and how this phenomenon is possible in a well-mixed social network. We define and solve a simple model of segregation on networks based on discrete convictions. In our model, space does not play a role, and individuals never change their conviction, but they may choose to connect socially to other individuals based on two criteria: sharing the same conviction, and individual popularity (regardless of conviction). The trade-off between these two moves defines a parameter, analogous to the "tolerance" parameter in classical models of spatial segregation. We show numerically and analytically that this parameter determines a true phase transition (somewhat reminiscent of phase separation in a binary mixture) between a well-mixed and a segregated state. Additionally, minority convictions segregate faster, and inter-specific aversion alone may lead to a segregation threshold with similar properties. Together, our results highlight the general principle that a segregation transition is possible in the absence of spatial degrees of freedom, provided that conviction-based rewiring occurs on the same time scale as popularity-based rewiring. Comment: 11 pages, 8 figures.
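    A toy simulation in the spirit of the model described above (the parameter names and the 200-node setup are ours, not the paper's): each step rewires one endpoint of a random edge either towards a same-conviction node, with probability q, or preferentially towards a popular node of any conviction. Sweeping q should move the final same-conviction edge fraction from near one half (well-mixed) towards one (segregated).

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, q = 200, 1000, 0.7        # nodes, edges, conviction-rewiring probability
conviction = rng.integers(0, 2, n)           # two discrete convictions
edges = rng.integers(0, n, (m, 2))           # random initial multigraph

for _ in range(100_000):
    e = rng.integers(m)                      # pick an edge to rewire
    keep = edges[e, 0]
    if rng.random() < q:
        # conviction-driven move: attach to a like-minded node
        pool = np.flatnonzero(conviction == conviction[keep])
    else:
        # popularity-driven move: attach to a node with probability
        # proportional to its degree (sample from the endpoint list)
        pool = edges.ravel()
    edges[e, 1] = rng.choice(pool)

same = conviction[edges[:, 0]] == conviction[edges[:, 1]]
print("fraction of same-conviction edges:", same.mean())
```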

    Investigating biocomplexity through the agent-based paradigm

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment and the flexible and heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches that assume component homogeneity in order to relate system observables through mathematical equations. While the homogeneity condition does not lead to loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems are a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines, or agents, to simulate, from the bottom up, macroscopic properties of a system. In recognizing the heterogeneity condition, they offer suitable ontologies for the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms. The integration of an agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its nascence, agent-based modelling has been used to model biological complexity across a broad range of biological scales (from cells to societies). In this article, we explore the reasons that make agent-based modelling the most precise approach for modelling biological systems that tend to be non-linear and complex.
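    As a generic illustration of the paradigm (our sketch, unrelated to the article's case studies): heterogeneous agents follow a purely local rule on a grid, and a macroscopic observable, here the fraction of agents sharing a state, emerges bottom-up from their interactions rather than from a governing equation.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(4)
size, n_agents, steps = 30, 300, 500
pos = rng.integers(0, size, (n_agents, 2))   # agent positions on a grid
state = rng.integers(0, 2, n_agents)         # heterogeneous internal state

for _ in range(steps):
    # each agent performs one step of a random walk (periodic grid)
    pos = (pos + rng.integers(-1, 2, pos.shape)) % size
    # local interaction: in each multiply-occupied cell, one agent
    # copies the internal state of a randomly chosen co-located agent
    cells = defaultdict(list)
    for idx, cell in enumerate(map(tuple, pos)):
        cells[cell].append(idx)
    for members in cells.values():
        if len(members) > 1:
            i, j = rng.choice(members, size=2, replace=False)
            state[i] = state[j]

# the macroscopic observable emerges from microscopic copying events
print("final fraction of agents in state 1:", state.mean())
```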

    An Epidemiological Model with Simultaneous Recoveries

    Epidemiological models are an essential tool in understanding how infection spreads throughout a population. Exploring the effects of varying parameters provides insight into the driving forces of an outbreak. In this thesis, an SIS (susceptible-infectious-susceptible) model is built by partnering simulation methods, differential equations, and transition matrices with the intent to describe how simultaneous recoveries influence the spread of a disease in a well-mixed population. Individuals in the model transition between only two states: an individual is either susceptible (able to be infected) or infectious (able to infect others). Events in this model (infections and recoveries) occur by way of a Poisson process. In a well-mixed population, individuals in either state interact at a constant rate, and interactions have the potential to infect a susceptible individual (infection event). Recovery events, during which infectives transition from infectious to susceptible, occur at a constant rate for each infected individual. SIS models mimic the behavior of diseases that do not confer immunity to those previously infected; examples of such diseases are the common cold, head lice, and many STIs [2]. This model describes the effect the scale of recovery events has on an outbreak: for each recovery event, k infectives recover. The rate at which recovery events occur is inversely proportional to k in order to maintain the average per-capita rate of recovery. A system of ordinary differential equations (ODEs) is derived and supported by simulated data to describe the first and second moments (used to describe mean and variance) of the probability density function defining the number of infectious individuals in the population. Additionally, a Markov chain describes the process via transition matrices, which provide insight into extinctions caused by large-scale recoveries and their effect on the mean. The research shows that as k increases, there is a statistically significant decline in the average infection level and an increase in the standard deviation. The most extreme changes in the average infection level are observed under conditions that increase the probability of extinction. Even in small populations where the decreased infection level is not biologically significant, the results are beneficial: because large-scale recovery events have no negative impact on average infection levels, treatment methods that may reduce costs and increase accessibility could be adopted. Healthcare professionals utilize epidemiological models to understand the severity of an outbreak and the effectiveness of treatment methods. A key feature of mathematically modelling real-world processes is the level of abstraction it offers, making the models applicable to many fields of study. For instance, those interested in agricultural development use these models to treat crops efficiently and optimize yield, cybersecurity experts use them to investigate computer viruses and worms, and ecologists implement them when studying seed dispersal.
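    A Gillespie-style sketch of the SIS dynamics with k-fold recovery events as described above; the rate constants and population size are illustrative choices, not the thesis's. Infections occur at rate βSI/N; a recovery event removes min(k, I) infectives and occurs at rate γI/k, so the average per-capita recovery rate stays γ for every k.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(k, N=200, beta=0.3, gamma=0.1, I0=10, t_end=500.0):
    t, I, trace = 0.0, I0, []
    while t < t_end and I > 0:
        rate_inf = beta * (N - I) * I / N    # mass-action infection rate
        rate_rec = gamma * I / k             # rate of k-fold recovery events
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)    # exponential waiting time
        if rng.random() < rate_inf / total:
            I += 1                           # one new infection
        else:
            I = max(I - k, 0)                # k simultaneous recoveries
        trace.append(I)
    return np.array(trace)

# Larger k should lower the mean infection level and raise its spread.
for k in [1, 5, 20]:
    runs = [simulate(k) for _ in range(20)]
    tails = np.concatenate([r[len(r) // 2:] for r in runs])
    print(f"k = {k:2d}   mean I = {tails.mean():6.2f}   sd = {tails.std():.2f}")
```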

    Invariance of visual operations at the level of receptive fields

    Receptive field profiles registered by cell recordings have shown that mammalian vision has developed receptive fields tuned to different sizes and orientations in the image domain, as well as to different image velocities in space-time. This article presents a theoretical model by which families of idealized receptive field profiles can be derived mathematically from a small set of basic assumptions that correspond to structural properties of the environment. The article also presents a theory for how basic invariance properties to variations in scale, viewing direction, and relative motion can be obtained from the output of such receptive fields, using complementary selection mechanisms that operate over the output of families of receptive fields tuned to different parameters. Thereby, the theory shows how basic invariance properties of a visual system can be obtained already at the level of receptive fields, and we can explain the different shapes of receptive field profiles found in biological vision from a requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment. Comment: 40 pages, 17 figures.
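    A small numerical sketch of one ingredient of such a theory: Gaussian-derivative receptive fields over a range of scales, with scale invariance obtained by selecting the scale that maximizes the scale-normalized response (here σ²|L_xx|, a standard scale-selection mechanism). The 1-D blob stimulus and all parameter values are our choices; for this stimulus the normalized second-derivative response is known to peak at σ = √2 times the blob width.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)

# Synthetic 1-D "image": a Gaussian blob of width sigma_true plus noise.
x = np.arange(512)
sigma_true = 12.0
signal = np.exp(-(x - 256.0) ** 2 / (2.0 * sigma_true ** 2))
signal = signal + 0.01 * rng.standard_normal(x.size)

# Second-derivative-of-Gaussian receptive fields over a range of scales;
# the scale-normalized response sigma^2 |L_xx| at the blob centre peaks
# near sigma = sqrt(2) * sigma_true (about 17 here) for this stimulus.
best = (0.0, None)
for sigma in np.geomspace(2.0, 40.0, 25):
    Lxx = gaussian_filter(signal, sigma, order=2)    # receptive-field output
    response = sigma ** 2 * abs(Lxx[256])            # normalized response
    if response > best[0]:
        best = (response, sigma)
print(f"selected scale: {best[1]:.1f}  (blob width: {sigma_true})")
```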