
    Revisiting the Complexity of Stability of Continuous and Hybrid Systems

    We develop a framework to give upper bounds on the "practical" computational complexity of stability problems for a wide range of nonlinear continuous and hybrid systems. To do so, we describe stability properties of dynamical systems using first-order formulas over the real numbers, and reduce stability problems to the delta-decision problems of these formulas. The framework allows us to obtain a precise characterization of the complexity of different notions of stability for nonlinear continuous and hybrid systems. We prove that bounded versions of the stability problems are generally decidable, and give upper bounds on their complexity. The unbounded versions are generally undecidable, and for these we give upper bounds on their degrees of unsolvability.
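
    As a hedged illustration of the reduction described above (the abstract gives no formulas, so the symbols here are our own), a bounded, time-limited notion of stability for a system dx/dt = f(x) with solution map phi can be written as a first-order formula over the reals:

        \forall \varepsilon \in (0, E]\; \exists \delta \in (0, D]\; \forall x_0\; \forall t \in [0, T]\colon
        \quad \|x_0\| < \delta \;\rightarrow\; \|\phi(t, x_0)\| < \varepsilon

    Deciding the delta-weakening of such a formula (with strict inequalities relaxed by a chosen delta) is the delta-decision problem to which the stability question is reduced; the bounds E, D, and T are what make the bounded version of the problem decidable.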

    Computational models of hemostasis: Degrees of complexity

    The history of studies on blood clotting goes back to the emergence of civilized society itself. The foundations of the modern scientific study of hemostasis are based on the discovery of erythrocytes in blood in 1674 and, later, that of platelets in 1842. The causes of thrombosis are encapsulated in the Virchow Triad (dated to 1856), which refers, in modern terms, to hypercoagulability, alterations of hemodynamics (stasis), and endothelial injury. The understanding of coagulation, the network of reactions that underlies hemostasis and thrombosis, has evolved from a cascade (in 1964) into spatially distinct sets of reactions dependent on co-factors, occurring on different cells in different tissues and linked together by diffusion and flow (as of 2015). Correspondingly, mathematical/computational models for hemostasis and thrombosis (which involve coagulation along with platelet aggregation in the presence of flow) have evolved in design complexity from continuum temporal (or “homogeneous”) models, to continuum spatio-temporal models (with or without flow), and lately to discrete-continuum spatio-temporal models with flow. After a brief listing of the discoveries and historical personae that contributed to the understanding of hemostasis up to the present, the development of mathematical/computational models is traced from the late 1980s, when they started gaining importance. Influential models are then highlighted. The models are reviewed in increasing order of design complexity (one of four possible methods of classification). The physiological significance of each and the insights they offer into hemostasis regulation are explained. © 2022 The Author
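
    To make the modeling taxonomy concrete, a "continuum temporal" (homogeneous) model treats the blood as well mixed and tracks only concentrations over time. The toy system below is a hedged sketch in that spirit, not any specific published model; its species, rate constants, and values are illustrative only:

        # Toy homogeneous (well-mixed, no space, no flow) coagulation-style model:
        # thrombin is produced from prothrombin with positive feedback and removed
        # by a lumped inhibition term. Purely illustrative parameters.
        from scipy.integrate import solve_ivp

        def cascade(t, y, k_act=1.0, k_inh=0.35):
            prothrombin, thrombin = y
            activation = k_act * prothrombin * thrombin
            return [-activation, activation - k_inh * thrombin]

        sol = solve_ivp(cascade, (0.0, 50.0), [1.0, 1e-3])
        print(sol.y[1, -1])  # final thrombin level, arbitrary units

    Spatio-temporal models add diffusion (and optionally advection by flow) to such kinetics, and discrete-continuum models additionally resolve individual platelets.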

    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process from an initial instance to the final acceptance is motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). An irreversible process with three or more degrees of freedom is found to be intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions will affect subsequently available sets of decisions. The state space of a non-deterministic finite automaton evolves due to the computation itself; hence it cannot be efficiently contracted by a deterministic finite automaton, which will arrive at a solution only in super-polynomial time. The solution of the NP problem itself is verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history; hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the set of class NP. Since the computational time to contract a given set is proportional to dissipation, the computational complexity class P is a subset of NP. Comment: 16 pages, 7 figures
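
    The asymmetry drawn above between verifying a stationary solution and generating one through a history-dependent search can be illustrated with a standard textbook example, not taken from the paper; subset sum is used here purely for concreteness:

        # Illustration (ours, not the paper's) of cheap verification vs. expensive
        # search: checking a proposed certificate is polynomial in its length,
        # while exhaustive search branches like a non-deterministic state space.
        from itertools import combinations

        def verify(numbers, target, certificate):
            # Polynomial-time check of a proposed subset.
            return all(c in numbers for c in certificate) and sum(certificate) == target

        def search(numbers, target):
            # Worst-case exponential enumeration of candidate subsets.
            for r in range(len(numbers) + 1):
                for subset in combinations(numbers, r):
                    if sum(subset) == target:
                        return list(subset)
            return None

        nums = [3, 9, 8, 4, 5, 7]
        cert = search(nums, 15)
        print(cert, verify(nums, 15, cert))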

    Efficient regularized isotonic regression with application to gene–gene interaction search

    Isotonic regression is a nonparametric approach for fitting monotonic models to data that has been widely studied from both theoretical and practical perspectives. However, this approach encounters computational and statistical overfitting issues in higher dimensions. To address both concerns, we present an algorithm, which we term Isotonic Recursive Partitioning (IRP), for isotonic regression based on recursively partitioning the covariate space through solution of progressively smaller "best cut" subproblems. This creates a regularized sequence of isotonic models of increasing model complexity that converges to the global isotonic regression solution. The models along the sequence are often more accurate than the unregularized isotonic regression model because of the complexity control they offer. We quantify this complexity control through estimation of degrees of freedom along the path. Success of the regularized models in prediction and IRP's favorable computational properties are demonstrated through a series of simulated and real data experiments. We discuss application of IRP to the problem of searching for gene–gene interactions and epistasis, and demonstrate it on data from genome-wide association studies of three common diseases. Comment: Published at http://dx.doi.org/10.1214/11-AOAS504 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
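
    IRP itself is not available in standard libraries; as a hedged sketch of its ingredients, the snippet below fits the global (unregularized) isotonic solution that the IRP sequence converges to, using scikit-learn, and mimics a single "best cut" step by choosing the split of an ordered response that most reduces within-piece squared error while keeping the left mean below the right mean. Function names and data are illustrative only, not the authors' implementation:

        # Minimal sketch (not the authors' IRP code): global isotonic fit plus one
        # illustrative "best cut" partitioning step on ordered data.
        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0, 1, 200))
        y = np.sqrt(x) + rng.normal(scale=0.1, size=x.size)    # monotone signal + noise

        global_fit = IsotonicRegression().fit_transform(x, y)  # full-complexity solution

        def best_cut(values):
            # Choose the split minimizing within-piece SSE while keeping the
            # piecewise-constant fit monotone (left mean <= right mean).
            best_i, best_sse = None, np.inf
            for i in range(1, len(values)):
                left, right = values[:i], values[i:]
                if left.mean() > right.mean():
                    continue
                sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
                if sse < best_sse:
                    best_i, best_sse = i, sse
            return best_i, best_sse

        print(best_cut(y), global_fit[:3])

    Repeating such cuts recursively within each piece yields a sequence of increasingly complex isotonic fits, which is the regularization path the abstract describes.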