Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime
The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances that are 'hard' for the classical complexity classes NP, #P, and CH; restricted to analytic functions, however, they map polynomial-time computable ones to polynomial-time computable ones -- albeit non-uniformly!

We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on a given f are sufficient to obtain similar data on Max(f) and int(f), and within what running time in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Our proof techniques mainly involve the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
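For orientation, here is the standard definition of the Gevrey hierarchy referred to above (the constants C, R and the level ℓ are our notation; the abstract itself fixes none):

```latex
% Gevrey class of level \ell >= 1 on [0,1] (standard definition):
% \ell = 1 gives exactly the analytic functions; growing \ell climbs
% toward, without exhausting, the smooth (C^\infty) functions.
f \in G^{\ell}([0,1])
  \;:\Longleftrightarrow\;
  \exists\, C, R > 0 \;\forall k \in \mathbb{N}:\quad
  \max_{x \in [0,1]} \bigl| f^{(k)}(x) \bigr|
  \;\le\; C \cdot R^{k} \cdot (k!)^{\ell}
```

It is along this level ℓ that the transition from polytime to NP-hard maximization takes place.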
Computational Complexity of Smooth Differential Equations
The computational complexity of the solution h to the ordinary differential equation h(0) = 0, h'(t) = g(t, h(t)), under various assumptions on the function g, has been investigated. Kawamura showed in 2010 that the solution h can be PSPACE-hard even if g is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of g and obtain the following results: the solution h can still be PSPACE-hard if g is assumed to be of class C^1; for each k ≥ 2, the solution h can be hard for the counting hierarchy CH even if g is of class C^k.
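To make the bit-complexity setting concrete, here is a minimal sketch (ours, not the paper's construction) of computing such an IVP solution to prescribed absolute error 2^-n in exact dyadic arithmetic; the Lipschitz and boundedness assumptions on g are hypothetical inputs:

```python
from fractions import Fraction

def dyadic_round(x: Fraction, p: int) -> Fraction:
    """Round x to the nearest multiple of 2**-p (keeps denominators small)."""
    return Fraction(round(x * 2**p), 2**p)

def solve_ivp_bits(g, n: int) -> Fraction:
    """Illustration only: approximate h(1) for h(0) = 0, h'(t) = g(t, h(t))
    on [0,1] to error about 2**-n via Euler steps, assuming (unverified)
    that g is 1-Lipschitz and moderately bounded."""
    steps = 2 ** (n + 4)        # fine enough for the standard Euler error bound
    dt = Fraction(1, steps)
    t = h = Fraction(0)
    for _ in range(steps):
        # one Euler step, rounded back to n + 20 binary digits so exact
        # rationals never blow up over exponentially many steps
        h = dyadic_round(h + dt * g(t, h), n + 20)
        t += dt
    return h

# h'(t) = h(t) + 1, h(0) = 0 has closed form h(t) = e^t - 1:
print(float(solve_ivp_bits(lambda t, y: y + 1, n=8)))  # ~1.71828 = e - 1
```

The exponential-in-n step count is no accident: the PSPACE- and CH-hardness results above say that, for general polynomial-time computable g, no algorithm can get away with polynomially many bit operations.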
Heart failure with preserved ejection fraction according to the HFA-PEFF score in COVID-19 patients: clinical correlates and echocardiographic findings
Aims: Viral-induced cardiac inflammation can induce heart failure with preserved ejection fraction (HFpEF)-like syndromes. COVID-19 can lead to myocardial damage and vascular injury. We hypothesised that COVID-19 patients frequently develop an HFpEF-like syndrome, and designed this study to explore this hypothesis.
Methods and results: Cardiac function was assessed in 64 consecutive, hospitalized, and clinically stable COVID-19 patients from April-November 2020 with left ventricular ejection fraction (LVEF) ≥50% (age 56 ± 19 years, females: 31%, severe COVID-19 disease: 69%). To investigate the likelihood of HFpEF presence, we used the HFA-PEFF score. A low (0-1 points), intermediate (2-4 points), and high (5-6 points) HFA-PEFF score was observed in 42%, 33%, and 25% of patients, respectively. In comparison, 64 subjects of similar age, sex, and comorbidity status without COVID-19 showed these scores in 30%, 66%, and 4%, respectively (between groups: P = 0.0002). High HFA-PEFF scores were more frequent in COVID-19 patients than in controls (25% vs. 4%, P = 0.001). In COVID-19 patients, the HFA-PEFF score correlated significantly with age, estimated glomerular filtration rate, high-sensitivity troponin T (hsTnT), haemoglobin, QTc interval, LVEF, mitral E/A ratio, and H2FPEF score (all P < 0.05). In multivariate ordinal regression analyses, higher age and hsTnT were significant predictors of increased HFA-PEFF scores. Patients with myocardial injury (hsTnT ≥14 ng/L: 31%) showed higher HFA-PEFF scores than patients without myocardial injury [median 5 (interquartile range 3-6) vs. 1 (0-3), P < 0.001] and more often showed left ventricular diastolic dysfunction (75% vs. 27%, P < 0.001).
Conclusion: Hospitalized COVID-19 patients frequently show a high likelihood of HFpEF, associated with cardiac structural and functional alterations and myocardial injury. Detailed cardiac assessments, including echocardiographic determination of left ventricular diastolic function and biomarkers, should become routine in the care of hospitalized COVID-19 patients.
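As a reading aid, the score banding used above is mechanical enough to state as code; a minimal sketch (band boundaries taken from the abstract, everything else hypothetical):

```python
def hfa_peff_band(score: int) -> str:
    """Map a total HFA-PEFF score (0-6 points) to the likelihood bands
    used in the study: low (0-1), intermediate (2-4), high (5-6)."""
    if not 0 <= score <= 6:
        raise ValueError("HFA-PEFF score must lie between 0 and 6")
    if score <= 1:
        return "low"
    return "intermediate" if score <= 4 else "high"

# Reported distribution in the COVID-19 cohort: 42% low, 33% intermediate,
# 25% high (vs. 30% / 66% / 4% in matched controls).
print(hfa_peff_band(5))  # 'high'
```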
Closed Sets and Operators thereon: Representations, Computability and Complexity
The TTE approach to Computable Analysis is the study of so-called representations (encodings for continuous objects such as reals, functions, and sets) with respect to the notions of computability they induce. A rich variety of such representations has been devised over the past decades, particularly regarding closed subsets of Euclidean space and subclasses thereof (such as compact subsets). In addition, they have been compared and classified with respect to both non-uniform computability of single sets and uniform computability of operators on sets. In this paper we refine these investigations from the point of view of computational complexity. Benefiting from the concept of second-order representations and complexity recently devised by Kawamura & Cook (2012), we determine parameterized complexity bounds for operators such as union, intersection, projection, and more generally function image and inversion. By identifying natural parameters in addition to the output precision, we obtain a uniform view on results by Ko (1991-2013), Braverman (2004/05), and Zhao & Müller (2008), relating these problems to the P/UP/NP question in discrete complexity theory.
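To make the objects concrete, a toy illustration (our own, far simpler than the paper's second-order representations): closed subsets of the reals given by oracles for their distance functions, under which union is trivially uniform while intersection already fails to be:

```python
# Represent a closed set A ⊆ R by its distance function
# d_A(x) = inf { |x - a| : a in A }.  Then
#   d_{A ∪ B} = min(d_A, d_B)            (exact),
# whereas max(d_A, d_B) merely bounds d_{A ∩ B} from below -- e.g. for
# A = [0, 1], B = [2, 3] the max stays finite although A ∩ B is empty.
# This asymmetry is a toy shadow of why operators like intersection
# need extra parameters in the complexity analysis above.

def dist_interval(lo: float, hi: float):
    """Distance function of the closed interval [lo, hi]."""
    return lambda x: max(lo - x, 0.0, x - hi)

d_A = dist_interval(0.0, 1.0)
d_B = dist_interval(2.0, 3.0)

d_union = lambda x: min(d_A(x), d_B(x))      # exact representation of A ∪ B
d_inter_lb = lambda x: max(d_A(x), d_B(x))   # only a lower bound for A ∩ B

print(d_union(1.5))     # 0.5: distance of 1.5 to A ∪ B
print(d_inter_lb(1.5))  # 0.5, although A ∩ B = ∅ has distance +infinity
```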
Computational benefit of smoothness: Parameterized bit-complexity of numerical operators on analytic functions and Gevrey’s hierarchy
The synthesis of (discrete) Complexity Theory with Recursive Analysis provides a quantitative algorithmic foundation to calculations over real numbers, sequences, and functions by approximation up to prescribable absolute error 1/2^n (roughly corresponding to n binary digits after the radix point). In this sense Friedman and Ko have shown the seemingly simple operators of maximization and integration ‘complete’ for the standard complexity classes NP and #P -- even when restricted to smooth (= C∞) arguments. Analytic polynomial-time computable functions, on the other hand, are known to get mapped to polynomial-time computable functions: non-uniformly, that is, disregarding dependences other than on the output precision n.

The present work investigates the uniform parameterized complexity of natural operators Λ on subclasses of smooth functions: evaluation, pointwise addition and multiplication, (iterated) differentiation, integration, and maximization. We identify natural integer parameters k = k(f) which, when given as enrichment to approximations of the function argument f, permit one to computably produce approximations to Λ(f); and we explore the asymptotic worst-case running time sufficient and necessary for such computations in terms of the output precision n and said k.

It turns out that Maurice Gevrey’s 1918 classical hierarchy climbing from analytic to (just below) smooth functions provides a quantitative gauge of the uniform computational complexity of maximization and integration that, non-uniformly, exhibits the phase transition from tractable (i.e. polynomial-time) to intractable (in the sense of NP-‘hardness’). Our proof methods involve Hard Analysis, Approximation Theory, and an adaptation of Information-Based Complexity to the bit model.
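For a feel of such parameter-driven running times, a hedged sketch (ours, not the paper's algorithm) of the maximization operator when the enrichment is merely a Lipschitz bound L:

```python
def max_with_lipschitz(f, L: float, n: int) -> float:
    """Approximate max of f over [0,1] to absolute error 2**-n, given
    (unverified) that f is L-Lipschitz: sample a grid so fine that f
    varies by less than 2**-n within each cell."""
    steps = int(L * 2**n) + 1
    return max(f(i / steps) for i in range(steps + 1))

# Example: f(x) = x(1 - x) is 1-Lipschitz on [0,1] with maximum 1/4.
print(max_with_lipschitz(lambda x: x * (1 - x), L=1.0, n=10))  # ≈ 0.25
```

The grid has about L·2^n points, so the cost grows exponentially with the precision n; the phase transition described above says this brute force can be avoided for analytic and Gevrey inputs, but in the merely smooth case NP-hardness rules out any general polynomial-time replacement.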