
    Turing machines can be efficiently simulated by the General Purpose Analog Computer

    The Church-Turing thesis states that any sufficiently powerful computational model which captures the notion of algorithm is computationally equivalent to the Turing machine. This equivalence usually holds both at the computability level and, modulo polynomial reductions, at the computational complexity level. However, the situation is less clear for models of computation using real numbers, and no analog of the Church-Turing thesis exists for this case. Recently it was shown that some models of computation with real numbers are equivalent from a computability perspective. In particular, Shannon's General Purpose Analog Computer (GPAC) was shown to be equivalent to Computable Analysis. However, little is known about what happens at the computational complexity level. In this paper we shed some light on the connections between these two models at the complexity level by showing that, modulo polynomial reductions, computations of Turing machines can be simulated by GPACs without using more (space) resources than the original Turing computation, as long as we restrict attention to bounded computations. In other words, computations done by the GPAC are as space-efficient as computations done in the context of Computable Analysis.
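
    The abstract does not spell out the simulation, but a common device in the GPAC/ODE simulation literature is to encode the Turing machine's tape contents into real numbers of bounded magnitude, so that tape operations become arithmetic on those reals. A toy Python sketch of such an encoding (an illustration only, not the paper's construction; the base and helper names are our own):

    def encode_tape(symbols, base=4):
        """Encode tape symbols (ints in [0, base-1]) as the real number
        sum_i symbols[i] * base**-(i+1)."""
        return sum(s * base ** -(i + 1) for i, s in enumerate(symbols))

    def read_and_shift(x, base=4):
        """Return the symbol under the head (the leading base-b digit of x)
        together with the encoding of the remaining tape."""
        s = int(x * base)            # leading digit = scanned symbol
        return s, x * base - s       # tape shifted one cell

    # Example: tape [1, 0, 2] in base 4
    x = encode_tape([1, 0, 2])
    s, rest = read_and_shift(x)
    assert (s, rest) == (1, encode_tape([0, 2]))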

    The general purpose analog computer and computable analysis are two equivalent paradigms of analog computation

    In this paper we revisit one of the first models of analog computation, Shannon's General Purpose Analog Computer (GPAC). The GPAC has often been argued to be weaker than computable analysis. As our main contribution, we show that if we change the notion of GPAC computability in a natural way, the GPAC computes exactly the real computable functions (in the sense of computable analysis). Moreover, since GPACs are equivalent to systems of polynomial differential equations, it follows that all real computable functions can be defined by such systems.
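
    As a concrete illustration of the polynomial-ODE view mentioned above: the textbook example of a GPAC-generable function is the sine, which arises from the polynomial system y1' = y2, y2' = -y1 with y(0) = (0, 1). A minimal numerical sketch (SciPy is our implementation choice here, not anything prescribed by the paper):

    import numpy as np
    from scipy.integrate import solve_ivp

    # y1' = y2, y2' = -y1 with y(0) = (0, 1) has solution (sin t, cos t).
    rhs = lambda t, y: [y[1], -y[0]]
    ts = np.linspace(0.0, 2 * np.pi, 50)
    sol = solve_ivp(rhs, (0.0, ts[-1]), [0.0, 1.0],
                    t_eval=ts, rtol=1e-9, atol=1e-9)

    # The first component tracks sin(t) to high accuracy.
    assert np.allclose(sol.y[0], np.sin(ts), atol=1e-6)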

    Solving analytic differential equations in polynomial time over unbounded domains

    In this paper we consider the computational complexity of solving initial-value problems defined by analytic ordinary differential equations (ODEs) over unbounded domains of R^n and C^n, in the Computable Analysis setting. We show that the solution can be computed in polynomial time over its maximal interval of definition, provided it satisfies a very generous bound on its growth and the function admits an analytic extension to the complex plane.
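
    The abstract does not describe the algorithm itself; a standard ingredient in polynomial-time solvers for analytic ODEs is stepwise truncated Taylor expansion, with the local series coefficients computed by a recurrence. A toy sketch for the analytic IVP y' = y, y(0) = 1, whose Taylor coefficients satisfy a_{k+1} = a_k / (k+1) (a hedged illustration, not the paper's method):

    import math

    def taylor_step(y0, h, order=20):
        """One truncated-Taylor step for y' = y starting from y0:
        returns sum over k <= order of (y0 / k!) * h**k."""
        coeff, acc, hk = y0, y0, 1.0
        for k in range(1, order + 1):
            coeff /= k        # a_k = y0 / k!
            hk *= h           # h**k
            acc += coeff * hk
        return acc

    # Integrate from t = 0 to t = 1 in ten steps; the exact value is e.
    y, h = 1.0, 0.1
    for _ in range(10):
        y = taylor_step(y, h)
    assert abs(y - math.e) < 1e-12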

    Novel community data in ecology-properties and prospects

    New technologies for monitoring biodiversity, such as environmental (e)DNA, passive acoustic monitoring, and optical sensors, promise to generate automated spatiotemporal community observations at unprecedented scales and resolutions. Here, we introduce ‘novel community data’ as an umbrella term for these data. We review the emerging field around novel community data, focusing on new ecological questions that could be addressed; the analytical tools available or needed to make best use of these data; and the potential implications of these developments for policy and conservation. We conclude that novel community data offer many opportunities to advance our understanding of fundamental ecological processes, including community assembly, biotic interactions, micro- and macroevolution, and overall ecosystem functioning.

    ARIA 2016: Care pathways implementing emerging technologies for predictive medicine in rhinitis and asthma across the life cycle

    The Allergic Rhinitis and its Impact on Asthma (ARIA) initiative commenced during a World Health Organization workshop in 1999. The initial goals were (1) to propose a new allergic rhinitis classification, (2) to promote the concept of multi-morbidity in asthma […]

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
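
    For orientation, here is a minimal sketch of one of the baselines named above, Term Frequency-Inverse Document Frequency with cosine similarity, using scikit-learn; the tiny corpus and the ranking code are illustrative assumptions, not the consortium's actual evaluation pipeline:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "analog computation with polynomial differential equations",  # seed article
        "computability of differential equation models of analog computers",
        "environmental DNA monitoring of ecological communities",
    ]

    tfidf = TfidfVectorizer().fit_transform(documents)
    # Score every candidate (rows 1..) against the seed article (row 0)
    # and rank the candidates from most to least similar.
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    ranking = scores.argsort()[::-1]
    print(ranking, scores)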