
    An analysis of the Grünwald–Letnikov scheme for initial-value problems with weakly singular solutions

    A convergence analysis is given for the Grünwald–Letnikov discretisation of a Riemann–Liouville fractional initial-value problem on a uniform mesh t_m = mτ with m = 0, 1, …, M. For given smooth data, the unknown solution of the problem will usually have a weak singularity at the initial time t = 0. Our analysis is the first to prove a convergence result for this method while assuming such non-smooth behaviour in the unknown solution. In part our study imitates previous analyses of the L1 discretisation of such problems, but the introduction of some additional ideas enables exact formulas for the stability multipliers in the Grünwald–Letnikov analysis to be obtained (the earlier L1 analyses yielded only estimates of their stability multipliers). Armed with this information, it is shown that the solution computed by the Grünwald–Letnikov scheme has error O(τ t_m^{α−1}) at each mesh point t_m; hence the scheme is globally only O(τ^α) accurate, but it is O(τ) accurate for mesh points t_m that are bounded away from t = 0. Numerical results for a test example show that these theoretical results are sharp.
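    To make the scheme concrete, the following is a minimal sketch (our own, not the paper's code) of the Grünwald–Letnikov discretisation on a uniform mesh, applied to the simple test problem D^α u = Γ(α+1) with u(0) = 0, whose exact solution u(t) = t^α has the typical weak singularity at t = 0; the choice of test problem and all names in the code are illustrative assumptions.

        import numpy as np
        from math import gamma

        alpha = 0.4          # fractional order, 0 < alpha < 1
        T = 1.0              # final time

        def gl_solve(M):
            """Solve D^alpha u = Gamma(alpha+1), u(0) = 0, by the Gruenwald-Letnikov
            scheme on the uniform mesh t_m = m*tau; the exact solution is u(t) = t^alpha,
            since the Riemann-Liouville derivative of t^alpha is Gamma(alpha+1)."""
            tau = T / M
            t = np.linspace(0.0, T, M + 1)
            # GL weights g_k = (-1)^k * binom(alpha, k) via the standard recurrence
            g = np.ones(M + 1)
            for k in range(1, M + 1):
                g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
            u = np.zeros(M + 1)                      # u_0 = u(0) = 0
            for m in range(1, M + 1):
                # tau^(-alpha) * sum_{k=0}^{m} g_k u_{m-k} = f(t_m), solved for u_m (g_0 = 1)
                u[m] = tau**alpha * gamma(alpha + 1.0) - np.dot(g[1:m + 1], u[m - 1::-1])
            return t, u

        for M in (64, 128, 256):
            t, u = gl_solve(M)
            err = np.abs(u - t**alpha)
            # near t = 0 the error decays like tau^alpha; away from t = 0, like tau
            print(M, err[1], err.max(), err[-1])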

    Abel's limit theorem, its converse, and multiplication formulae for Γ(x)

    Abel's well-known limit theorem for power series, and its corrected converse due to J. E. Littlewood, form the basis for a general identity presented here, which is shown to be equivalent to Gauss's multiplication theorem for the Gamma function.
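    For reference, the classical statements involved (standard results quoted from the general literature, not from this paper) are Abel's limit theorem, Littlewood's Tauberian converse, and Gauss's multiplication theorem for the Gamma function:

        \[
        \sum_{n\ge 0} a_n = s \;\Longrightarrow\; \lim_{x\to 1^-} \sum_{n\ge 0} a_n x^n = s
        \qquad\text{(Abel)},
        \]
        \[
        a_n = O(1/n) \ \text{and} \ \lim_{x\to 1^-} \sum_{n\ge 0} a_n x^n = s
        \;\Longrightarrow\; \sum_{n\ge 0} a_n = s
        \qquad\text{(Littlewood)},
        \]
        \[
        \prod_{k=0}^{n-1} \Gamma\!\Big(z + \tfrac{k}{n}\Big)
        = (2\pi)^{(n-1)/2}\, n^{\frac12 - nz}\, \Gamma(nz)
        \qquad\text{(Gauss)}.
        \]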

    A proof, a consequence and an application of Boole's combinatorial identity

    Boole's combinatorial identity is proved, and a consequence of it for analytic functions is derived and used to evaluate a sequence of integrals in terms of Euler's secant sequence of integers.
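    For context, one identity commonly attributed to Boole is the finite-difference formula below (a standard fact; whether this is precisely the form studied in the paper is our assumption):

        \[
        \sum_{k=0}^{n} (-1)^{k} \binom{n}{k} (x+k)^{n} = (-1)^{n}\, n! \qquad \text{for all } x .
        \]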

    Moser's Inequality for a class of integral operators

    Let 1 < p < ∞ and q = p/(p−1) > 0. Moser's Inequality states that there is a constant C_p such that sup_{a ≤ 1} sup_{f ∈ B_p} ∫_0^∞ exp[a x^q |F(x)|^q − x] dx = C_p, where B_p is the unit ball of L^p. Moreover, the value a = 1 is sharp. We observe that F = K_1 f, where the integral operator K_1 has a simple kernel K. We consider the question of for what kernels K(t,x), 0 ≤ t, x < ∞, this result can be extended, and proceed to discuss this when K is non-negative and homogeneous of degree −1. A sufficient condition on K is found for the analogue of Moser's Inequality to hold. An internal constant ψ, the counterpart of the constant a, arises naturally. We give a condition on K under which ψ is sharp. Some applications are discussed.
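    One natural reading of the statement (our interpretation, not quoted from the paper) is that K_1 is the Hardy averaging operator, whose kernel is non-negative and homogeneous of degree −1, so that the exponent in the integrand reduces to the one in Moser's original inequality:

        \[
        (K_1 f)(x) = \frac{1}{x}\int_0^{x} f(t)\,dt ,
        \qquad K_1(t,x) = x^{-1}\,\mathbf{1}_{\{0 \le t \le x\}} ,
        \qquad x^{q}\,|(K_1 f)(x)|^{q} = \Big(\int_0^{x} f(t)\,dt\Big)^{q} \quad (f \ge 0).
        \]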

    Likelihood ratio tests for equality of shape under varying degrees of orientation invariance

    We consider a problem from image cytometry where the objective is to describe possible changes in the shape and orientation of cellular nuclei after treatment with a toxin. The shapes of nuclei are represented by individual ellipses. It is argued that the shape comparison problem can be formulated as a generalization of a hypothesis test for the equality of covariance matrices. For many cell types, the test statistic should be invariant with respect to orientations of the cells. For other cell types, the test statistic should be equivariant with respect to orientations of the cells, but invariant with respect to orientations of the images. Likelihood ratio tests (LRTs) are derived under a Wishart model. The likelihood maximization uses a new result about the minimization of the determinant of a sum of matrices under individual rotations. The applicability and limitations of these LRTs are demonstrated by means of simulation experiments. The reference distributions of the test statistics under the null hypothesis are obtained using unrestricted and restricted randomization procedures. Justification for the Wishart model is provided using a residual diagnostic method. The scientific implications of the results are considered. Keywords: shape analysis; likelihood ratio test; orientation invariance. MSC: 57N25; 47N60; 58J70.
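    To make the baseline concrete, here is a minimal sketch (our own construction, not the authors' code) of the classical Gaussian/Wishart likelihood-ratio test for equality of two covariance matrices; the paper's tests generalise this by imposing invariance or equivariance with respect to rotations of the fitted ellipses. Function and variable names are illustrative assumptions.

        import numpy as np
        from scipy.stats import chi2

        def lrt_equal_cov(x1, x2):
            """-2 log likelihood ratio for H0: Sigma1 == Sigma2 under a Gaussian model,
            with the asymptotic chi-square p-value (df = d(d+1)/2)."""
            n1, d = x1.shape
            n2, _ = x2.shape
            s1 = np.cov(x1, rowvar=False, bias=True)      # MLE covariance of sample 1
            s2 = np.cov(x2, rowvar=False, bias=True)      # MLE covariance of sample 2
            s0 = (n1 * s1 + n2 * s2) / (n1 + n2)          # pooled MLE under H0
            stat = ((n1 + n2) * np.linalg.slogdet(s0)[1]
                    - n1 * np.linalg.slogdet(s1)[1]
                    - n2 * np.linalg.slogdet(s2)[1])
            return stat, chi2.sf(stat, d * (d + 1) // 2)

        rng = np.random.default_rng(0)
        a = rng.multivariate_normal([0, 0], [[2.0, 0.5], [0.5, 1.0]], size=200)
        b = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=200)
        print(lrt_equal_cov(a, b))    # unequal covariances: large statistic, small p-value expected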

    Frailty assessment and acute frailty service provision in the UK: results of a national ‘day of care’ survey

    Background: The incorporation of acute frailty services into the acute care pathway is increasingly common. The prevalence and impact of acute frailty services in the UK are currently unclear. Methods: The Society for Acute Medicine Benchmarking Audit (SAMBA) is a day-of-care survey undertaken annually within the UK. SAMBA 2019 (SAMBA19) took place on Thursday 27th June 2019. A questionnaire was used to collect hospital- and patient-level data on the structure and organisation of acute care delivery. SAMBA19 sought to establish the frequency of frailty assessment tool use and describe acute frailty services nationally. Hospitals were classified based on the presence of acute frailty services, and metrics of performance were compared. Results: A total of 3218 patients aged ≥70 admitted to 129 hospitals were recorded in SAMBA19. The use of frailty assessment tools was reported in 80 (62.0%) hospitals. The proportion of patients assessed for the presence of frailty in individual hospitals ranged from 2.2% to 100%. Bedded Acute Frailty Units were reported in 65 (50.3%) hospitals. There was significant variation in admission rates between hospitals. This was not explained by the presence of a frailty screening policy or of a dedicated frailty unit. Conclusion: Two fifths of participating UK hospitals did not have a routine frailty screening policy; where one existed, rates of assessment for frailty were variable and most at-risk patients were not assessed. Responses to positive results were poorly defined. The provision of acute frailty services is variable throughout the UK. Improvement is needed for the aspirations of national policy to be fully realised.

    Reward-Respecting Subtasks for Model-Based Reinforcement Learning (Abstract Reprint)

    To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress with state abstraction, but temporal abstraction has rarely been used, despite extensively developed theory based on the options framework. One reason for this is that the space of possible options is immense, and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks, such as reaching a bottleneck state or maximizing the cumulative sum of a sensory signal other than reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. In most previous work, the subtasks ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option terminates. We show that option models obtained from such reward-respecting subtasks are much more likely to be useful in planning than eigenoptions, shortest-path options based on bottleneck states, or reward-respecting options generated by the option-critic. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how values, policies, options, and models can all be learned online and off-policy using standard algorithms and general value functions.

    Reward-Respecting Subtasks for Model-Based Reinforcement Learning

    To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress in state abstraction, but, although the theory of time abstraction has been extensively developed based on the options framework, in practice options have rarely been used in planning. One reason for this is that the space of possible options is immense, and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks, such as reaching a bottleneck state or maximizing a sensory signal other than the reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. The subtasks proposed in most previous work ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option stops. We show that options and option models obtained from such reward-respecting subtasks are much more likely to be useful in planning and can be learned online and off-policy using existing learning algorithms. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how the algorithms for learning values, policies, options, and models can be unified using general value functions.
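    As a concrete illustration of the subtask construction described above, the following is a minimal sketch (our own, not the authors' code) of how a reward-respecting subtask's return could be formed: the option accumulates the original environment reward while it runs, and receives a bonus based on a feature of the state at the moment it stops. The discount, the feature value, and the bonus weight w are illustrative assumptions, not the paper's notation.

        # GAMMA, stop_feature (phi of the stopping state) and w are assumptions for illustration
        GAMMA = 0.99

        def subtask_return(rewards, stop_feature, w):
            """Discounted original reward accumulated while the option runs,
            plus a stopping bonus w * phi(s_stop) credited at termination."""
            g = 0.0
            for t, r in enumerate(rewards):
                g += (GAMMA ** t) * r
            g += (GAMMA ** len(rewards)) * (w * stop_feature)
            return g

        # Example: an option that ran for three steps under the original reward
        # and stopped in a state whose feature value is 2.0, with bonus weight 0.5.
        print(subtask_return([0.0, -1.0, 1.0], stop_feature=2.0, w=0.5))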