
    Iterated filtering methods for Markov process epidemic models

    Dynamic epidemic models have proven valuable for public health decision makers as they provide useful insights into the understanding and prevention of infectious diseases. However, inference for these types of models can be difficult because the disease spread is typically only partially observed, e.g. in the form of reported incidences in given time periods. This chapter discusses how to perform likelihood-based inference for partially observed Markov epidemic models when it is relatively easy to generate samples from the Markov transmission model while the likelihood function is intractable. The first part of the chapter reviews the theoretical background of inference for partially observed Markov processes (POMP) via iterated filtering. In the second part of the chapter the performance of the method and associated practical difficulties are illustrated with two examples. In the first example, a POMP whose underlying disease transmission model is a simple Markovian SIR model is fitted to a simulated outbreak data set consisting of the number of newly reported cases aggregated by week. The second example illustrates possible model extensions such as seasonal forcing and over-dispersion in both the transmission and observation models, which can be used, e.g., when analysing routinely collected rotavirus surveillance data. Both examples are implemented using the R package pomp (King et al., 2016) and the code is made available online.
    Comment: This manuscript is a preprint of a chapter to appear in the Handbook of Infectious Disease Data Analysis, Held, L., Hens, N., O'Neill, P.D. and Wallinga, J. (Eds.), Chapman & Hall/CRC, 2018. Please use the book for possible citations. Corrected typo in the references and modified second example.
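
    The chapter's examples are implemented in R with the pomp package; as a language-neutral sketch of the core machinery that iterated filtering runs repeatedly, the following Python code implements a bootstrap particle filter for a chain-binomial SIR model observed through binomially thinned weekly case reports. The model form, parameter values, and names here are illustrative assumptions, not taken from the chapter.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

def sir_step(S, I, beta, gamma, N):
    """One-week chain-binomial SIR transition; returns updated S, I and
    the number of new infections (the incidence)."""
    p_inf = 1.0 - np.exp(-beta * I / N)   # per-susceptible infection probability
    p_rec = 1.0 - np.exp(-gamma)          # per-infective recovery probability
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    return S - new_inf, I + new_inf - new_rec, new_inf

def pfilter_loglik(cases, beta, gamma, N=10_000, I0=10, rho=0.7, J=2_000):
    """Bootstrap particle filter estimate of the log-likelihood of weekly
    reported cases, with reporting model: reports ~ Binomial(incidence, rho)."""
    S = np.full(J, N - I0)
    I = np.full(J, I0)
    loglik = 0.0
    for y in cases:
        S, I, inc = sir_step(S, I, beta, gamma, N)  # propagate particles
        w = binom.pmf(y, inc, rho)                  # measurement weights
        if w.sum() == 0.0:                          # filtering failure
            return -np.inf
        loglik += np.log(w.mean())                  # conditional log-lik of y
        idx = rng.choice(J, size=J, p=w / w.sum())  # multinomial resampling
        S, I = S[idx], I[idx]
    return loglik

# Iterated filtering wraps this filter in a loop that perturbs (beta, gamma)
# particle-by-particle and gradually cools the perturbation variance.
print(pfilter_loglik(cases=[12, 25, 40, 31, 18], beta=1.5, gamma=1.0))
```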

    An ultra-lightweight Java interpreter for bridging CS1


    Point singularities and suprathreshold stochastic resonance in optimal coding

    Motivated by recent studies of population coding in theoretical neuroscience, we examine the optimality of a recently described form of stochastic resonance known as suprathreshold stochastic resonance, which occurs in populations of noisy threshold devices such as models of sensory neurons. Using the mutual information measure, it is shown numerically that for a random input signal the optimal threshold distribution contains singularities. For large enough noise, this distribution consists of a single point, and hence the optimal encoding is realized by the suprathreshold stochastic resonance effect. Furthermore, it is shown that a bifurcational pattern appears in the optimal threshold settings as the noise intensity increases. Fisher information is used to examine the behavior of the optimal threshold distribution as the population size approaches infinity.
    Comment: 11 pages, 3 figures, RevTeX.
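
    As a rough, self-contained illustration of the quantity being optimized, the Python sketch below estimates the mutual information between a Gaussian input and the count of firing units in a population of identical noisy threshold devices, i.e. the suprathreshold stochastic resonance configuration. The population size, threshold, and noise level are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.stats import binom, norm

def ssr_mutual_info(n_devices=15, theta=0.0, sigma_noise=0.5, n_samples=20_000):
    """Monte Carlo estimate of I(X; n) between a Gaussian input X and the
    population output n (number of devices that fire), where every device
    shares one threshold theta and device i fires iff x + eta_i > theta
    with independent Gaussian noise eta_i."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n_samples)            # samples of the input signal
    q = norm.sf(theta - x, scale=sigma_noise)     # P(device fires | x)
    counts = np.arange(n_devices + 1)
    # P(n | x) is Binomial(n_devices, q(x)); shape (n_samples, n_devices + 1)
    p_n_x = binom.pmf(counts[None, :], n_devices, q[:, None])
    p_n = p_n_x.mean(axis=0)                      # marginal output distribution
    safe = lambda p: np.log2(np.where(p > 0, p, 1.0))   # 0 * log 0 := 0
    h_n = -np.sum(p_n * safe(p_n))                # output entropy H(n)
    h_n_x = -np.mean(np.sum(p_n_x * safe(p_n_x), axis=1))  # noise entropy H(n|X)
    return h_n - h_n_x                            # I(X; n) in bits

# A single shared threshold at the signal mean: the single-point optimal
# distribution the paper finds at large enough noise.
print(f"I(X; n) = {ssr_mutual_info():.3f} bits")
```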

    A survey of childcare and work decisions among families with children


    A review of procedures to evolve quantum algorithms


    Why Are Stocks So Risky?

    With the decline in privately and publicly guaranteed benefits for pensions and health care, people increasingly must finance a greater share of their retirement expenses through their own savings. The relatively high long-term return on equity makes investments in stocks seem both an attractive and suitable means of accumulating the substantial wealth that savers will require. Yet, the 50 percent drop in the Standard & Poor’s 500 Index from May 2008 to March 2009 is only the latest reminder that stocks pose considerable risk for investors. In the past, equity returns over periods as long as 10 or 20 years have diverged substantially from their long-term averages, tarnishing the appeal of stocks even as investments for the long run...
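
    The long-horizon risk the brief points to is easy to see in a quick Monte Carlo. The sketch below assumes i.i.d. lognormal annual returns with a 7% mean log return and 18% volatility; these figures and the 20-year horizon are illustrative assumptions, not estimates from the brief. Even over 20 years, the 5th and 95th percentile annualized returns differ by more than ten percentage points under these assumptions.

```python
import numpy as np

# Assumed i.i.d. lognormal annual returns: 7% mean log return, 18% volatility.
# These numbers are illustrative, not estimates from the brief.
rng = np.random.default_rng(42)
mu, sigma, horizon, n_paths = 0.07, 0.18, 20, 100_000

log_r = rng.normal(mu, sigma, size=(n_paths, horizon))
annualized = np.expm1(log_r.sum(axis=1) / horizon)   # geometric average return

lo, hi = np.percentile(annualized, [5, 95])
print(f"20-year annualized return, 5th to 95th percentile: {lo:.1%} to {hi:.1%}")
```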

    The 2011 European short sale ban on financial stocks: a cure or a curse? [version 31 July 2013]

    Did the August 2011 European short sale bans on financial stocks accomplish their goals? In order to answer this question, we use stock options' implied volatility skews to proxy for investors' risk aversion. We find that on the ban announcement day, risk aversion levels rose for all stocks but more so for the banned financial stocks. The banned stocks' volatility skews remained elevated during the ban, while those of the unbanned stocks dropped. We show that it is the imposition of the ban itself that led to the increase in risk aversion, rather than other causes such as information flow, options trading volumes, or stock-specific factors. Substitution effects were minimal, as banned stocks' put trading volumes and put-call ratios declined during the ban. We argue that although the ban succeeded in curbing further selling pressure on financial stocks by redirecting trading activity towards index options, this result came at the cost of increased risk aversion and some degree of market failure.
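
    The paper proxies risk aversion by option-implied volatility skews. As one common way to quantify such a skew, the sketch below fits the slope of implied volatility against moneyness over the out-of-the-money put region; both the measure and the quotes are illustrative assumptions, not the paper's exact methodology or data.

```python
import numpy as np

def iv_skew(moneyness, implied_vol, low=0.80, high=1.00):
    """One common skew proxy (an assumption here, not necessarily the paper's
    exact measure): minus the least-squares slope of implied volatility
    against strike/spot moneyness over the OTM-put region."""
    m = np.asarray(moneyness)
    v = np.asarray(implied_vol)
    mask = (m >= low) & (m <= high)
    slope, _ = np.polyfit(m[mask], v[mask], deg=1)
    return -slope    # steeper OTM-put smile => larger (positive) skew

# Hypothetical quotes for one stock on one day: strike/spot vs. implied vol.
m = [0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10]
iv = [0.52, 0.48, 0.45, 0.42, 0.40, 0.39, 0.385]
print(f"volatility skew proxy: {iv_skew(m, iv):.2f}")
```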

    Applying forces to elastic network models of large biomolecules using a haptic feedback device

    Elastic network models of biomolecules have proved to be relatively good at predicting global conformational changes, particularly in large systems. Software that facilitates rapid and intuitive exploration of conformational change in elastic network models of large biomolecules in response to externally applied forces would therefore be of considerable use, particularly if the forces mimic those that arise in the interaction with a functional ligand. We have developed software that enables a user to apply forces to individual atoms of an elastic network model of a biomolecule through a haptic feedback device or a mouse. With a haptic feedback device the user feels the response to the applied force whilst seeing the biomolecule deform on the screen. Prior to the interactive session, normal mode analysis is performed, or pre-calculated normal mode eigenvalues and eigenvectors are loaded. For large molecules this reduces the memory requirements and the number of calculations by employing the idea of the important subspace: a relatively small space spanned by the M lowest-frequency normal mode eigenvectors, within which a large proportion of the total fluctuation occurs. Using this approach it was possible to study GroEL on a standard PC: even though only 2.3% of the total number of eigenvectors could be used, they accounted for 50% of the total fluctuation. User testing has shown that the haptic version allows much more rapid and intuitive exploration of the molecule than the mouse version.
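
    The linear-response calculation behind this approach is compact: diagonalize the network Hessian once, then approximate the static displacement under an applied force f by u ≈ Σᵢ (vᵢ·f / λᵢ) vᵢ over the M retained low-frequency modes. The sketch below demonstrates this on a toy one-dimensional bead-spring chain rather than a real biomolecule; it is not the authors' software.

```python
import numpy as np

# Toy stand-in for an elastic network: a 1D chain of n beads joined by unit
# springs with free ends. Its Hessian is tridiagonal.
n, M = 50, 6
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
H[0, 0] = H[-1, -1] = 1.0

evals, evecs = np.linalg.eigh(H)      # normal modes, eigenvalues ascending

f = np.zeros(n)
f[-1] = 1.0                           # externally applied force on the last bead

# Skip the zero-frequency rigid-body mode (index 0 for this chain) and keep
# the M lowest-frequency internal modes: u ~ sum_i (v_i . f / lambda_i) v_i.
keep = slice(1, 1 + M)
u_sub = evecs[:, keep] @ ((evecs[:, keep].T @ f) / evals[keep])

# Reference response using every non-rigid mode.
u_all = evecs[:, 1:] @ ((evecs[:, 1:].T @ f) / evals[1:])
print("fraction of full response captured by", M, "modes:",
      u_sub @ u_all / (u_all @ u_all))
```

    Because the response weights each mode by 1/λᵢ, the low-frequency modes dominate, which is why a small important subspace can capture most of the deformation.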

    Analysis of Spectrum Occupancy Using Machine Learning Algorithms

    In this paper, we analyze spectrum occupancy using different machine learning techniques. Both supervised techniques (naive Bayesian classifier (NBC), decision trees (DT), support vector machine (SVM), linear regression (LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to find the technique with the highest classification accuracy (CA). A detailed comparison of the supervised and unsupervised algorithms in terms of computational time and classification accuracy is performed. The classified occupancy status is further utilized to evaluate the probability of secondary user outage for future time slots, which can be used by system designers to define spectrum allocation and spectrum sharing policies. Numerical results show that SVM is the best algorithm among all the supervised and unsupervised classifiers. Based on this, we propose a new SVM algorithm combined with the firefly algorithm (FFA), which is shown to outperform all the other algorithms.
    Comment: 21 pages, 6 figures.
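
    As a minimal illustration of the supervised side of such a study, the sketch below trains an RBF-kernel SVM on synthetic per-slot energy features and tunes its hyperparameters by grid search. The synthetic data are an assumption, and grid search stands in for the paper's firefly-algorithm tuning.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for spectrum sensing data (not the paper's measurements):
# per-slot received energy, where occupied slots carry extra signal power.
n = 4000
occupied = rng.integers(0, 2, n)                     # true channel state
energy = rng.normal(loc=1.0 + 2.0 * occupied, scale=0.8, size=n)
X = np.column_stack([energy, np.roll(energy, 1)])    # current + previous slot
y = occupied

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The paper tunes the SVM with a firefly algorithm; as a simple stand-in,
# tune C and gamma by cross-validated grid search over a comparable space.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("classification accuracy:", grid.score(X_te, y_te))
```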