
    Driving Markov chain Monte Carlo with a dependent random stream

    Markov chain Monte Carlo is a widely used technique for generating a dependent sequence of samples from complex distributions. Conventionally, these methods require a source of independent random variates. Most implementations use pseudo-random numbers instead, because generating truly independent variates with a physical system is not straightforward. In this paper we show how to modify some commonly used Markov chains to use a dependent stream of random numbers in place of independent uniform variates. The resulting Markov chains have the correct invariant distribution without requiring detailed knowledge of the stream's dependencies or even its marginal distribution. As a side effect, far fewer random numbers are sometimes required to obtain accurate results. Comment: 16 pages, 4 figures
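
    Purely as an illustration of the conventional setup that the paper relaxes, the sketch below implements a standard random-walk Metropolis chain whose accept/reject step consumes an externally supplied stream of uniform variates. The Gaussian target, proposal scale, and pseudo-random streams are assumptions for illustration; the paper's contribution is to modify chains like this so the uniform stream need not be independent.

    ```python
    import numpy as np

    def metropolis_chain(log_target, x0, uniform_stream, normal_stream, scale=1.0):
        """Random-walk Metropolis driven by externally supplied variate streams.

        Conventionally both streams must be i.i.d.; the paper shows how to
        modify chains like this one to tolerate a dependent uniform stream.
        """
        x = x0
        samples = [x]
        for u, z in zip(uniform_stream, normal_stream):
            proposal = x + scale * z                   # symmetric random-walk proposal
            log_accept = log_target(proposal) - log_target(x)
            if np.log(u) < log_accept:                 # accept/reject using a stream uniform
                x = proposal
            samples.append(x)
        return np.array(samples)

    # Example: sample a standard normal target from pseudo-random streams.
    rng = np.random.default_rng(0)
    n = 10_000
    chain = metropolis_chain(lambda x: -0.5 * x**2, 0.0,
                             rng.uniform(size=n), rng.standard_normal(n))
    print(chain.mean(), chain.std())
    ```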

    On some geometric optimization problems.

    An optimization problem is a computational problem in which the objective is to find the best of all possible solutions. A geometric optimization problem is an optimization problem induced by a collection of geometric objects. In this thesis we study two interesting geometric optimization problems. The first is the all-farthest-segments problem: given n points in the plane, report for each point the segment determined by two other points that is farthest from it. The principal motive for studying this problem was to investigate whether it could be solved with a worst-case time complexity of lower order than O(n^2), the time taken by the solution of Duffy et al. (13) for the all-closest version of the same problem. If h is the number of points on the convex hull of the point set, we show how to do this in O(nh + n log n) time. Our solution has also triggered research into the hitherto unexplored problem of determining the farthest-segment Voronoi diagram of a given set of n line segments in the plane, leading to an O(n log n) time solution for the all-farthest-segments problem (12). For the second problem, we revisit the computation of an area-optimal convex polygon stabbing a set of parallel line segments, studied earlier by Kumar et al. (30). The primary motive was to inquire whether the line of attack used for the parallel-segments version can be extended to the case where the line segments are of arbitrary orientation. We provide a correctness proof of the algorithm, which was lacking in the above-cited version. Because implementations of geometric algorithms are of great help in visualizing them, we have implemented both algorithms; trial versions are available at www.davinci.newcs.uwindsor.ca/~asishm.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .C438. Source: Masters Abstracts International, Volume: 45-01, page: 0349. Thesis (M.Sc.)--University of Windsor (Canada), 2006
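
    To make the problem statement concrete, here is a minimal brute-force sketch of the all-farthest-segments problem; it runs in O(n^3) time and only pins down the definition, not the O(nh + n log n) algorithm of the thesis. The distance helper and sample points are illustrative assumptions.

    ```python
    import itertools
    import math

    def point_segment_distance(p, a, b):
        """Euclidean distance from point p to the segment ab."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)        # degenerate segment
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))                      # clamp projection to the segment
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def all_farthest_segments(points):
        """For each point, the farthest segment determined by two other points."""
        result = {}
        for p in points:
            others = [q for q in points if q != p]
            result[p] = max(itertools.combinations(others, 2),
                            key=lambda seg: point_segment_distance(p, *seg))
        return result

    pts = [(0, 0), (4, 0), (0, 3), (2, 2)]
    for p, seg in all_farthest_segments(pts).items():
        print(p, "->", seg)
    ```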

    Voting: What Has Changed, What Hasn't, & Why: Research Bibliography

    Since the origins of the Caltech/MIT Voting Technology Project in the fall of 2000, there has been an explosion of research and analysis on election administration and voting technology. As we worked throughout 2012 on our most recent study, Voting: What Has Changed, What Hasn’t, & What Needs Improvement, we found many more research studies. In this research bibliography, we present the research literature that we have found; future revisions of this research bibliography will update this list. Carnegie Corporation of New York

    Equity Prices Under Bayesian Doubt About Macroeconomic Fundamentals

    I present a consumption-based explanation of a number of phenomena in the aggregate equity market. The model invokes the recursive utility function of Epstein and Zin (1989), configured with plausible parameters: an average coefficient of aversion to late resolution of uncertainty of about 22, and an elasticity of intertemporal substitution of 1.5. The endowment process, statistically hard to discriminate from the ubiquitous model of real consumption growth in fewer than 80 years of data, is specified as being subject to sporadic large shocks and incessant small shocks. The large infrequent shocks, modelled by means of a four-state hidden Markov chain, display interesting macroeconomic regularities, occurring at both business-cycle and lower frequencies. Although the levels of endowments are observable, the source of their variation cannot be detected perfectly, confronting investors with a complex signal-extraction problem. The associated posterior probabilities provide a natural link between observed asset-value fluctuations and economic uncertainty within the rational Bayesian learning framework. Although computationally arduous, having to be solved on a high-performance computing machine in a low-level language, the model is able to account for (i) the observed magnitude of the equity premium, (ii) the low and stable risk-free rate, (iii) the magnitude and countercyclicality of risk prices, (iv) the average levels and procyclicality of price-dividend and wealth-consumption ratios, (v) the long-horizon predictability of risk premia, and (vi) the overreaction of the price-dividend ratio to bad news in good times, all within a conceptually simple representative-agent framework.
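
    As a minimal sketch of the Bayesian signal extraction described above, the code below forward-filters the posterior state probabilities of a hidden Markov chain with Gaussian observation densities. The four states, transition matrix, and parameters are illustrative assumptions, not the paper's calibration.

    ```python
    import numpy as np
    from scipy.stats import norm

    def filter_posteriors(growth, P, mus, sigmas, prior):
        """Forward-filter posterior state probabilities of a hidden Markov chain."""
        pi = prior
        posteriors = []
        for g in growth:
            pi = pi @ P                                # predict: propagate the chain one step
            like = norm.pdf(g, loc=mus, scale=sigmas)  # per-state likelihood of the observation
            pi = pi * like
            pi = pi / pi.sum()                         # update: normalized posterior
            posteriors.append(pi)
        return np.array(posteriors)

    # Illustrative four-state example; all numbers are assumptions.
    P = np.full((4, 4), 0.02) + np.eye(4) * 0.92       # persistent regimes, rows sum to 1
    mus = np.array([-0.02, 0.0, 0.02, 0.04])           # state-dependent mean growth
    sigmas = np.full(4, 0.01)
    post = filter_posteriors([0.021, 0.019, -0.015], P, mus, sigmas, np.full(4, 0.25))
    print(post[-1])                                    # posterior over the hidden states
    ```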

    Structural gray matter features and behavioral preliterate skills predict future literacy – A machine learning approach

    When children learn to read, their neural system undergoes major changes to become responsive to print. There seem to be nuanced interindividual differences in the neurostructural anatomy of regions that later become integral parts of the reading network. These differences might affect literacy acquisition and, in some cases, might result in developmental disorders like dyslexia. Consequently, the main objective of this longitudinal study was to investigate those interindividual differences in gray matter morphology that might facilitate or hamper future reading acquisition. We used a machine learning approach to examine to what extent gray matter macrostructural features and cognitive-linguistic skills measured before formal literacy teaching could predict literacy 2 years later. Forty-two native German-speaking children underwent T1-weighted magnetic resonance imaging and psychometric testing at the end of kindergarten. They were tested again 2 years later to assess their literacy skills. A leave-one-out cross-validated machine-learning regression approach was applied to identify the best predictors of future literacy based on cognitive-linguistic preliterate behavioral skills and cortical measures in a priori selected areas of the future reading network. Future literacy was predicted with surprisingly high accuracy, predominantly based on gray matter volume in the left occipito-temporal cortex and local gyrification in the left insular, inferior frontal, and supramarginal gyri. Furthermore, phonological awareness significantly predicted future literacy. In sum, the results indicate that the brain morphology of the large-scale reading network at a preliterate age can predict how well children learn to read.
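
    As a minimal sketch of a leave-one-out cross-validated regression of the kind described above, using scikit-learn; the synthetic data, feature count, and ridge regressor are assumptions for illustration, not the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Toy stand-ins for preliterate predictors (e.g., gray matter volume,
    # gyrification, phonological awareness) and a literacy outcome 2 years later.
    rng = np.random.default_rng(42)
    n_children = 42
    X = rng.normal(size=(n_children, 3))
    y = X @ np.array([0.6, 0.3, 0.5]) + rng.normal(scale=0.5, size=n_children)

    # Leave-one-out CV: each child is predicted by a model fit on the other 41.
    pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
    print("LOO-CV correlation:", np.corrcoef(pred, y)[0, 1])
    ```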

    Nonlinear Analysis of Surface EMG Signals


    Artificial Intelligence and Discrimination in Health Care

    Artificial intelligence (AI) holds great promise for improved health-care outcomes. It has been used to analyze tumor images, to help doctors choose among different treatment options, and to combat the COVID-19 pandemic. But AI also poses substantial new hazards. This Article focuses on a particular type of health-care harm that has thus far evaded significant legal scrutiny: algorithmic discrimination. Algorithmic discrimination in health care occurs with surprising frequency. A well-known example is an algorithm used to identify candidates for “high risk care management” programs that routinely failed to refer racial minorities for these beneficial services. Furthermore, some algorithms deliberately adjust for race in ways that hurt minority patients. For example, according to a 2020 New England Journal of Medicine article, algorithms have regularly underestimated African Americans’ risks of kidney stones, death from heart failure, and other medical problems.