
    PIRLS 2011 : reading achievement in England : brief


    Bounding Embeddings of VC Classes into Maximum Classes

    One of the earliest conjectures in computational learning theory, the Sample Compression conjecture, asserts that concept classes (equivalently, set systems) admit compression schemes of size linear in their VC dimension. To date this statement is known to be true for maximum classes, those that possess maximum cardinality for their VC dimension. The most promising approach to positively resolving the conjecture is to embed general VC classes into maximum classes without a super-linear increase to their VC dimensions, as such embeddings would extend the known compression schemes to all VC classes. We show that maximum classes can be characterised by a local-connectivity property of the graph obtained by viewing the class as a cubical complex. This geometric characterisation of maximum VC classes is applied to prove a negative embedding result: there are VC-d classes that cannot be embedded in any maximum class of VC dimension lower than 2d. On the other hand, we show that every VC-d class C embeds in a VC-(d+D) maximum class, where D is the deficiency of C, i.e., the difference between the cardinalities of a maximum VC-d class and of C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best possible results on embedding into maximum classes. For some special classes of Boolean functions, relationships with maximum classes are investigated. Finally we give a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum classes for the smallest such k. (22 pages, 2 figures)
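
    For reference (a standard fact, not spelled out in the abstract itself): by the Sauer-Shelah lemma, a class of VC dimension d in the binary n-cube has cardinality at most \Phi_d(n), maximum classes are exactly those attaining this bound, and the deficiency D above is the gap to it. In the usual notation:

    \[
      |C| \;\le\; \Phi_d(n) \;=\; \sum_{i=0}^{d} \binom{n}{i},
      \qquad
      D \;=\; \Phi_d(n) - |C|,
    \]

    so the positive result above embeds C into a maximum class of VC dimension at most d + \Phi_d(n) - |C|.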

    Randomized Smoothing for Stochastic Optimization

    We analyze convergence rates of stochastic optimization procedures for non-smooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. We give several applications of our results to statistical estimation problems, and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal. (39 pages, 3 figures)
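
    To make the smoothing idea concrete, here is a minimal sketch, not the paper's algorithm (which pairs such estimates with accelerated gradient methods and variance-optimal rates): the non-smooth objective f is replaced by the Gaussian-smoothed surrogate f_mu(x) = E[f(x + mu*Z)], whose gradient is estimated by Monte Carlo and fed to a plain gradient step. Function names, step sizes, and sample counts below are illustrative assumptions, not taken from the paper.

    import numpy as np

    def smoothed_grad(f, x, mu=1e-2, num_samples=20, rng=None):
        """Monte Carlo estimate of grad f_mu(x) for the Gaussian-smoothed
        surrogate f_mu(x) = E[f(x + mu*Z)], Z ~ N(0, I), via the identity
        grad f_mu(x) = E[(f(x + mu*Z) - f(x)) / mu * Z]."""
        rng = np.random.default_rng() if rng is None else rng
        fx = f(x)
        g = np.zeros_like(x)
        for _ in range(num_samples):
            z = rng.standard_normal(x.shape)
            g += (f(x + mu * z) - fx) / mu * z
        return g / num_samples

    def minimize_nonsmooth(f, x0, steps=500, lr=0.1, mu=1e-2):
        """Plain (non-accelerated) gradient descent on the smoothed surrogate."""
        x = np.asarray(x0, dtype=float)
        for t in range(steps):
            x = x - lr / np.sqrt(t + 1) * smoothed_grad(f, x, mu)
        return x

    # Toy usage: minimize the non-smooth convex function f(x) = ||x - 1||_1.
    x_hat = minimize_nonsmooth(lambda v: np.abs(v - 1.0).sum(), x0=np.zeros(5))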

    Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization

    Relative to the large literature on upper bounds for the complexity of convex optimization, less attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.
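
    The abstract leaves the quantity being bounded implicit; in the standard stochastic-oracle model it is the minimax optimization error after T oracle queries, which (in conventional notation, not necessarily the paper's) can be written as

    \[
      \epsilon^*(T; \mathcal{F}, \mathcal{X})
        \;=\; \inf_{\mathsf{A}} \sup_{f \in \mathcal{F}}
              \mathbb{E}\Big[ f\big(x_T^{\mathsf{A}}\big) - \inf_{x \in \mathcal{X}} f(x) \Big],
    \]

    where the infimum ranges over methods A making T queries to a stochastic first-order oracle, x_T^A is the method's output, and the expectation is over the oracle's noise and any randomization in A. Lower bounds on this quantity certify that classical rates, such as the O(1/sqrt(T)) rate for Lipschitz convex classes, cannot be improved by any algorithm in the oracle model.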

    Using National Measures of Patients' Perceptions of Health Care to Design and Debrief Clinical Simulations

    This article describes an innovative approach to using national measures of patients' perspectives on quality health care. Nurses from a regional simulation consortium designed and executed a simulation using the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to prepare nurses to improve care and, in turn, enhance patients' perceptions of care. The consortium is currently revising the reporting mechanism to collect data about specific learning objectives based on national quality indicator benchmarks, specifically HCAHPS. This revision reflects the changing needs of health care to include quality metrics in simulation.

    Modular categories as representations of the 3-dimensional bordism 2-category

    We show that once-extended anomalous 3-dimensional topological quantum field theories valued in the 2-category of k-linear categories are in canonical bijection with modular tensor categories equipped with a square root of the global dimension in each factor. (71 pages)
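
    For orientation (standard terminology, not an addition to the paper's result): for a modular tensor category with simple objects X_i and quantum dimensions d_i = dim X_i, the global dimension referred to above is

    \[
      \dim(\mathcal{C}) \;=\; \sum_{i} d_i^{\,2},
    \]

    and the square root mentioned in the abstract is a choice of D with D^2 = dim(C), one such choice per factor.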

    MACiE: a database of enzyme reaction mechanisms.

    SUMMARY: MACiE (mechanism, annotation and classification in enzymes) is a publicly available web-based database, held in CMLReact (an XML application), that aims to help our understanding of the evolution of enzyme catalytic mechanisms and also to create a classification system which reflects the actual chemical mechanism (catalytic steps) of an enzyme reaction, not only the overall reaction. AVAILABILITY: http://www-mitchell.ch.cam.ac.uk/macie/. Funding: EPSRC (G.L.H. and J.B.O.M.), the BBSRC (G.J.B. and J.M.T., CASE studentship in association with Roche Products Ltd; N.M.O.B. and J.B.O.M., grant BB/C51320X/1), the Chilean Government's Ministerio de Planificación y Cooperación and the Cambridge Overseas Trust (D.E.A.), with Unilever supporting the Centre for Molecular Science Informatics.

    Experiments with Infinite-Horizon, Policy-Gradient Estimation

    In this paper, we present algorithms that perform gradient ascent of the average reward in a partially observable Markov decision process (POMDP). These algorithms are based on GPOMDP, an algorithm introduced in a companion paper (Baxter and Bartlett, this volume), which computes biased estimates of the performance gradient in POMDPs. The algorithm's chief advantages are that it uses only one free parameter, beta, which has a natural interpretation in terms of the bias-variance trade-off; that it requires no knowledge of the underlying state; and that it can be applied to infinite state, control and observation spaces. We show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent, both with a traditional stochastic-gradient algorithm and with an algorithm based on conjugate gradients that utilizes gradient information to bracket maxima in line searches. Experimental results are presented illustrating both the theoretical results of (Baxter and Bartlett, this volume) on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.
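
    As a rough illustration of how such estimates are used (a sketch under assumptions, not the paper's implementation; the conjugate-gradient line-search variant is omitted): a GPOMDP-style estimator maintains an eligibility trace discounted by beta and averages reward-weighted traces, and the result drives a plain stochastic-gradient ascent step. The environment interface (env.reset(), env.step(action) returning an observation and a reward) and the policy callbacks are hypothetical placeholders.

    import numpy as np

    def gpomdp_gradient(env, sample_action, grad_log_policy, theta,
                        beta=0.9, horizon=10_000, rng=None):
        """Biased estimate of the average-reward gradient, GPOMDP style:
        z_{t+1} = beta * z_t + grad log mu(u_t | theta, y_t), and the
        estimate is the running average of r_{t+1} * z_{t+1}."""
        rng = np.random.default_rng() if rng is None else rng
        z = np.zeros_like(theta)        # eligibility trace
        delta = np.zeros_like(theta)    # running gradient estimate
        obs = env.reset()
        for t in range(horizon):
            action = sample_action(theta, obs, rng)
            z = beta * z + grad_log_policy(theta, obs, action)
            obs, reward = env.step(action)
            delta += (reward * z - delta) / (t + 1)
        return delta

    def ascend(env, sample_action, grad_log_policy, theta0,
               step=0.1, iters=50, beta=0.9):
        """Plain stochastic-gradient ascent on the average reward
        using GPOMDP-style gradient estimates."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(iters):
            theta = theta + step * gpomdp_gradient(env, sample_action,
                                                   grad_log_policy, theta, beta=beta)
        return theta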