    Characteristics of good supervision: A multi-perspective qualitative exploration of the Masters in Public Health dissertation

    Background: A dissertation is often a core component of the Masters in Public Health (MPH) qualification. This study aims to explore its purpose, from the perspective of both students and supervisors, and to identify practices viewed as constituting good supervision. Methods: A multi-perspective qualitative study drawing on in-depth one-to-one interviews with MPH supervisors (n = 8) and students (n = 10), with data thematically analysed. Results: The MPH dissertation was viewed as providing generic as well as discipline-specific knowledge and skills. It provided an opportunity for in-depth study of a chosen topic, but different perspectives were evident as to whether the project should be grounded in public health practice rather than academia. Good supervision practice was thought to require topic knowledge, generic supervision skills (including clear communication of expectations and timely feedback), and adaptation of supervision to meet student needs. Conclusions: Two ideal types of the MPH dissertation process were identified. Supervisor-led projects focus on achieving a clearly defined output based on a supervisor-identified research question and aspire to harmonize research and teaching practice, but often have a narrower focus. Student-led projects may facilitate greater learning opportunities and better develop skills for public health practice, but could be at greater risk of course failure.

    Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3%/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models.

    Comment: 13 figures (includes supplementary information)

    Mach number distribution on blade to blade surface of a turbine stator passage

    Mach number distribution on the blade to blade surface was computed at the hub, mean, and tip sections of a stator blade using the computer program COMPBLADE. These results were used to plot iso-Mach contours on the blade to blade surface and surface velocity distribution as a function of fractional surface length. The results are presented in this report.

    Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes

    Let S be a finite, ordered alphabet, and let x = x_1 x_2 ... x_n be a string over S. A "secondary index" for x answers alphabet range queries of the form: Given a range [a_l, a_r] over S, return the set I_{[a_l;a_r]} = {i | x_i \in [a_l; a_r]}. Secondary indexes are heavily used in relational databases and scientific data analysis. It is well-known that the obvious solution, storing a dictionary for the position set associated with each character, does not always give optimal query time. In this paper we give the first theoretically optimal data structure for the secondary indexing problem. In the I/O model, the amount of data read when answering a query is within a constant factor of the minimum space needed to represent I_{[a_l;a_r]}, assuming that the size of internal memory is (|S| log n)^{delta} blocks, for some constant delta > 0. The space usage of the data structure is O(n log |S|) bits in the worst case, and we further show how to bound the size of the data structure in terms of the 0-th order entropy of x. We show how to support updates achieving various time-space trade-offs. We also consider an approximate version of the basic secondary indexing problem where a query reports a superset of I_{[a_l;a_r]} containing each element not in I_{[a_l;a_r]} with probability at most epsilon, where epsilon > 0 is the false positive probability. For this problem the amount of data that needs to be read by the query algorithm is reduced to O(|I_{[a_l;a_r]}| log(1/epsilon)) bits.

    Comment: 16 pages
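    The "obvious solution" the abstract refers to can be made concrete with a minimal Python sketch: a dictionary mapping each character to its set of positions, with a range query answered by unioning the sets of all characters falling in the range. Function names here are illustrative, not taken from the paper, and this sketch ignores the I/O-model and space-optimality concerns that are the paper's actual contribution.

    ```python
    from collections import defaultdict

    def build_index(x):
        """Map each character of x to the set of positions where it occurs."""
        index = defaultdict(set)
        for i, c in enumerate(x):
            index[c].add(i)
        return index

    def range_query(index, a_l, a_r):
        """Return I_{[a_l;a_r]} = {i | x_i in [a_l, a_r]} by unioning
        the position sets of every indexed character in the range."""
        result = set()
        for c, positions in index.items():
            if a_l <= c <= a_r:
                result |= positions
        return result

    idx = build_index("banana")
    # Positions whose character lies in ['a', 'b']: 'b' at 0, 'a' at 1, 3, 5
    print(sorted(range_query(idx, 'a', 'b')))
    ```

    Note that the query cost of this baseline depends on how many distinct characters fall in the range and how large their position sets are, which is exactly why it does not always give optimal query time.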