
    Teaching TAs To Teach: Strategies for TA Training

    "The only thing that scales with undergrads is undergrads". As Computer Science course enrollments have grown, there has been a necessary increase in the number of undergraduate and graduate teaching assistants (TAs, and UTAs). TA duties often extend far beyond grading, including designing and leading lab or recitation sections, holding office hours and creating assignments. Though advanced students, TAs need proper pedagogical training to be the most effective in their roles. Training strategies have widely varied from no training at all, to semester-long prep courses. We will explore the challenges of TA training across both large and small departments. While much of the effort has focused on teams of undergraduates, most presenters have used the same tools and strategies with their graduate students. Training for TAs should not just include the mechanics of managing a classroom, but culturally relevant pedagogy. The panel will focus on the challenges of providing "just in time", and how we manage both intra-course training and department or campus led courses

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is largely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on keyword-value pair syntax, thus sparing the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command-line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used on single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization.
    Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, US
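    To make the keyword-value idea concrete, here is a minimal Python sketch, under assumed names and syntax that are not PaPaS's actual file format or API: it expands a small keyword-value parameter specification into the Cartesian product of concrete runs, pairing an environment variable with a command-line argument much as a parameter study would. The matrix-multiply command and its flags are illustrative assumptions.

    ```python
    # Hypothetical sketch (not PaPaS itself): expand a keyword-value parameter
    # specification into one concrete command per point in the parameter space.
    import itertools

    # Assumed parameter spec: each keyword maps to the list of values to sweep.
    param_spec = {
        "THREADS": [1, 2, 4, 8],           # e.g. exported as an environment variable
        "matrix_size": [512, 1024, 2048],  # e.g. passed as a command-line argument
    }

    # Assumed command template for an OpenMP matrix-multiply benchmark.
    command_template = "OMP_NUM_THREADS={THREADS} ./matmul --n {matrix_size}"

    def expand(spec, template):
        """Yield (parameter point, concrete command) for every combination."""
        keys = sorted(spec)
        for values in itertools.product(*(spec[k] for k in keys)):
            point = dict(zip(keys, values))
            yield point, template.format(**point)

    if __name__ == "__main__":
        for point, cmd in expand(param_spec, command_template):
            # A real framework would submit these via SSH, a batch system, or MPI;
            # this sketch only prints the generated commands.
            print(point, "->", cmd)
    ```

    The point of the sketch is only the bookkeeping: once the parameter space is declared as keyword-value pairs, enumerating, labeling, and dispatching the individual runs can be automated rather than hand-scripted.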

    Will This Paper Increase Your h-index? Scientific Impact Prediction

    Scientific impact plays a central role in the evaluation of the output of scholars, departments, and institutions. A widely used measure of scientific impact is citations, with a growing body of literature focused on predicting the number of citations obtained by any given publication. The effectiveness of such predictions, however, is fundamentally limited by the power-law distribution of citations, whereby publications with few citations are extremely common and publications with many citations are relatively rare. Given this limitation, in this work we instead address a related question asked by many academic researchers in the course of writing a paper, namely: "Will this paper increase my h-index?" Using a real academic dataset with over 1.7 million authors, 2 million papers, and 8 million citation relationships from the premier online academic service ArnetMiner, we formalize a novel scientific impact prediction problem to examine several factors that can drive a paper to increase the primary author's h-index. We find that the researcher's authority on the publication topic and the venue in which the paper is published are crucial factors in the increase of the primary author's h-index, while the topic popularity and the co-authors' h-indices are of surprisingly little relevance. By leveraging relevant factors, we find greater than 87.5% potential predictability for whether a paper will contribute to an author's h-index within five years. As a further experiment, we generate a self-prediction for this paper, estimating a 76% probability that it will contribute to the h-index of the co-author with the highest current h-index within five years. We conclude that our findings on the quantification of scientific impact can help researchers to expand their influence and more effectively leverage their position of "standing on the shoulders of giants."
    Comment: Proc. of the 8th ACM International Conference on Web Search and Data Mining (WSDM'15)
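    As a quick reference for the quantity being predicted, the short Python sketch below (a minimal illustration with made-up citation counts, unrelated to the ArnetMiner data or the paper's models) computes an author's h-index from per-paper citation counts and checks whether one additional paper with a projected citation count would raise it.

    ```python
    # Minimal sketch: the h-index is the largest h such that the author has
    # at least h papers with at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def increases_h_index(citations, projected_citations):
        """Would a new paper with `projected_citations` raise the h-index?"""
        return h_index(citations + [projected_citations]) > h_index(citations)

    if __name__ == "__main__":
        # Hypothetical citation counts for illustration only.
        existing = [10, 9, 5, 5, 3]             # h-index = 4
        print(h_index(existing))                # -> 4
        print(increases_h_index(existing, 3))   # False: too few projected citations
        print(increases_h_index(existing, 7))   # True: five papers with >= 5 citations
    ```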