42 research outputs found

    VALIDATION OF LIGHTSEY LEG POWER FORMULA

    The purpose of this study was to compare leg power estimates obtained through Lightsey's formula, which approximates leg power from jump heights measured from filmed standing vertical jump performances, with precise force-platform measures of the same performances.
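
    Lightsey's exact formula is not reproduced in the abstract, so as a minimal sketch of the general approach the code below uses the well-known Lewis formula, which estimates average leg power from body mass and vertical jump height; the function and variable names are illustrative assumptions, not taken from the study.

        # Sketch only: the Lewis formula as a stand-in for a jump-height-based
        # leg power estimate (Lightsey's own formula is not given here).
        import math

        G = 9.81  # gravitational acceleration, m/s^2

        def lewis_leg_power(mass_kg: float, jump_height_m: float) -> float:
            """Average leg power in watts: P = sqrt(4.9) * m * sqrt(h) * g."""
            return math.sqrt(4.9) * mass_kg * math.sqrt(jump_height_m) * G

        # Example: a 70 kg athlete with a 0.50 m standing vertical jump
        print(f"{lewis_leg_power(70.0, 0.50):.0f} W")  # ~1075 W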

    FLIGHT AS A MEASURE OF LEG POWER

    A basic motor ability involved in many different motor tasks is anaerobic muscular power. Defined as the ability to rapidly generate and apply large amounts of force, and thereby impart high velocity to the body, its segments, and/or external objects, this ability is involved in the successful performance of virtually all running, jumping, and throwing events for which muscle strength and speed are important.
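
    The link between flight and leg power rests on simple projectile kinematics: with takeoff and landing at the same height, half of the flight time t is spent ascending, so jump height follows as h = (1/2) g (t/2)^2 = g t^2 / 8, and power estimates build on that height. A minimal sketch, with names of my own choosing:

        # Flight-time method: h = g * t^2 / 8 when takeoff and landing heights
        # match. Function and variable names are assumptions for illustration.
        G = 9.81  # m/s^2

        def jump_height_from_flight_time(flight_time_s: float) -> float:
            """Vertical jump height (m) from total flight time (s)."""
            return G * flight_time_s**2 / 8.0

        # Example: 0.64 s of flight corresponds to roughly a 0.50 m jump
        print(f"{jump_height_from_flight_time(0.64):.2f} m")  # ~0.50 m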

    PaLM: Scaling Language Modeling with Pathways

    Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis of bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.
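
    For readers unfamiliar with the few-shot setup the abstract leans on: instead of fine-tuning, a handful of worked input-output exemplars are placed directly in the prompt and the model completes the final, unanswered item. The sentiment task and helper below are invented for illustration and are not from the paper.

        # Illustrative few-shot prompt construction (task and exemplars are
        # invented; the PaLM paper does not prescribe this exact format).
        EXEMPLARS = [
            ("I loved this movie!", "positive"),
            ("The plot was dull and predictable.", "negative"),
        ]

        def build_few_shot_prompt(query: str) -> str:
            blocks = [f"Review: {text}\nSentiment: {label}" for text, label in EXEMPLARS]
            blocks.append(f"Review: {query}\nSentiment:")  # left open for the model
            return "\n\n".join(blocks)

        print(build_few_shot_prompt("An instant classic."))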

    PaLM 2 Technical Report

    We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English, multilingual, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities, exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.

    The present and future status of heavy neutral leptons

    The existence of nonzero neutrino masses points to the likely existence of multiple Standard Model neutral fermions. When such states are heavy enough that they cannot be produced in oscillations, they are referred to as heavy neutral leptons (HNLs). In this white paper, we discuss the present experimental status of HNLs, including collider, beta decay, and accelerator searches, as well as astrophysical and cosmological impacts. We discuss the importance of continuing to search for HNLs and their potential impact on our understanding of key fundamental questions, and we outline the prospects for next-generation experiments and upcoming accelerator run scenarios.

    Developing the International Manager

    The quest for the international manager

    Being an international manager

    Stefan Wills and Kevin Barham base this article on a research report that involved interviews with around 60 senior international executives in companies from a range of countries and industries. A qualitative analysis of the findings revealed that it can be misleading to attribute their success entirely to specific behavioural competencies or skills. In addition to these, such people appear to operate from a deeper core competence which is essentially holistic in nature. It is outlined and described as three major interlinking parts: cognitive complexity, emotional energy and psychological maturity. Together they make up the essence of what it is to be a successful international manager in the complex world of global business.