
    Pacing profiles and tactical behaviors of elite runners

    The pacing behaviors used by elite athletes differ among individual sports, necessitating the study of sport-specific pacing profiles. Additionally, the pacing behaviors adopted by elite runners differ depending on race distance. An “all-out” strategy, characterized by initial rapid acceleration and a reduction in speed in the later stages, is observed during 100 m and 200 m events; 400 m runners also display a positive pacing pattern, characterized by a reduction in speed throughout the race. Similarly, 800 m runners typically adopt a positive pacing strategy during paced “meet” races. However, during championship races, depending on the tactical approaches used by dominant athletes, pacing can be either positive or negative (characterized by an increase in speed throughout). A U-shaped pacing strategy (characterized by a faster start and end than during the middle part of the race) is evident during world record performances at meet races in 1500 m, mile, 5000 m, and 10,000 m events. Although a parabolic J-shaped pacing profile (in which the start is faster than the middle part of the race but slower than the endspurt) can be observed during championship 1500 m races, a negative pacing strategy with microvariations of pace is adopted by 5000 m and 10,000 m runners in championship races. Major cross country and marathon championship races are characterized by a positive pacing strategy, whereas a U-shaped pacing strategy, which is the result of a fast endspurt, is adopted by 3000 m steeplechasers and half marathoners. In contrast, recent world record marathon performances have been characterized by even pacing, which emphasizes the differences between championship and meet races at distances longer than 800 m. The studies reviewed suggest further recommendations for athletes. Throughout the whole race, 800 m runners should avoid running wide on the bends. In turn, during major championship events, 1500 m, 5000 m, and 10,000 m runners should try to run close to the inside of the track as much as possible during the decisive stages of the race, when the speed is high. Staying within the leading positions during the last lap is recommended to optimize finishing position during 1500 m and 5000 m major championship races. Athletes with more modest aims than winning a medal at major championships are advised to adopt a realistic pace during the initial stages of long-distance races and stay within a pack of runners. Coaches of elite athletes should take into account the observed difference between pacing profiles adopted in meet races and those used in championship races: fast times achieved during races with the help of 1 or more pacemakers are not necessarily replicated in winner-takes-all championship races, where pace varies substantially. Although existing studies examining pacing characteristics in elite runners through an observational approach provide highly ecologically valid performance data, they provide little information about the underpinning mechanisms that explain the behaviors shown. Therefore, further research is needed in order to make a meaningful impact on the discipline. Researchers should design and conduct interventions that enable athletes to carefully choose strategies that are not influenced by poor decisions made by other competitors, allowing these athletes to develop more optimal and successful behaviors.
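    The profile labels used above (positive, negative, even, U-shaped) can be made concrete with a small illustration. The sketch below is not from the paper; it simply assigns a crude label to a race given its per-segment speeds, following the definitions quoted in the abstract.

        def pacing_profile(speeds):
            """Crudely label a pacing profile from a list of segment speeds
            (e.g. lap speeds in m/s); assumes at least three segments."""
            n = len(speeds)
            start, end = speeds[0], speeds[-1]
            middle = sum(speeds[1:n - 1]) / (n - 2)
            if start > middle and end > middle:
                # fast start and fast endspurt relative to the middle of the race
                return "U-shaped"
            if start > end:
                return "positive"   # slowing down through the race
            if end > start:
                return "negative"   # speeding up through the race
            return "even"

        print(pacing_profile([7.9, 7.5, 7.4, 7.8]))  # -> U-shaped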

    Science Concierge: A fast content-based recommendation system for scientific publications

    Finding relevant publications is important for scientists who have to cope with an exponentially increasing volume of scholarly material. Algorithms can help with this task, as they do for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. The design principles are to adapt to new content, provide near-real-time suggestions, and be open source. We tested the library on 15K posters from the Society for Neuroscience Conference 2015. Human-curated topics are used to cross-validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.
    Comment: 12 pages, 5 figures
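    As a rough illustration of the kind of content-based pipeline described above (the actual Science Concierge library may differ in its components and parameters), the following sketch uses scikit-learn to build TF-IDF vectors, reduce them with truncated SVD, and retrieve nearest neighbours by cosine distance; the toy corpus and the recommend function are illustrative assumptions, not the library's API.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.neighbors import NearestNeighbors

        # Toy corpus standing in for poster abstracts.
        documents = [
            "hippocampal place cells and spatial memory",
            "grid cells in entorhinal cortex during navigation",
            "dopamine reward prediction error signals",
            "convolutional networks for visual object recognition",
        ]

        # Represent each document as a TF-IDF vector, then project to a low-dimensional space.
        tfidf = TfidfVectorizer(stop_words="english", sublinear_tf=True)
        vectors = TruncatedSVD(n_components=2).fit_transform(tfidf.fit_transform(documents))

        # Index the reduced vectors for nearest-neighbour lookup by cosine distance.
        index = NearestNeighbors(metric="cosine").fit(vectors)

        def recommend(doc_id, k=2):
            """Return the indices of the k documents most similar to doc_id."""
            _, neighbours = index.kneighbors(vectors[doc_id:doc_id + 1],
                                             n_neighbors=min(k + 1, len(documents)))
            return [i for i in neighbours[0] if i != doc_id][:k]

        print(recommend(0))  # e.g. [1, 2] -- the other neuroscience posters rank first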

    Analysis of Different Types of Regret in Continuous Noisy Optimization

    The performance measure of an algorithm is a crucial part of its analysis. Performance can be assessed by studying the convergence rate of the algorithm in question: it is necessary to study some (hopefully convergent) sequence that measures how good the approximated optimum is compared to the real optimum. The concept of regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regret. We discuss here two types of approximations of Simple Regret used in practice for the evaluation of algorithms for noisy optimization. We use specific algorithms of different nature and the noisy sphere function to show the following results. The approximation of Simple Regret, termed here Approximate Simple Regret and used in some optimization testbeds, fails to estimate the convergence rate of the Simple Regret. We also discuss a recent new approximation of Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages.
    Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States. 201
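    To make the distinction concrete (using the noisy sphere function mentioned in the abstract; the exact definitions and experimental protocol in the paper may differ), the sketch below contrasts the Simple Regret of a recommended point, computed from the noise-free objective, with an Approximate Simple Regret computed from a single noisy evaluation, which inherits the evaluation noise and can therefore misrepresent the true convergence rate.

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):
            """Noise-free sphere objective; the optimal value is 0, attained at x = 0."""
            return float(np.sum(x ** 2))

        def noisy_sphere(x, sigma=1.0):
            """Sphere objective corrupted by additive Gaussian noise."""
            return sphere(x) + float(rng.normal(0.0, sigma))

        # Point recommended by some noisy optimizer after exhausting its budget
        # (here just a stand-in close to the optimum).
        x_hat = rng.normal(0.0, 0.1, size=5)

        # Simple Regret: noise-free value at the recommendation minus the optimal value (0).
        simple_regret = sphere(x_hat)

        # Approximate Simple Regret: the same quantity estimated from one noisy
        # evaluation; its error is of the order of the noise standard deviation,
        # so for points already close to the optimum it says little about convergence.
        approx_simple_regret = noisy_sphere(x_hat)

        print(simple_regret, approx_simple_regret)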

    The SECURE collaboration model

    The SECURE project has shown how trust can be made computationally tractable while retaining a reasonable connection with human and social notions of trust. SECURE has produced a well-founded theory of trust that has been tested and refined through use in real software such as collaborative spam filtering and an electronic purse. The software comprises the SECURE kernel with extensions for policy specification by application developers. It has yet to be applied to large-scale, multi-domain distributed systems taking different application contexts into account. The project has not considered privacy in evidence distribution, a crucial issue for many application domains, including public services such as healthcare and the police. The SECURE collaboration model has similarities with the trust domain concept, embodying the interaction set of a principal, but SECURE is primarily concerned with pseudonymous entities rather than domain-structured systems.

    Matrix Completion on Graphs

    The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in recent years. Although the problem under the standard low-rank assumption is NP-hard, Candès and Recht showed that it can be recovered exactly via a convex relaxation when the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming that they form communities. This assumption makes sense in several real-world problems, such as recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.
    Comment: Version of NIPS 2014 workshop "Out of the Box: Robustness in High Dimension"
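    A minimal sketch of a graph-regularized completion of this kind is given below. It is not the paper's iterative scheme, just a plain proximal-gradient loop that combines nuclear-norm shrinkage (for low rank) with row- and column-graph Laplacian smoothness terms; the weights gamma_n, gamma_r, gamma_c and the step size are arbitrary illustrative choices.

        import numpy as np

        def svt(X, tau):
            """Singular value thresholding: proximal operator of tau * nuclear norm."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def complete_on_graphs(M, mask, Lr, Lc,
                               gamma_n=1.0, gamma_r=0.1, gamma_c=0.1,
                               step=0.01, iters=500):
            """Proximal gradient for
               min_X ||X||_* + gamma_n/2 * ||mask * (X - M)||_F^2
                             + gamma_r/2 * tr(X^T Lr X) + gamma_c/2 * tr(X Lc X^T),
            where mask is a 0/1 matrix of observed entries and Lr, Lc are the
            (symmetric) Laplacians of the row and column graphs."""
            X = np.zeros_like(M)
            for _ in range(iters):
                grad = (gamma_n * mask * (X - M)   # data fidelity on observed entries
                        + gamma_r * Lr @ X         # smoothness on the row graph
                        + gamma_c * X @ Lc)        # smoothness on the column graph
                X = svt(X - step * grad, step)     # gradient step, then nuclear-norm prox
            return X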