58 research outputs found

    New Absolute Fast Converging Phylogeny Estimation Methods with Improved Scalability and Accuracy

    Absolute fast converging (AFC) phylogeny estimation methods are ones that have been proven to recover the true tree with high probability given sequences whose lengths are polynomial in the number of leaves in the tree (once the shortest and longest branch lengths are fixed). While there has been a large literature on AFC methods, the best in terms of empirical performance was DCM_NJ, published in SODA 2001. The main empirical advantage of DCM_NJ over other AFC methods is its use of neighbor joining (NJ) to construct trees on smaller taxon subsets, which are then combined into a tree on the full set of species using a supertree method; in contrast, the other AFC methods in essence depend on quartet trees that are computed independently of each other, which reduces accuracy compared to neighbor joining. However, DCM_NJ is unlikely to scale to large datasets due to its reliance on supertree methods, as no current supertree method is able to scale to large datasets with high accuracy. In this study we present a new approach to large-scale phylogeny estimation that shares some of the features of DCM_NJ but bypasses the use of supertree methods. We prove that this new approach is AFC and runs in polynomial time. Furthermore, we describe variations on this basic approach that can be used with leaf-disjoint constraint trees (computed using methods such as maximum likelihood) to produce other AFC methods that are likely to provide even better accuracy. Thus, we present a new generalizable technique for large-scale tree estimation that is designed to improve the scalability of phylogeny estimation methods to ultra-large datasets, and that can be used in a variety of settings (including tree estimation from unaligned sequences, and species tree estimation from gene trees).
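The NJ subroutine that gives DCM_NJ its empirical edge can be illustrated by its core selection step. The sketch below is not the paper's code: the distance matrix is a made-up additive example on the tree ((A,B),(C,D)), and it shows only how NJ's Q-criterion picks which pair of taxa to join first.

```python
# Illustrative sketch of one neighbor joining (NJ) selection step.
# Distances are hypothetical, chosen to be additive on the tree ((A,B),(C,D)).
from itertools import combinations

D = {
    ("A", "B"): 5, ("A", "C"): 7, ("A", "D"): 8,
    ("B", "C"): 8, ("B", "D"): 9, ("C", "D"): 9,
}

def dist(i, j):
    # symmetric lookup into the upper-triangular distance table
    return D[(i, j)] if (i, j) in D else D[(j, i)]

def nj_pair(taxa):
    """Return the pair minimizing the NJ Q-criterion:
    Q(i,j) = (n-2)*d(i,j) - r(i) - r(j), where r(i) = sum_k d(i,k)."""
    n = len(taxa)
    r = {i: sum(dist(i, k) for k in taxa if k != i) for i in taxa}
    return min(combinations(taxa, 2),
               key=lambda p: (n - 2) * dist(*p) - r[p[0]] - r[p[1]])

print(nj_pair(["A", "B", "C", "D"]))  # ('A', 'B'): a true cherry is joined first
```

Quartet-based AFC methods decide each four-taxon subtree independently, whereas NJ's row sums r(i) pool signal across all taxa, which is the accuracy advantage the abstract refers to.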

    Two-Year Injury Incidence and Movement Characteristics Among Division-I Cross-Country Athletes

    International Journal of Exercise Science 16(1): 159-171, 2023. While research on running injuries is common, there is a lack of definitive causal relationships between running injuries and gait mechanics. Additionally, there is a paucity of longitudinal research to understand the development of running injuries. The purpose of this study was to assess the incidence of running injuries and investigate movement characteristics as they relate to injury development in Division-I cross-country athletes over a two-year period. Athletes were evaluated at pre- and post-season with three-dimensional kinematic and kinetic gait analyses. A total of 17 female athletes were evaluated, though sample size varied at each time point. Self-reported injury occurrence data were collected via questionnaires, and injury reports were obtained from athletic training staff. Sixteen of the athletes reported at least one injury during the study. The percentage of participants self-reporting injury was greater than the percentage of participants who were evaluated and diagnosed by medical staff each year (year one: 67% vs. 33%; year two: 70% vs. 50%). The most common self-reported and medically confirmed injury location was the left foot, with 7 total reports out of 17 participants. Inferential statistics were not feasible due to an inherently limited sample size, thus effect size (Cohen’s ds) was used to assess differences in mechanics between athletes with and without left foot injury. Several variables, including peak ankle plantarflexion, dorsiflexion, and inversion, peak knee abduction, and hip abduction and adduction were associated with moderate-to-large effect sizes (ds > 0.50). This study demonstrates that injury rates in the literature may be influenced by reporting method. Additionally, this study provides promising information regarding movement characteristics in injured runners and demonstrates the necessity of longitudinal studies of homogeneous groups.
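The effect-size statistic named in the abstract, Cohen's ds for two independent groups, divides the mean difference by a pooled standard deviation. A minimal sketch follows; the gait values are invented for illustration and are not the study's data.

```python
# Hedged sketch of Cohen's d_s with a pooled standard deviation.
# The two groups and their values are hypothetical, not the study's data.
from statistics import mean, stdev
from math import sqrt

def cohens_ds(group1, group2):
    n1, n2 = len(group1), len(group2)
    # pooled SD weights each group's sample variance by its degrees of freedom
    pooled_sd = sqrt(((n1 - 1) * stdev(group1) ** 2 +
                      (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

injured = [10, 12, 14, 16]    # hypothetical peak ankle inversion (degrees)
uninjured = [8, 9, 11, 12]
print(round(cohens_ds(injured, uninjured), 3))  # 1.342, "large" by the ds > 0.50 cutoff
```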

    Is Body Composition or Body Mass Index Associated with the Step Count Accuracy of a Wearable Technology Device?

    Topics in Exercise Science and Kinesiology Volume 3: Issue 1, Article 5, 2022. A simple way to gauge daily physical activity levels is to use a wearable technology device to count the number of steps taken during the day. However, it is unknown whether these devices return accurate step counts for persons with different body fat percentages or body mass index scores. The purpose was to determine if there is a correlation between either body fat percentages and/or body mass index values and the percent error calculated between a manual step count and values recorded by a wearable technology device. Forty volunteers participated. The Samsung Gear 2, FitBit Surge, Polar A360, Garmin Vivosmart HR+, and the Leaf Health Tracker were evaluated when walking and jogging in free motion and treadmill conditions. All devices were worn simultaneously in randomized configurations. The mean of two manual step counters was used as the criterion measure. Walking and jogging free motion and treadmill protocols of 5-minute intervals were completed. Correlation was determined by Spearman’s rank correlation coefficient. Significance was set at p < 0.05. There were no significant correlations for body mass index vs percent error. For body fat, significant positive correlations were observed for the Samsung Gear 2 free motion walk (r = 0.321, p = 0.043), Garmin Vivosmart HR+ free motion walk (r = 0.488, p < 0.001), and the Leaf Health Tracker treadmill walk (r = 0.368, p = 0.020) and treadmill jog (r = 0.350, p = 0.027). Body fat may have a limited association with a device’s step count percent error. Lower body mechanics along with device placement may be more of a factor in step counting accuracy.

    Physically Interactive Games Increase VO2 Above Resting Metabolic Rate

    The purpose of this study was to determine the energy cost, beyond resting metabolic rate (RMR), of playing select games on the Nintendo Wii for 30 contiguous minutes. Physically interactive games (i.e. Basic Run and Basic Step) increase energy expenditure above resting values compared to a sedentary game (Tanks!) and therefore may help individuals become more active. Furthermore, Basic Run and Basic Step elicited MET values of 3.9 and 3.2, respectively, both of which are considered moderate-intensity exercise and could be used to meet daily recommendations for physical activity.
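The MET values quoted follow the standard convention that 1 MET corresponds to an oxygen uptake of 3.5 ml per kg per min. A minimal sketch; the VO2 inputs below are back-calculated illustrations, not the study's measured values.

```python
# Converting relative oxygen uptake to METs (1 MET = 3.5 ml O2 / kg / min).
# The VO2 inputs are illustrative, chosen to reproduce the quoted MET values.
def mets_from_vo2(vo2_ml_kg_min):
    return vo2_ml_kg_min / 3.5

print(mets_from_vo2(13.65))  # ~13.65 ml/kg/min -> 3.9 METs (Basic Run)
print(mets_from_vo2(11.2))   # ~11.2 ml/kg/min  -> 3.2 METs (Basic Step)
```

Three to six METs is the conventional moderate-intensity band, which is why 3.9 and 3.2 qualify toward daily activity recommendations.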

    Interactive Video Gaming: Do We Feel Like We Are Exercising?

    The primary purpose of this study was to determine if the rating of perceived exertion (RPE) and hedonics (liking or enjoyment) changed during 30 contiguous minutes of playing select, interactive video games on the Nintendo Wii system. A secondary purpose was to determine if RPE and liking differed among games. These data suggest that individuals do perceive differences in the amount of work they are performing during extended play of the same game or among sedentary and physically interactive games. Additionally, liking was similar during extended game play and among games, suggesting that the physical interaction with the game may be superseded by interest in the game. Promoting the use of physically interactive gaming may be useful in helping individuals meet their daily recommendations for physical activity owing to their enjoyment, which minimizes the perception that the activity is physically demanding.

    A Comparison of Multiple Wearable Technology Devices Heart Rate and Step Count Measurements During Free Motion and Treadmill Based Measurements

    Introduction: Wearable technology devices are used to promote physical activity. It is unknown whether different devices measure heart rate and step count consistently during walking or jogging in a free motion setting and on a treadmill. Purpose: To compare heart rate and step count values for the Samsung Gear 2, FitBit Surge, Polar A360, Garmin Vivosmart HR+, Scosche Rhythm+ and the Leaf Health Tracker in walking and jogging activities. Methods: Forty volunteers participated. Devices were worn simultaneously in randomized configurations. 5-minute intervals of walking and jogging were completed in free motion and treadmill settings with matching paces. Heart rates at minutes 3, 4, and 5 were averaged for the devices along with the criterion measure, the Polar T31 monitor. The step count criterion measure was the mean of two manual counters. A 2x6 (environment vs device) repeated measures ANOVA with Bonferroni post-hoc was performed with significance set at p < 0.05. Results: There were no significant interaction or main effects for walking heart rate. Jogging heart rate showed significant environment and device main effects. Walking step count had a significant interaction between the devices and the environment. Jogging step count had a significant device main effect. Conclusions: There may be some conditions, such as heart rate measurements taken while walking or step count measurements taken while jogging/running, that may only require treadmill-based validity testing.
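The Bonferroni post-hoc step named in the methods divides the family-wise alpha by the number of pairwise comparisons. A generic sketch; the six-device count matches the abstract, but the rest is illustrative, not the authors' analysis code.

```python
# Bonferroni-adjusted per-comparison alpha for all pairwise device contrasts.
# Generic sketch; only the 6-device count comes from the abstract.
from math import comb

def bonferroni_alpha(family_alpha, n_groups):
    n_comparisons = comb(n_groups, 2)   # all unordered pairs of devices
    return family_alpha / n_comparisons

print(round(bonferroni_alpha(0.05, 6), 5))  # 6 devices -> 15 pairs -> 0.00333
```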

    Step Count Reliability and Validity of Five Wearable Technology Devices While Walking and Jogging in both a Free Motion Setting and on a Treadmill

    Wearable technology devices are used by millions of people who use daily step counts to promote healthy lifestyles. However, the accuracy of many of these devices has not been determined. The purpose was to determine reliability and validity of the Samsung Gear 2, FitBit Surge, Polar A360, Garmin Vivosmart HR+, and the Leaf Health Tracker when walking and jogging in free motion and treadmill conditions. Forty volunteers completed walking and jogging free motion and treadmill protocols of 5-minute intervals. The devices were worn simultaneously in randomized configurations. The mean of two manual step counters was used as the criterion measure. Test-retest reliability was determined via Intraclass Correlation Coefficient (ICC). Validity was determined via a combination of Pearson’s Correlation Coefficient, mean absolute percent error (MAPE: free motion ≤ 10.0%, treadmill ≤ 5.00%), and Bland-Altman analysis (device bias and limits of agreement). Significance was set at p < 0.05. The Samsung Gear 2 was deemed to be both reliable and valid for the jogging conditions, but not walking. The Fitbit Surge was reliable and valid for all conditions except for treadmill walking (deemed reliable, ICC = 0.76; but not valid). The Polar A360 was found to be reliable for one condition (treadmill jog, ICC = 0.78), but not valid for any condition. The Garmin Vivosmart HR+ and Leaf Health Tracker were found to be both reliable and valid for all situations. While each device returned some level of consistency and accuracy during either free motion or treadmill exercises, the Garmin Vivosmart HR+ and the Leaf Health Tracker were deemed to be reliable and valid for all conditions tested.
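Two of the validity metrics named here, MAPE against the manual criterion and Bland-Altman bias with 95% limits of agreement, can be sketched as follows. The step counts are hypothetical, not the study's data.

```python
# Sketch of mean absolute percent error (MAPE) and Bland-Altman statistics.
# Step counts below are hypothetical illustrations.
from statistics import mean, stdev

def mape(device, criterion):
    # average of |device - criterion| / criterion, expressed in percent
    return mean(abs(d - c) / c * 100 for d, c in zip(device, criterion))

def bland_altman(device, criterion):
    diffs = [d - c for d, c in zip(device, criterion)]
    bias = mean(diffs)
    loa = 1.96 * stdev(diffs)           # 95% limits use the SD of differences
    return bias, bias - loa, bias + loa

manual = [620, 640, 655, 700]           # criterion: mean of two manual counters
device = [614, 652, 648, 707]
print(round(mape(device, manual), 2))   # compare against the MAPE thresholds
print(bland_altman(device, manual))     # (bias, lower LoA, upper LoA)
```

A device passing the study's free-motion cut would need MAPE ≤ 10.0% (≤ 5.00% on the treadmill) alongside acceptable bias and limits of agreement.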

    Oral versus Nasal Breathing during Moderate to High Intensity Submaximal Aerobic Exercise

    Introduction: When comparing oral breathing versus nasal breathing, a greater volume of air can be transported through the oral passageway, but nasal breathing may also have benefits at submaximal exercise intensities. Purpose: The purpose of this study was to determine breathing efficiency during increasing levels of submaximal aerobic exercise. Methods: Nineteen individuals (males N=9, females N=10) completed a test for maximal oxygen consumption (VO2max) and, on separate days, 4-min treadmill runs at increasing submaximal intensities (50%, 65%, and 80% of VO2max) under conditions of oral breathing or nasal breathing. Respiratory (respiration rate [RR], pulmonary ventilation [VE]), metabolic (oxygen consumption [VO2], carbon dioxide production [VCO2]), and efficiency measures (ventilatory equivalents for oxygen [Veq×O2-1] and carbon dioxide [Veq×CO2-1]) were obtained. Data were analyzed utilizing a 2 (sex) x 2 (condition) x 3 (intensity) repeated measures ANOVA with significance accepted at p≤0.05. Results: Significant interactions existed between breathing mode and intensity such that oral breathing resulted in greater RR, VE, VO2, and VCO2 at all three submaximal intensities (p<.05). Veq×O2-1 and Veq×CO2-1 presented findings that nasal breathing was more efficient than oral breathing during the 65% and 80% VO2max intensities (p<0.05). Conclusion: Based on this analysis, oral breathing provides greater respiratory and metabolic volumes during moderate and moderate-to-high submaximal exercise intensities, but this may not translate to greater respiratory efficiency. However, when all variables are considered together, it is likely that oral breathing represents the more efficient mode, particularly at higher exercise intensities.
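The efficiency measures here are ventilatory equivalents: minute ventilation divided by the volume of gas exchanged, so a lower value means less air moved per liter of O2 consumed (or CO2 produced). A minimal sketch; the VE and VO2 numbers are illustrative, not the study's measurements.

```python
# Ventilatory equivalent: VE / VO2 (or VE / VCO2); lower = more efficient.
# The VE and VO2 values below are hypothetical illustrations.
def ventilatory_equivalent(ve_l_min, gas_l_min):
    return ve_l_min / gas_l_min

# hypothetical 65% VO2max bout: similar VO2, but nasal breathing moves less air
print(ventilatory_equivalent(60.0, 2.5))  # oral:  VE/VO2 = 24.0
print(ventilatory_equivalent(55.0, 2.5))  # nasal: VE/VO2 = 22.0
```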

    Chronotype and Social Jetlag Influence Performance and Injury during Reserve Officers’ Training Corps Physical Training

    Sleep and circadian rhythms are critically important for optimal physical performance and maintaining health during training. Chronotype and altered sleep may modulate the response to exercise training, especially when performed at specific times/days, which may contribute to musculoskeletal injury. The purpose of this study was to determine if cadet characteristics (chronotype, sleep duration, and social jetlag) were associated with injury incidence and inflammation during physical training. Reserve Officers’ Training Corps (ROTC) cadets (n = 42) completed the Morningness/Eveningness Questionnaire to determine chronotype, and 1-week sleep logs to determine sleep duration and social jetlag. Salivary IL-6 was measured before and after the first and fourth exercise sessions during training. Prospective injury incidence was monitored over 14 weeks of training, and Army Physical Fitness Test scores were recorded at the conclusion. Chronotype, sleep duration, and social jetlag were assessed as independent factors impacting IL-6, injury incidence, and APFT scores using ANOVAs, chi-squared tests, and t-tests where appropriate, with significance accepted at p < 0.05. Evening chronotypes performed worse on the APFT (evening = 103.8 ± 59.8 vs. intermediate = 221.9 ± 40.3 vs. morning = 216.6 ± 43.6; p < 0.05), with no difference in injury incidence. Sleep duration did not significantly impact APFT score or injury incidence. Social jetlag was significantly higher in injured vs. uninjured cadets (2:40 ± 1:03 vs. 1:32 ± 0:55, p < 0.05). Exercise increased salivary IL-6, with no significant effects of chronotype, sleep duration, or social jetlag. Evening chronotypes and cadets with social jetlag display hampered performance during morning APFT. Social jetlag may be a behavioral biomarker for musculoskeletal injury risk, which requires further investigation.
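Social jetlag is conventionally computed as the difference between the mid-sleep point on free days and on work (here, training) days, which matches the hours:minutes values reported above. A minimal sketch, assuming that convention; the bed and wake times are examples, not cadet data.

```python
# Sketch of the conventional social-jetlag calculation: |mid-sleep on free
# days - mid-sleep on work days|. Times below are illustrative examples.
def sleep_midpoint_min(bed, wake):
    """Midpoint of sleep in minutes after midnight; inputs are 'HH:MM'."""
    to_min = lambda t: int(t[:2]) * 60 + int(t[3:])
    b, w = to_min(bed), to_min(wake)
    if w <= b:                 # sleep episode crosses midnight
        w += 24 * 60
    return ((b + w) // 2) % (24 * 60)

work = sleep_midpoint_min("23:00", "06:00")   # mid-sleep 02:30 -> 150 min
free = sleep_midpoint_min("01:00", "10:00")   # mid-sleep 05:30 -> 330 min
jetlag = abs(free - work)
print(f"social jetlag: {jetlag // 60}:{jetlag % 60:02d}")  # 3:00
```

The simple absolute difference assumes both midpoints fall on the same side of midnight; sleep-log tooling typically handles the wrap-around more carefully.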