A prospective analysis of the injury incidence of young male professional football players on artificial turf
Background: The effect of synthetic surfaces on injury risk is still debated in the literature, and much of the published data appear contradictory. For these reasons, understanding injury incidence on such surfaces, especially in youth sport, is fundamental for injury prevention. Objectives: The aim of this study was to prospectively report the epidemiology of injuries in young football players playing on artificial turf during one sports season. Patients and Methods: 80 young male football players (age 16.1 ± 3.7 years; height 174 ± 6.6 cm; weight 64.2 ± 6.3 kg) were enrolled in a prospective cohort study. The participants were divided into two groups: the first included players aged 17 to 19 (OP), whereas the second included players aged 13 to 16 (YP). Injury incidence was recorded prospectively, according to the consensus statement for soccer. Results: A total of 107 injuries (35 in the OP and 72 in the YP) were recorded during an exposure time of 83,760 hours (incidence 1.28/1000 player-hours); 22 occurred during matches (incidence 2.84/1000 player-hours, 20.5%) and 85 during training (incidence 1.15/1000 player-hours, 79.5%). The thigh and groin were the most common injury locations (33.6% and 21.5%, respectively), while muscle injuries such as contractures and strains were the most common injury types (68.23%). No statistical differences between groups were found, except for the rate of severe injuries during matches, with the OP displaying slightly higher rates than the YP. Severe injuries accounted for 10.28% of the total injuries reported. The average time lost due to injury was 14 days. Re-injuries accounted for 4.67% of all injuries sustained during the season. Conclusions: In professional youth soccer, injury rates are reasonably low. Muscle injuries are the most common type of injury, while the groin and thigh are the most common locations. Artificial turf pitches do not seem to contribute to injury incidence in young football players.
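As a quick check of the arithmetic behind the headline figure, the minimal sketch below recomputes the overall incidence from the reported 107 injuries and 83,760 player-hours; the match- and training-specific exposure hours are not given in the abstract, so only the overall rate can be verified from this text alone.

```python
# Recompute the overall injury incidence reported in the abstract.
# Only the total exposure (83,760 player-hours) is given, so the
# match- and training-specific rates cannot be checked here.

total_injuries = 107
total_exposure_hours = 83_760  # total player-hours over the season

incidence_per_1000h = total_injuries / total_exposure_hours * 1000
print(f"Overall incidence: {incidence_per_1000h:.2f} injuries per 1000 player-hours")
# -> approximately 1.28, matching the reported value
```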
Visualizing 1D Regression
Regression is the study of the conditional distribution of the response y given the predictors x. In a 1D regression, y is independent of x given a single linear combination β^T x of the predictors. Special cases of 1D regression include multiple linear regression, binary regression and generalized linear models. If a good estimate b̂ of some non-zero multiple cβ of β can be constructed, then the 1D regression can be visualized with a scatterplot of b̂^T x versus y. A resistant method for estimating cβ is presented along with applications.
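The paper proposes a specific resistant estimator; the sketch below uses ordinary least squares purely as a stand-in (an assumption, not the paper's method) to show what the b̂^T x versus y visualization looks like on simulated 1D-regression data.

```python
# Minimal sketch of the 1D-regression visualization described above:
# estimate a direction b_hat and plot the single index b_hat^T x against y.
# OLS is used here only as a stand-in estimator; the paper itself proposes
# a resistant method for estimating a non-zero multiple of beta.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
# y depends on x only through the single index beta^T x
y = np.exp(0.3 * (X @ beta)) + rng.normal(scale=0.5, size=n)

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # stand-in estimate of c*beta
plt.scatter(X @ b_hat, y, s=10)
plt.xlabel(r"$\hat{b}^T x$")
plt.ylabel("y")
plt.title("Sufficient summary plot for a 1D regression")
plt.show()
```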
Novel hypophysiotropic AgRP2 neurons and pineal cells revealed by BAC transgenesis in zebrafish
The neuropeptide agouti-related protein (AgRP) is expressed in the arcuate nucleus of the mammalian hypothalamus and plays a key role in regulating food consumption and energy homeostasis. Fish express two agrp genes in the brain: agrp1, considered functionally homologous with the mammalian AgRP, and agrp2. The role of agrp2 and its relationship to agrp1 are not fully understood. Utilizing BAC transgenesis, we generated transgenic zebrafish in which agrp1- and agrp2-expressing cells can be visualized and manipulated. By characterizing these transgenic lines, we showed that agrp1-expressing neurons are located in the ventral periventricular hypothalamus (the equivalent of the mammalian arcuate nucleus), projecting throughout the hypothalamus and towards the preoptic area. The agrp2 gene was expressed in the pineal gland in a previously uncharacterized subgroup of cells. Additionally, agrp2 was expressed in a small group of neurons in the preoptic area that project directly towards the pituitary and form an interface with the pituitary vasculature, suggesting that preoptic AgRP2 neurons are hypophysiotropic. We showed that direct synaptic connections can exist between AgRP1 and AgRP2 neurons in the hypothalamus, suggesting communication and coordination between AgRP1 and AgRP2 neurons and, therefore, probably also between the processes they regulate.
The inevitable QSAR renaissance
QSAR approaches, including recent advances in 3D-QSAR, are advantageous during the lead-optimization phase of drug discovery and are complementary to bioinformatics and growing data accessibility. Hints for future QSAR practitioners are also offered.
The piano music of Sterndale Bennett in the context of nineteenth-century pianism : a practice-based interpretive study with critical commentary
Sterndale Bennett (1816-75) made a significant contribution to piano music and pianism in London during the nineteenth century, as evidenced by his substantial work list (see Appendix A). The aim of this thesis is to show how a knowledge of the performance practices of his time and of his own approach to style and interpretation can illuminate the performance of this repertoire. A secondary aim is to set this study within a clear historical framework and hence to make a strong connection between contextual and textual studies. An examination of his piano music and contemporary accounts of his piano playing reveal a conservative approach compared with that of other performers. The picture is amplified by an account of practices described in nineteenth-century writings on performance and of the differences between English and Viennese pianos. In the recordings, music by Sterndale Bennett is juxtaposed with music by selected predecessors and contemporaries, not only to show how his music relates to the nineteenth-century continuum, but also to present in sharp relief his special stylistic qualities. Some of the recordings reflect the work of members of the London Pianoforte School. The justification for this twentieth-century grouping is discussed in Chapter 1 in the context of London musical life and pianism in the nineteenth century, with reference to contemporary opinion-formers. The influence of Mozart and of the revival of Baroque keyboard music on Sterndale Bennett is also discussed. Publishing practices of the period are examined in Chapter 2, leading to a survey of Sterndale Bennett's sources and publications. Chapter 3 investigates approaches to nineteenth-century pianism, drawing on contemporary documents and secondary sources and comparing them with the preserved evidence we have regarding Sterndale Bennett's own stance on these matters. This process reveals, in many cases, that Sterndale Bennett represented a more scholarly and less commercial approach to piano playing than was prevalent among contemporaries such as Kalkbrenner, Thalberg and others. Finally, this study offers a paradigm for reinvigorating an historic but largely moribund repertoire by incorporating it into contemporary practice.
DPRESS: Localizing estimates of predictive uncertainty
Background: The need for a quantitative estimate of the uncertainty of prediction for QSAR models is steadily increasing, in part because such predictions are being widely distributed as tabulated values disconnected from the models used to generate them. Classical statistical theory assumes that the error in the population being modeled is independent and identically distributed (IID), but this is often not actually the case. Such inhomogeneous error (heteroskedasticity) can be addressed by providing an individualized estimate of predictive uncertainty for each particular new object u: the standard error of prediction s_u can be estimated as the non-cross-validated error s_t* for the closest object t* in the training set, adjusted for its separation d from u in the descriptor space relative to the size of the training set. [The defining equation for s_u is given only as an image in the original abstract.] The predictive uncertainty factor γ_t* is obtained by distributing the internal predictive error sum of squares across objects in the training set based on the distances between them, hence the acronym: Distributed PRedictive Error Sum of Squares (DPRESS). Note that s_t* and γ_t* are characteristic of each training-set compound contributing to the model of interest. Results: The method was applied to partial least-squares models built using 2D (molecular hologram) or 3D (molecular field) descriptors applied to mid-sized training sets (N = 75) drawn from a large (N = 304), well-characterized pool of cyclooxygenase inhibitors. The observed variation in predictive error for the external 229-compound test sets was compared with the uncertainty estimates from DPRESS. Good qualitative and quantitative agreement was seen between the distributions of predictive error observed and those predicted using DPRESS. Inclusion of the distance-dependent term was essential to getting good agreement between the estimated uncertainties and the observed distributions of predictive error. The uncertainty estimates derived by DPRESS were conservative even when the training set was biased, but not excessively so. Conclusion: DPRESS is a straightforward and powerful way to reliably estimate individual predictive uncertainties for compounds outside the training set, based on their distance to the training set and the internal predictive uncertainty associated with the nearest neighbor in that set. It represents a sample-based, a posteriori approach to defining applicability domains in terms of localized uncertainty.
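Because the exact adjustment formula appears only as an image in the source, the sketch below shows just the nearest-neighbour machinery the abstract describes: find the closest training compound t*, take its non-cross-validated error s_t* and uncertainty factor γ_t*, and combine them with the distance d through a caller-supplied adjustment function; the function names and the example additive adjustment are assumptions, not the published DPRESS equation.

```python
# Hedged sketch of a DPRESS-style per-compound uncertainty estimate.
# The abstract does not reproduce the exact adjustment formula (it is an
# image in the original), so `adjust` below is a placeholder supplied by
# the caller; only the nearest-neighbour bookkeeping is shown.
import numpy as np

def dpress_like_uncertainty(x_new, X_train, s_train, gamma_train, adjust):
    """Estimate a standard error of prediction for x_new.

    X_train     : (n, p) descriptor matrix of the training set
    s_train     : (n,) non-cross-validated errors s_t for each training object
    gamma_train : (n,) predictive uncertainty factors gamma_t
    adjust      : callable (s_t_star, gamma_t_star, d) -> s_u  (assumed form)
    """
    d_all = np.linalg.norm(X_train - x_new, axis=1)  # distances in descriptor space
    t_star = int(np.argmin(d_all))                   # closest training object t*
    return adjust(s_train[t_star], gamma_train[t_star], d_all[t_star])

# Example with a purely illustrative additive adjustment (an assumption,
# not the published DPRESS equation):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(75, 10))
    s_train = np.abs(rng.normal(scale=0.3, size=75))
    gamma_train = np.abs(rng.normal(scale=0.1, size=75))
    s_u = dpress_like_uncertainty(
        rng.normal(size=10), X_train, s_train, gamma_train,
        adjust=lambda s, g, d: s + g * d,
    )
    print(f"estimated uncertainty: {s_u:.3f}")
```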
Modeling of failure mode in knee ligaments depending on the strain rate
BACKGROUND: The failure mechanism of the knee ligament (bone-ligament-bone complex) at different strain rates is an important subject in the biomechanics of the knee. This study reviews and summarizes the literature describing ligament injury as a function of strain rate published during the last 30 years. METHODS: Three modes of injury are presented as a function of strain rate and used to analyze the published cases. In mode I, avulsions outnumber ligament tears. In mode II, there is no significant difference between the number of avulsions and ligament tears. In mode III, ligament tearing happens more frequently than avulsion. RESULTS: As the strain rate increases, the sequence of modes is mode I, II, III, I, and II. Analytical models of ligament behavior as a function of strain rate are also presented and used to provide an integrated framework for describing all of the failure regimes. In addition, this study examined the failure mechanisms for different specimens, ages, and strain rates. CONCLUSION: There have been a number of studies of ligament failure under various conditions, including widely varying strain rates. One issue in these studies is whether ligament failure occurs mid-ligament or at the bone attachment point, with assertions that this is a function of the strain rate. However, over the range of strain rates and other conditions reported, there appear to be discrepancies in the conclusions on the effect of strain rate. The analysis and model presented here provide a unifying assessment of the previous disparities, emphasizing the differential effect of strain rate on the relative strengths of the ligament and the attachment.
Evaluation of machine-learning methods for ligand-based virtual screening
Machine-learning methods can be used for virtual screening by analysing the structural characteristics of molecules of known (in)activity, and we here discuss the use of kernel discrimination and naive Bayesian classifier (NBC) methods for this purpose. We report a kernel method that allows the processing of molecules represented by binary, integer and real-valued descriptors, and show that it differs little in screening performance from a previously described kernel that had been developed specifically for the analysis of binary fingerprint representations of molecular structure. We then evaluate the performance of an NBC when the training set contains only a very few active molecules. In such cases, a simpler approach based on group fusion would appear to provide superior screening performance, especially when structurally heterogeneous datasets are to be processed.
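As a rough illustration of the naive Bayesian classifier route mentioned above (not the specific kernel or group-fusion methods evaluated in the paper), the sketch below trains a Bernoulli naive Bayes model on binary fingerprint vectors and ranks a screening set by predicted probability of activity; the random fingerprints are synthetic stand-ins for real descriptors.

```python
# Illustrative naive Bayesian classifier for ligand-based virtual screening
# on binary fingerprints. The fingerprints here are random stand-ins; in
# practice they would come from a cheminformatics toolkit (e.g. RDKit).
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(42)
n_bits = 1024

# Synthetic training data: a handful of actives and many inactives.
X_train = rng.integers(0, 2, size=(500, n_bits))
y_train = np.r_[np.ones(10, dtype=int), np.zeros(490, dtype=int)]

clf = BernoulliNB().fit(X_train, y_train)

# Rank a screening library by predicted probability of being active.
X_screen = rng.integers(0, 2, size=(10_000, n_bits))
scores = clf.predict_proba(X_screen)[:, 1]
ranked = np.argsort(scores)[::-1]
print("top 5 screening candidates:", ranked[:5])
```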
Prospective comparison of novel dark blood late gadolinium enhancement with conventional bright blood imaging for the detection of scar
BACKGROUND: Conventional bright blood late gadolinium enhancement (bright blood LGE) imaging is a routine cardiovascular magnetic resonance (CMR) technique offering excellent contrast between areas of LGE and normal myocardium. However, contrast between LGE and blood is frequently poor. Dark blood LGE (DB LGE) employs an inversion recovery T2 preparation to suppress the blood pool, thereby increasing the contrast between the endocardium and blood. The objective of this study was to compare the diagnostic utility of a novel DB phase-sensitive inversion recovery (PSIR) LGE CMR sequence with standard bright blood PSIR LGE. METHODS: One hundred seventy-two patients referred for clinical CMR were scanned. A full left ventricular short-axis stack was acquired using both techniques, varying which was performed first in a 1:1 ratio. Two experienced observers analyzed all bright blood LGE and DB LGE stacks, which were randomized and anonymized. A scoring system was devised to quantify the presence and extent of gadolinium enhancement and the confidence with which the diagnosis could be made. RESULTS: A total of 2752 LV segments were analyzed. There was very good inter-observer correlation for quantifying LGE. DB LGE analysis identified 41.5% more segments exhibiting hyperenhancement than bright blood LGE (248/2752 segments (9.0%) positive for LGE with bright blood; 351/2752 segments (12.8%) positive for LGE with DB; p < 0.05). DB LGE also allowed observers to be more confident when diagnosing LGE (high confidence in 154/248 regions (62.1%) with bright blood LGE vs. 275/324 regions (84.9%) with DB LGE; p < 0.05). Eighteen patients with no bright blood LGE were found to have DB LGE, 15 of whom had no known history of myocardial infarction. CONCLUSIONS: DB LGE significantly increases LGE detection compared to standard bright blood LGE. It also increases observer confidence, particularly for subendocardial LGE, which may have important clinical implications.
- …