2,865 research outputs found

    Automated detection of hyperreflective foci in the outer nuclear layer of the retina

    PURPOSE: Hyperreflective foci are poorly understood transient elements seen on optical coherence tomography (OCT) of the retina in both healthy and diseased eyes. Systematic studies may benefit from the development of automated tools that can map and track such foci. The outer nuclear layer (ONL) of the retina is an attractive layer in which to study hyperreflective foci, as it has no fixed hyperreflective elements in healthy eyes. In this study, we aimed to evaluate whether automated image analysis can identify, quantify and visualize hyperreflective foci in the ONL of the retina. METHODS: This longitudinal exploratory study investigated 14 eyes of seven patients, including six patients with optic neuropathy and one with mild non-proliferative diabetic retinopathy. In total, 2596 OCT B-scans were obtained. An image analysis blob detector algorithm was used to detect candidate foci, and a convolutional neural network (CNN) trained on a manually labelled subset of the data was then used to select those candidate foci in the ONL that best fitted the characteristics of the reference foci. RESULTS: In the manually labelled data set, the blob detector found 2548 candidate foci, correctly detecting 350 (89%) of the 391 manually labelled reference foci. The accuracy of the CNN classifier was assessed by manually splitting the 2548 candidate foci into a training and a validation set. On the validation set, the classifier obtained an accuracy of 96.3%, a sensitivity of 88.4% and a specificity of 97.5% (AUC 0.989). CONCLUSION: This study demonstrated that automated image analysis and machine learning methods can be used to successfully identify, quantify and visualize hyperreflective foci in the ONL of the retina on OCT scans.
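The blob-detection stage described above can be sketched in a few lines. The following is a minimal illustration, not the authors' pipeline: it applies a Laplacian-of-Gaussian filter and a local-maxima search to a synthetic B-scan-like image, and every parameter (sigma, window size, threshold) is an arbitrary choice for the example.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(image, sigma=2.0, threshold=0.003):
    """Candidate bright-spot detector: Laplacian-of-Gaussian response,
    then local maxima above a threshold (an illustrative sketch, not
    the pipeline used in the study)."""
    # Bright blobs give a strong negative LoG response, so negate it.
    response = -ndimage.gaussian_laplace(image.astype(float), sigma)
    # Keep pixels that are the maximum of their 5x5 neighbourhood.
    is_max = response == ndimage.maximum_filter(response, size=5)
    return np.argwhere(is_max & (response > threshold))

# Synthetic "B-scan": dark background with one bright focus at (20, 30).
img = np.zeros((64, 64))
img[20, 30] = 1.0
img = ndimage.gaussian_filter(img, 1.5)
print(detect_blobs(img))  # a single candidate at row 20, column 30
```

In the study, candidates found at this stage were then passed to a CNN classifier, which kept only those matching the manually labelled reference foci.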

    The Allometry of Host-Pathogen Interactions

    Understanding the mechanisms that control rates of disease progression in humans and other species is an important area of research relevant to epidemiology and to translating studies in small laboratory animals to humans. Body size and metabolic rate influence a great number of biological rates and times. We hypothesize that body size and metabolic rate affect rates of pathogenesis, specifically the times between infection and first symptoms or death. We conducted a literature search to find estimates of the time from infection to first symptoms (t(S)) and to death (t(D)) for five pathogens infecting a variety of bird and mammal hosts. A broad sampling of diseases (1 bacterial, 1 prion, 3 viral) indicates that pathogenesis is controlled by the scaling of host metabolism. We find that the time for symptoms to appear is a constant fraction of the time to death in all but one disease. Our findings also predict that many population-level attributes of disease dynamics are likely to be expressed as dimensionless quantities that are independent of host body size. Our results show that much variability in host pathogenesis can be described by simple power functions consistent with the scaling of host metabolic rate. Assessing how disease progression is controlled by geometric relationships will be important for future research. To our knowledge, this is the first study to report the allometric scaling of host-pathogen interactions.
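The power functions mentioned in the results can be made concrete: an allometric relationship t_D = a * M^b becomes a straight line on log-log axes, so the exponent is recovered by linear regression. The masses and times below are fabricated purely to illustrate the fit, with a quarter-power exponent built in.

```python
import numpy as np

# Fabricated (body mass, time-to-death) pairs generated from a
# quarter-power law t_D = a * M^b with a = 5, b = 0.25; illustrative only.
mass = np.array([0.02, 0.5, 10.0, 70.0, 4000.0])   # kg
t_death = 5.0 * mass ** 0.25                        # arbitrary time units

# Fit log t_D = log a + b * log M; the slope is the allometric exponent.
b, log_a = np.polyfit(np.log(mass), np.log(t_death), 1)
print(round(b, 2), round(float(np.exp(log_a)), 2))  # → 0.25 5.0
```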

    Mendelian randomization for studying the effects of perturbing drug targets [version 2; peer review: 3 approved, 1 approved with reservations]

    Drugs whose targets have genetic evidence to support efficacy and safety are more likely to be approved after clinical development. In this paper, we provide an overview of how natural sequence variation in the genes that encode drug targets can be used in Mendelian randomization analyses to offer insight into mechanism-based efficacy and adverse effects. Large databases of summary-level genetic association data are increasingly available and can be leveraged to identify and validate variants that serve as proxies for drug target perturbation. As with all empirical research, Mendelian randomization has limitations, including genetic confounding, the fact that it estimates lifelong effects, and issues related to heterogeneity across different tissues and populations. When appropriately applied, Mendelian randomization provides a useful empirical framework for using population-level data to improve the success rates of the drug development pipeline.
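The proxy idea above is commonly operationalized with per-variant Wald ratios combined by inverse-variance weighting. A minimal sketch follows; all the summary statistics are invented for illustration and come from no real study.

```python
import numpy as np

# Invented summary statistics for three hypothetical variants in a
# drug-target gene: per-allele effects on the exposure and on the
# outcome, plus outcome standard errors.
beta_exposure = np.array([0.10, 0.15, 0.08])
beta_outcome = np.array([0.050, 0.078, 0.041])
se_outcome = np.array([0.010, 0.012, 0.009])

# Wald ratio per variant: effect on outcome per unit effect on exposure.
wald = beta_outcome / beta_exposure
# Inverse-variance weights (first-order approximation of Var(wald)).
weights = (beta_exposure / se_outcome) ** 2
ivw = float(np.sum(wald * weights) / np.sum(weights))
print(round(ivw, 3))  # pooled causal-effect estimate per unit exposure
```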

    Sizing Up Allometric Scaling Theory

    Metabolic rate, heart rate, lifespan, and many other physiological properties vary with body mass in systematic and interrelated ways. Present empirical data suggest that these scaling relationships take the form of power laws with exponents that are simple multiples of one quarter. A compelling explanation of this observation was put forward a decade ago by West, Brown, and Enquist (WBE). Their framework elucidates the link between metabolic rate and body mass by focusing on the dynamics and structure of resource distribution networks—the cardiovascular system in the case of mammals. Within this framework the WBE model is based on eight assumptions from which it derives the well-known observed scaling exponent of 3/4. In this paper we clarify that this result only holds in the limit of infinite network size (body mass) and that the actual exponent predicted by the model depends on the sizes of the organisms being studied. Failure to clarify and to explore the nature of this approximation has led to debates about the WBE model that were at cross purposes. We compute analytical expressions for the finite-size corrections to the 3/4 exponent, resulting in a spectrum of scaling exponents as a function of absolute network size. When accounting for these corrections over a size range spanning the eight orders of magnitude observed in mammals, the WBE model predicts a scaling exponent of 0.81, seemingly at odds with data. We then proceed to study the sensitivity of the scaling exponent with respect to variations in several assumptions that underlie the WBE model, always in the context of finite-size corrections. Here too, the trends we derive from the model seem at odds with trends detectable in empirical data. Our work illustrates the utility of the WBE framework in reasoning about allometric scaling, while at the same time suggesting that the current canonical model may need amendments to bring its predictions fully in line with available datasets.
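The paper's central point, that a finite-size correction shifts the exponent fitted over a finite mass range away from the asymptotic 3/4, can be illustrated with a toy model. The correction term below is made up for the example and is not the WBE expression; only the qualitative behaviour, a fitted exponent above 3/4 over a finite range, is the point.

```python
import numpy as np

# Toy metabolic law: asymptotic exponent 3/4 plus an invented
# subleading finite-size correction (NOT the WBE formula).
def metabolic_rate(M, c=2.0):
    return M ** 0.75 / (1.0 + c * M ** -0.25)

# Fit an effective exponent over a finite mass range spanning
# eight orders of magnitude, as in the mammal data.
M = np.logspace(-2, 6, 200)
slope, _ = np.polyfit(np.log(M), np.log(metabolic_rate(M)), 1)
print(round(slope, 3))  # noticeably above the asymptotic 0.75
```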

    Formulae for the Analysis of the Flavor-Tagged Decay B^0_s --> Jpsi phi

    Differential rates in the decay B^0_s --> Jpsi phi, with phi --> K^+ K^- and Jpsi --> mu^+ mu^-, are sensitive to the CP-violation phase beta_s, predicted to be very small in the standard model. The analysis of B^0_s --> Jpsi phi decays is also suitable for measuring the B^0_s lifetime, the decay width difference DeltaGamma_s between the B^0_s mass eigenstates, and the B^0_s oscillation frequency Delta m, even if appreciable CP violation does not occur. In this paper we present normalized probability densities useful in maximum likelihood fits, extended to allow for S-wave contributions on the one hand and for the effects of direct CP violation on the other. Our treatment of the S-wave contributions includes the strong variation of the S-wave/P-wave amplitude ratio with m(K^+K^-) across the phi resonance, which was not considered in previous work. We include a scheme for re-normalizing the probability densities after detector sculpting of the angular distributions of the final-state particles, and conclude with an examination of the symmetries of the rate formulae, with and without an S-wave contribution. All results are obtained with the use of a new compact formalism describing the differential decay rate of B^0_s mesons into Jpsi phi final states. Comment: 19 pages, no figures. Revised for JHE
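As a much-simplified illustration of fitting a normalized probability density by maximum likelihood (the densities in the paper also carry angular terms, two decay widths and CP-violating interference), consider estimating a single lifetime tau from exponential decay times; for this PDF the ML estimator reduces to the sample mean. The lifetime value is illustrative.

```python
import numpy as np

# Simulated proper decay times drawn from p(t|tau) = exp(-t/tau) / tau.
rng = np.random.default_rng(0)
tau_true = 1.47                     # ps, an illustrative lifetime
times = rng.exponential(tau_true, size=100_000)

# Maximizing sum(log p(t|tau)) over tau gives tau_hat = mean(t) here.
tau_hat = float(times.mean())
print(round(tau_hat, 3))  # close to 1.47
```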

    13,915 reasons for equity in sexual offences legislation: A national school-based survey in South Africa

    OBJECTIVE: Prior to 2007, forced sex with male children in South Africa did not count as rape but as "indecent assault", a much less serious offence. This study sought to document the prevalence of male sexual violence among school-going youth. DESIGN: A facilitated self-administered questionnaire in nine of the 11 official languages in a stratified (province/metro/urban/rural) last-stage random national sample. SETTING: Teams visited 5162 classes in 1191 schools in October and November 2002. PARTICIPANTS: A total of 269,705 learners aged 10–19 years in grades 6–11; of these, 126,696 were male. MAIN OUTCOME MEASURES: Schoolchildren answered questions about exposure in the last year to insults, beating, unwanted touching and forced sex. They indicated the sex of the perpetrator, and whether this was a family member, a fellow schoolchild, a teacher or another adult. Respondents also gave the age at which they first suffered forced sex and at which they first had consensual sex. RESULTS: Some 9% (weighted value based on 13915/127097) of male respondents aged 11–19 years reported forced sex in the last year. Of those aged 18 years at the time of the survey, 44% (weighted value of 5385/11450) said they had been forced to have sex in their lives and 50% reported consensual sex. Perpetrators were most frequently an adult not from the victim's own family, followed closely in frequency by other schoolchildren. Some 32% said the perpetrator was male, 41% said she was female, and 27% said they had been forced to have sex by both male and female perpetrators. Male abuse of schoolboys was more common in rural areas, while female perpetration was more an urban phenomenon. CONCLUSION: This study uncovers endemic sexual abuse of male children that was suspected but hitherto only poorly documented. Legal recognition of the criminality of rape of male children is a first step. The next steps include serious investment in supporting male victims of abuse, and in prevention of all childhood sexual abuse.

    The Hubble Constant

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc and typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods. Comment: Extensively revised and updated since the 2007 version: accepted by Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 200
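At its simplest, the object-based route described above reduces to fitting the Hubble law v = H_0 d through the origin. The distance-velocity pairs below are invented, chosen to sit near the 72-74 km/s/Mpc range the abstract quotes.

```python
import numpy as np

# Invented (distance, recession velocity) pairs for nearby objects.
d = np.array([10.0, 30.0, 60.0, 100.0, 150.0])          # Mpc
v = np.array([700.0, 2180.0, 4350.0, 7150.0, 10800.0])  # km/s

# Least-squares slope of v = H0 * d constrained through the origin.
H0 = float(np.sum(v * d) / np.sum(d * d))
print(round(H0, 1))  # → 71.9 km/s/Mpc
```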