Retinal blood vessels extraction using probabilistic modelling
© 2014 Kaba et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund.

The analysis of retinal blood vessels plays an important role in detecting and treating retinal diseases. In this paper, we present an automated method to segment the blood vessels of fundus retinal images. The proposed method could be used to support non-intrusive diagnosis in modern ophthalmology for the early detection of retinal diseases, treatment evaluation or clinical study. The study combines bias correction and adaptive histogram equalisation to enhance the appearance of the blood vessels. The blood vessels are then extracted using probabilistic modelling optimised by the expectation maximisation algorithm. The method is evaluated on fundus retinal images from the STARE and DRIVE datasets, and the experimental results are compared with those of some recently published retinal blood vessel segmentation methods. The results show that our method achieves the best overall performance, comparable to that of human experts.

Department of Information Systems, Computing and Mathematics, Brunel University
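The EM-fitted probabilistic modelling this abstract describes can be illustrated with a minimal sketch: a two-component 1-D Gaussian mixture separating "background" from "vessel" pixel intensities, fitted by expectation maximisation. This is an invented toy illustration of the technique, not the authors' full model (which also involves bias correction and adaptive histogram equalisation).

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture (e.g. background vs
    vessel pixel intensities). Returns mixing weights, means, standard
    deviations, and posterior responsibilities of the second component."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75])            # crude initialisation
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sd, r[:, 1]
```

Thresholding the returned responsibilities (e.g. at 0.5) yields a binary vessel map in the 1-D analogue of the segmentation step.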
A proposed quantitative methodology for the evaluation of the effectiveness of Human Element, Leadership and Management (HELM) training in the UK
In 2006, a review of maritime accidents found that non-technical skills (NTSs) are the single largest contributing factor towards such incidents. NTSs are composed of both interpersonal and cognitive elements. These include skills such as situational awareness, teamwork, decision making, leadership, management and communication. In a crisis situation, good NTSs allow a deck officer to quickly recognise that a problem exists and then harness the resources at their disposal to safely and efficiently bring the situation back under control. This paper has two aims. The first is to develop a methodology which will enable educators to quantitatively assess the impact of Maritime and Coastguard Agency (MCA)-approved Human Element, Leadership and Management (HELM) training on deck officers' NTSs with a view to identifying further training requirements. The second is to determine whether the HELM training provided to develop the NTSs of trainee deck officers is fit for purpose. To achieve these aims, a three-phase approach was adopted. Initially, a taxonomy for deck officers' NTSs was established, behavioural markers were identified and the relative importance of each attribute was calculated using the analytical hierarchy process (AHP). Subsequently, a set of scenarios was identified for the assessment of deck officers' NTSs in a ship bridge simulator environment. A random selection of students that had completed the Chief Mate (CM) programme was performed, and data regarding their NTS-related performance in the scenarios was collected. Finally, the collected data was fed into the evidential reasoning (ER) algorithm, utility values were produced and, having established these values, the effectiveness of the HELM training that the students had received was evaluated.
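The AHP weighting step used in the first phase can be sketched in a few lines: attribute weights are the normalised principal eigenvector of a pairwise-comparison matrix, and a consistency ratio below 0.1 is conventionally taken as acceptable. This is a standard textbook implementation, not the authors' code; the comparison matrix in the test is invented.

```python
import numpy as np

# Saaty's random consistency index for matrices of size n
RI = {3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(pairwise):
    """Attribute weights from an AHP pairwise-comparison matrix:
    the normalised principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def consistency_ratio(pairwise):
    """Saaty consistency ratio: CI / RI, where CI = (lambda_max - n)/(n - 1).
    Values below ~0.1 indicate acceptably consistent judgements."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    return ((lam - n) / (n - 1)) / RI[n]
```

With a 3x3 matrix saying attribute 1 is moderately more important than 2 and strongly more important than 3, the weights come out ordered accordingly.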
A frequentist framework of inductive reasoning
Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to a one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity.

Comment: major revision
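The equality between confidence-measure probabilities and interval coverage rates that the abstract relies on can be checked by simulation. A minimal sketch, assuming a normal mean with known variance (the z-interval), so the nominal level should match the long-run coverage:

```python
import numpy as np

def coverage_rate(mu=0.0, sigma=1.0, n=25, reps=2000, seed=1):
    """Monte-Carlo estimate of the coverage of the nominal 95% z-interval
    for a normal mean with known sigma. The frequentist property encoded
    in a confidence measure is that this rate equals the nominal level."""
    rng = np.random.default_rng(seed)
    z = 1.959963984540054                 # 97.5% standard-normal quantile
    half = z * sigma / np.sqrt(n)         # fixed half-width (known sigma)
    hits = 0
    for _ in range(reps):
        xbar = rng.normal(mu, sigma, n).mean()
        hits += (xbar - half <= mu <= xbar + half)
    return hits / reps
```

The simulated rate lands near 0.95, as the coverage-matching property requires.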
Contextualizing students' alcohol use perceptions and practices within French culture: an analysis of gender and drinking among sport-science college students
Although research has examined alcohol consumption and sport in a variety of contexts, there is a paucity of research on gender and gender dynamics among French college students. The present study addresses this gap in the literature by examining alcohol use practices by men and women among a non-probability sample of French sport science students from five different universities in Northern France. We utilized both survey data (N = 534) and in-depth qualitative interviews (n = 16) to provide empirical and theoretical insight into a relatively ubiquitous health concern: the culture of intoxication. Qualitative data were based on students' perceptions of their own alcohol use; analyses were framed by theoretical conceptions of gender. Survey results indicate gender differences in alcohol consumption wherein men reported a substantially higher frequency and quantity of alcohol use compared to their female peers. Qualitative findings confirm that male privilege and women's concern for safety, masculine embodiment via alcohol use, gendering of alcohol type, and gender conformity pressures shape gender disparities in alcohol use behavior. Our findings also suggest that health education policy and educational programs focused on alcohol-related health risks need to be designed to take into account gender category and gender orientation.
Automatic Robust Neurite Detection and Morphological Analysis of Neuronal Cell Cultures in High-content Screening
Cell-based high content screening (HCS) is becoming an important and increasingly favored approach in therapeutic drug discovery and functional genomics. In HCS, changes in cellular morphology and biomarker distributions provide an information-rich profile of cellular responses to experimental treatments such as small molecules or gene knockdown probes. One obstacle that currently exists with such cell-based assays is the availability of image processing algorithms that are capable of reliably and automatically analyzing large HCS image sets. HCS images of primary neuronal cell cultures are particularly challenging to analyze due to complex cellular morphology.
Here we present a robust method for quantifying and statistically analyzing the morphology of neuronal cells in HCS images. The major advantages of our method over existing software lie in its capability to correct non-uniform illumination using the contrast-limited adaptive histogram equalization method; to segment neuromeres using Gabor-wavelet texture analysis; and to detect faint neurites by a novel phase-based neurite extraction algorithm that is invariant to changes in illumination and contrast and can accurately localize neurites. Our method was successfully applied to analyze a large HCS image set generated in a morphology screen for polyglutamine-mediated neuronal toxicity using primary neuronal cell cultures derived from embryos of a Drosophila Huntington's Disease (HD) model.
National Institutes of Health (U.S.) (Grant
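The Gabor-wavelet texture analysis mentioned above rests on convolving the image with oriented Gabor kernels. A minimal numpy sketch of such a kernel (standard formulation with assumed parameter values; the authors' pipeline builds a full filter bank and adds phase-based neurite extraction on top):

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, psi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope modulating a cosine
    carrier of wavelength `lam`, oriented at angle `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam + psi))
```

In practice an image is convolved with a bank of such kernels at several orientations and scales, and the per-pixel responses form the texture features used for segmentation.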
Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.
BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy was still associated with fewer overall complications (OR 0.23 95% CI 0.11-0.44) and SSI (OR 0.21 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112
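The odds ratios with 95% confidence intervals reported above can be computed from a 2x2 table with the standard Woolf (log-normal) method. This sketch is a generic unadjusted calculation, not the study's adjusted models, and the counts in the example are invented:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a, b = events / non-events in the exposed group;
    c, d = events / non-events in the reference group."""
    orr = (a * d) / (b * c)
    # standard error of log(OR) by the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return orr, orr * math.exp(-z * se), orr * math.exp(z * se)
```

For hypothetical counts of 10 SSIs in 100 exposed patients versus 5 in 100 controls, the point estimate is (10x95)/(90x5) ≈ 2.11.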
Genotype calling in tetraploid species from bi-allelic marker data using mixture models
<p>Abstract</p> <p>Background</p> <p>Automated genotype calling in tetraploid species was until recently not possible, which hampered genetic analysis. Modern genotyping assays often produce two signals, one for each allele of a bi-allelic marker. While ample software is available to obtain genotypes (homozygous for either allele, or heterozygous) for diploid species from these signals, such software is not available for tetraploid species which may be scored as five alternative genotypes (aaaa, baaa, bbaa, bbba and bbbb; nulliplex to quadruplex).</p> <p>Results</p> <p>We present a novel algorithm, implemented in the R package fitTetra, to assign genotypes for bi-allelic markers to tetraploid samples from genotyping assays that produce intensity signals for both alleles. The algorithm is based on the fitting of several mixture models with five components, one for each of the five possible genotypes. The models have different numbers of parameters specifying the relation between the five component means, and some of them impose a constraint on the mixing proportions to conform to Hardy-Weinberg equilibrium (HWE) ratios. The software rejects markers that do not allow a reliable genotyping for the majority of the samples, and it assigns a missing score to samples that cannot be scored into one of the five possible genotypes with sufficient confidence.</p> <p>Conclusions</p> <p>We have validated the software with data of a collection of 224 potato varieties assayed with an Illumina GoldenGate™ 384 SNP array and shown that all SNPs with informative ratio distributions are fitted. Almost all fitted models appear to be correct based on visual inspection and comparison with diploid samples. When the collection of potato varieties is analyzed as if it were a population, almost all markers seem to be in Hardy-Weinberg equilibrium. 
The R package fitTetra is freely available under the GNU Public License from <url>http://www.plantbreeding.wur.nl/UK/software_fitTetra.html</url> and as Additional files with this article.</p>
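The core idea of the five-component mixture fit can be sketched as a toy caller on the allele-signal ratio b/(a+b). This is an invented illustration with free component means; fitTetra itself fits several models relating the component means and can constrain the mixing proportions to HWE ratios, neither of which is done here.

```python
import numpy as np

def call_tetraploid(ratio, iters=50, min_post=0.99):
    """Toy tetraploid genotype caller: fit a 5-component 1-D Gaussian
    mixture to the allele-signal ratio b/(a+b) by EM and assign dosages
    0..4 (nulliplex..quadruplex). Samples whose best posterior is below
    `min_post` receive a missing call (-1), mimicking fitTetra's refusal
    to score low-confidence samples."""
    x = np.asarray(ratio, dtype=float)
    mu = np.linspace(0.05, 0.95, 5)       # one component per dosage class
    sd = np.full(5, 0.05)
    w = np.full(5, 0.2)
    for _ in range(iters):
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        n = r.sum(axis=0) + 1e-9
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-3
    post = r.max(axis=1)
    calls = r.argmax(axis=1)
    calls[post < min_post] = -1           # missing score for uncertain samples
    return calls
```

A well-separated simulated marker is scored almost perfectly; poorly separated markers would mostly receive missing calls, corresponding to the markers fitTetra rejects.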
Haplotype Estimation from Fuzzy Genotypes Using Penalized Likelihood
The Composite Link Model is a generalization of the generalized linear model in which expected values of observed counts are constructed as a sum of generalized linear components. When combined with penalized likelihood, it provides a powerful and elegant way to estimate haplotype probabilities from observed genotypes. Uncertain ("fuzzy") genotypes, like those resulting from AFLP scores, can be handled by adding an extra layer to the model. We describe the model and the estimation algorithm. We apply it to a data set of accurate human single nucleotide polymorphism (SNP) genotypes and to a data set of fuzzy tomato AFLP scores.
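For orientation, the underlying haplotype-from-genotype problem can be solved for two bi-allelic SNPs with the classic EM algorithm over phase ambiguity. Note this is the textbook EM approach, not the paper's penalized composite link model, and it cannot handle fuzzy genotypes:

```python
import numpy as np
from itertools import product

def em_haplotypes(genotypes, iters=200):
    """Classic EM for haplotype frequencies of two bi-allelic SNPs from
    unphased diploid genotypes, each given as a pair of allele dosages
    (0/1/2 per SNP). The four haplotypes are 00, 01, 10, 11."""
    haps = [(0, 0), (0, 1), (1, 0), (1, 1)]
    f = np.full(4, 0.25)
    for _ in range(iters):
        counts = np.zeros(4)
        for g in genotypes:
            # all ordered haplotype pairs compatible with this genotype
            pairs = [(i, j) for i, j in product(range(4), repeat=2)
                     if haps[i][0] + haps[j][0] == g[0]
                     and haps[i][1] + haps[j][1] == g[1]]
            p = np.array([f[i] * f[j] for i, j in pairs])
            p /= p.sum()
            # E-step: distribute this genotype's haplotypes over phasings
            for (i, j), pij in zip(pairs, p):
                counts[i] += pij
                counts[j] += pij
        f = counts / counts.sum()         # M-step: renormalise
    return dict(zip(haps, f))
```

On a population in complete linkage disequilibrium (only haplotypes 00 and 11 present), EM resolves the double-heterozygote phase ambiguity correctly.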
Segmentation and intensity estimation for microarray images with saturated pixels
<p>Abstract</p> <p>Background</p> <p>Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on that and subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2<sup>16</sup> - 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to correct effectively for signal saturation.</p> <p>Results</p> <p>We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the loss of information is not too severe. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.</p> <p>Conclusions</p> <p>The presented method adjusts for signal saturation at the segmentation stage, which identifies a pixel as part of the foreground, background or other; the cluster membership of a pixel can thus change relative to treating saturated values as truly observed. As a result, the spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already segmented data. 
As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.</p>
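The censored-component idea can be illustrated in one dimension: estimating the mean and standard deviation of a normal intensity distribution whose values are clipped at a saturation cutoff. This sketch uses the standard EM for right-censored normal data (conditional moments of the truncated normal), a simplification of the paper's full censored mixture model:

```python
import math
import numpy as np

def censored_normal_mle(x, cutoff, iters=200):
    """EM estimate of (mu, sigma) for normal data right-censored at
    `cutoff` (e.g. pixel values clipped at the scanner saturation level).
    The E-step replaces censored values with their conditional first and
    second moments given X > cutoff under the current parameters."""
    x = np.asarray(x, dtype=float)
    cens = x >= cutoff
    mu, sd = x.mean(), x.std() + 1e-6
    for _ in range(iters):
        z = (cutoff - mu) / sd
        phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
        surv = 0.5 * math.erfc(z / math.sqrt(2))      # P(Z > z)
        lam = phi / max(surv, 1e-12)                  # inverse Mills ratio
        m1 = mu + sd * lam                            # E[X | X > cutoff]
        m2 = mu ** 2 + sd ** 2 + sd * (cutoff + mu) * lam  # E[X^2 | X > cutoff]
        s1 = x[~cens].sum() + cens.sum() * m1
        s2 = (x[~cens] ** 2).sum() + cens.sum() * m2
        mu = s1 / len(x)
        sd = math.sqrt(max(s2 / len(x) - mu ** 2, 1e-12))
    return mu, sd
```

Against simulated clipped data, the EM recovers the pre-saturation mean that a naive average of the clipped values would underestimate.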
Estimating Incidence Curves of Several Infections Using Symptom Surveillance Data
We introduce a method for estimating incidence curves of several co-circulating infectious pathogens, where each infection has its own probabilities of particular symptom profiles. Our deconvolution method utilizes weekly surveillance data on symptoms from a defined population as well as additional data on symptoms from a sample of virologically confirmed infectious episodes. We illustrate this method by numerical simulations and by using data from a survey conducted on the University of Michigan campus. Lastly, we describe the data needs for making such estimates accurate.
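The deconvolution step can be sketched as a linear inverse problem: observed symptom-profile counts are approximately the pathogen incidences mixed through a matrix of symptom-profile probabilities, so incidence is recovered by least squares. This is a bare-bones illustration (no weekly time dimension, no uncertainty), not the authors' estimator; the matrices in the test are invented.

```python
import numpy as np

def estimate_incidence(symptom_counts, profile_probs):
    """Recover per-pathogen incidence from symptom-profile counts.
    profile_probs[i, j] = P(symptom profile j | infection with pathogen i),
    estimated from virologically confirmed episodes. Observed counts are
    approximately incidence @ profile_probs, so incidence is recovered by
    least squares, with negative estimates clipped to zero."""
    P = np.asarray(profile_probs, dtype=float)
    y = np.asarray(symptom_counts, dtype=float)
    inc, *_ = np.linalg.lstsq(P.T, y, rcond=None)
    return np.clip(inc, 0, None)
```

With two pathogens and three symptom profiles, noiseless counts are inverted exactly; with real surveillance noise the estimate is a least-squares fit instead.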