    Assessment of animal African trypanosomiasis (AAT) vulnerability in cattle-owning communities of sub-Saharan Africa

    Background: Animal African trypanosomiasis (AAT) is one of the biggest constraints to livestock production and a threat to food security in sub-Saharan Africa. In order to optimise the allocation of resources for AAT control, decision makers need to target geographic areas where control programmes are most likely to be successful and sustainable, and to select control methods that will maximise the benefits obtained from the resources invested. Methods: The overall approach to classifying cattle-owning communities in terms of AAT vulnerability was based on the selection of key variables collected through field surveys in five sub-Saharan African countries, followed by a formal Multiple Correspondence Analysis (MCA) to identify factors explaining the variation between areas. To categorise the communities into AAT vulnerability profiles, Hierarchical Cluster Analysis (HCA) was performed. Results: Three clusters of community vulnerability profiles were identified based on farmers’ beliefs with respect to trypanosomiasis control within the five countries studied. Cluster 1 communities, identified mainly in Cameroon, reported a constant AAT burden and had large, trypanosensitive, communally grazed cattle herds (average herd size = 57). Livestock (cattle and small ruminants) were reportedly the primary source of income in the majority of these cattle-owning households (87.0 %). Cluster 2 communities, identified mainly in Burkina Faso and Zambia along with some Ethiopian communities, had moderate herd sizes (average = 16), included some trypanotolerant breeds (31.7 %), and practised communal grazing. In these communities there were some concerns regarding the development of trypanocide resistance. Crops were the primary income source, while communities in this cluster incurred some financial losses due to diminished draft power. The third cluster contained mainly Ugandan and Ethiopian communities of mixed farmers with smaller herd sizes (average = 8); the costs of diagnosing and treating AAT were moderate here. Conclusions: Understanding how cattle-owners are affected by AAT and their efforts to manage the disease is critical to the design of suitable, locally adapted control programmes. It is expected that these results could inform priority setting and the development of tailored recommendations for AAT control strategies.
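
    As an illustration of the Methods described above (MCA on categorical survey variables followed by hierarchical clustering into three vulnerability profiles), a minimal sketch is given below. It is not the authors' code: the file name, column selection, number of MCA components and the use of the `prince` library are assumptions.

```python
# Illustrative sketch (not the authors' code): MCA on categorical survey
# variables followed by hierarchical clustering of community profiles.
import pandas as pd
import prince                                   # third-party MCA implementation
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical survey table: one row per community, categorical columns
# such as grazing system, cattle breed, or primary income source.
survey = pd.read_csv("community_survey.csv")
categorical = survey.select_dtypes(include="object")

# Multiple Correspondence Analysis: project categorical variables onto a
# few continuous components that capture variation between communities.
mca = prince.MCA(n_components=5).fit(categorical)
coords = mca.transform(categorical)

# Hierarchical Cluster Analysis (Ward linkage) on the MCA coordinates,
# cut into three clusters as in the reported vulnerability profiles.
tree = linkage(coords.values, method="ward")
survey["cluster"] = fcluster(tree, t=3, criterion="maxclust")
print(survey.groupby("cluster").size())
```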

    Performance of Small Cluster Surveys and the Clustered LQAS Design to estimate Local-level Vaccination Coverage in Mali

    Background: Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods: We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis, and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results: VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas into three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three; it was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions: Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
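
    To make the bootstrapping idea in the Methods concrete, the sketch below resamples clusters and subsamples children within each cluster to show how the standard error of the VC estimate grows as the per-cluster sample size shrinks from 15 to 3. The data, coverage level, and function are hypothetical stand-ins, not the study's code.

```python
# Illustrative sketch with assumed data: bootstrap resampling of clusters
# to track how the standard error of the vaccination coverage (VC)
# estimate behaves as the per-cluster sample size shrinks.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: 10 clusters x 15 children, 1 = vaccinated, 0 = not.
clusters = rng.binomial(1, 0.85, size=(10, 15))

def bootstrap_se(data, per_cluster, n_boot=1000):
    """Resample clusters with replacement, subsample children within each
    resampled cluster, and return the standard error of the mean VC estimate."""
    n_clusters = data.shape[0]
    estimates = []
    for _ in range(n_boot):
        picked = rng.integers(0, n_clusters, size=n_clusters)
        sub = [rng.choice(data[c], size=per_cluster, replace=False) for c in picked]
        estimates.append(np.mean(sub))
    return np.std(estimates)

for m in (15, 10, 5, 3):   # from a 10 x 15 design down to 10 x 3
    print(f"10 x {m:2d} design: bootstrap SE of VC ~ {bootstrap_se(clusters, m):.3f}")
```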

    Kernel Logistic Regression-linear for Leukemia Classification Using High Dimensional Data

    Kernel Logistic Regression (KLR) is one of the statistical models proposed for classification in the machine learning and data mining communities, and one of the effective methodologies among kernel-machine techniques. Essentially, KLR is a kernelized version of linear Logistic Regression (LR). Unlike LR, KLR can classify data with non-linear boundaries and can also accommodate data with very high dimensionality and very few instances. In this research, we study the use of a linear kernel in KLR in order to increase the accuracy of leukemia classification. Leukemia is a type of cancer that causes significant mortality, and improving the accuracy of leukemia classification is essential for more effective diagnosis and treatment of the disease. The leukemia data set consists of DNA microarray data with 7120 features (very high dimensional) for 72 patient samples (very few instances), labelled by leukemia type. In leukemia classification based upon gene expression, monitoring data from DNA microarrays offers hope of achieving an objective and highly accurate classification. We demonstrate that using a linear kernel in Kernel Logistic Regression (KLR-Linear) improves the performance in classifying leukemia patient samples, and that KLR-Linear achieves better accuracy than KLR-Polynomial and Penalized Logistic Regression.
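
    One common way to realise KLR with a linear kernel is to fit an L2-penalized logistic model on the linear-kernel Gram matrix, so that each coefficient plays the role of a dual weight. The sketch below illustrates this on stand-in data with the same shape as the data set described (72 samples with 7120 features); the random labels, train/test split, and regularisation strength are assumptions, not the paper's implementation.

```python
# Minimal sketch (one possible way to implement it, not the paper's code):
# kernel logistic regression with a linear kernel, fitted in dual form as a
# penalized logistic model on the Gram matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in for the leukemia data: 72 samples x 7120 gene-expression values.
X = rng.normal(size=(72, 7120))
y = rng.integers(0, 2, size=72)               # hypothetical class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Linear-kernel Gram matrices: K[i, j] = <x_i, x_j>.
K_tr = X_tr @ X_tr.T
K_te = X_te @ X_tr.T

# L2-penalized logistic regression on kernel features approximates KLR;
# each learned coefficient acts as a dual weight on a training sample.
klr_linear = LogisticRegression(C=1.0, max_iter=5000).fit(K_tr, y_tr)
print("held-out accuracy:", klr_linear.score(K_te, y_te))
```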

    Microbial Similarity between Students in a Common Dormitory Environment Reveals the Forensic Potential of Individual Microbial Signatures.

    The microbiota of the built environment is an amalgamation of both human and environmental sources. While human sources have been examined within single-family households or in public environments, it is unclear what effect a large number of cohabiting people has on the microbial communities of their shared environment. We sampled the public and private spaces of a college dormitory, disentangling individual microbial signatures and their impact on the microbiota of common spaces. We compared multiple methods for marker gene sequence clustering and found that minimum entropy decomposition (MED) was best able to distinguish between the microbial signatures of different individuals and uncovered more discriminative taxa across all taxonomic groups. Further, weighted UniFrac- and random forest-based graph analyses uncovered two distinct spheres of hand- or shoe-associated samples. Using graph-based clustering, we identified spheres of interaction and found that the connections between these clusters were enriched for hands, implicating them as a primary means of transmission. In contrast, shoe-associated samples were found to be freely interacting, with individual shoes more connected to each other than to the floors they interact with. Individual interactions were highly dynamic, with groups of samples originating from individuals clustering freely with samples from other individuals, while all floor and shoe samples consistently clustered together. IMPORTANCE: Humans leave behind a microbial trail, regardless of intention. This may allow for the identification of individuals based on the "microbial signatures" they shed in built environments. In a shared living environment, these trails intersect and, through interaction with common surfaces, may become homogenized, potentially confounding our ability to link individuals to their associated microbiota. We sought to understand the factors that influence the mixing of individual signatures and how best to process sequencing data to tease these signatures apart.
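
    As a rough sketch of the graph-based clustering step only (the study's actual pipeline relies on minimum entropy decomposition, weighted UniFrac distances and random forests), the example below turns an assumed pairwise distance matrix into a similarity graph and extracts modularity-based communities; the sample names, distances and 0.6 threshold are hypothetical.

```python
# Illustrative sketch only (assumed sample names and distances): build a
# similarity graph from a pairwise distance matrix (e.g. weighted UniFrac)
# and look for clusters of samples.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

samples = ["hand_A", "hand_B", "shoe_A", "shoe_B", "floor_1", "floor_2"]
rng = np.random.default_rng(1)
dist = rng.uniform(0.2, 0.9, size=(6, 6))      # hypothetical distances
dist = (dist + dist.T) / 2                     # make the matrix symmetric
np.fill_diagonal(dist, 0.0)

# Connect samples whose distance falls below a threshold, weighting edges
# by similarity (1 - distance).
G = nx.Graph()
G.add_nodes_from(samples)
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        if dist[i, j] < 0.6:
            G.add_edge(samples[i], samples[j], weight=1.0 - dist[i, j])

# Graph-based clustering: modularity-maximising communities play the role
# of the "spheres of interaction" described in the abstract.
for k, group in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"community {k}: {sorted(group)}")
```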

    How is a data-driven approach better than random choice in label space division for multi-label classification?

    We propose using five data-driven community detection approaches from social networks to partition the label space for the task of multi-label classification, as an alternative to random partitioning into equal subsets as performed by RAkELd: modularity-maximizing fastgreedy and leading eigenvector, infomap, walktrap, and label propagation algorithms. We construct a label co-occurrence graph (in both weighted and unweighted versions) based on training data and perform community detection to partition the label set. We include the Binary Relevance and Label Powerset classification methods for comparison, and use Gini-index-based Decision Trees as the base classifier. We compare the educated approaches to label space division against random baselines on 12 benchmark data sets over five evaluation measures. We show that in almost all cases the seven educated-guess approaches are more likely than not to outperform RAkELd on all measures except Hamming Loss. We show that the fastgreedy and walktrap community detection methods on weighted label co-occurrence graphs are 85-92% more likely to yield better F1 scores than random partitioning. Infomap on unweighted label co-occurrence graphs is on average better than random partitioning 90% of the time in terms of Subset Accuracy and 89% of the time in terms of Jaccard similarity. Weighted fastgreedy is better on average than RAkELd in terms of Hamming Loss.
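
    The core construction, a weighted label co-occurrence graph partitioned by community detection, can be sketched as follows. The data are synthetic, and networkx's greedy modularity algorithm stands in here for the fastgreedy, walktrap and infomap methods named in the abstract.

```python
# Hedged sketch (assumed data shapes, not the paper's implementation):
# build a weighted label co-occurrence graph from a multi-label training
# matrix and partition the label space with community detection, as an
# alternative to RAkELd's random partitioning.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(200, 12))   # hypothetical: 200 samples, 12 labels

# Weighted co-occurrence: edge weight = number of samples with both labels on.
cooc = Y.T @ Y
G = nx.Graph()
G.add_nodes_from(range(Y.shape[1]))
for i in range(Y.shape[1]):
    for j in range(i + 1, Y.shape[1]):
        if cooc[i, j] > 0:
            G.add_edge(i, j, weight=int(cooc[i, j]))

# Each detected community becomes one label subset; a Label Powerset
# classifier would then be trained per subset instead of per random partition.
label_subsets = [sorted(c) for c in greedy_modularity_communities(G, weight="weight")]
print(label_subsets)
```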