
    Impact of tumor size and tracer uptake heterogeneity in ¹⁸F-FDG PET and CT non-small cell lung cancer tumor delineation

    UNLABELLED: The objectives of this study were to investigate the relationship between CT- and ¹⁸F-FDG PET-based tumor volumes in non-small cell lung cancer (NSCLC) and the impact of tumor size and uptake heterogeneity on various approaches to delineating uptake on PET images.

    METHODS: Twenty-five NSCLC patients imaged with ¹⁸F-FDG PET/CT were considered. Seventeen underwent surgical resection of their tumor, and the maximum diameter was measured. Two observers manually delineated the tumors on the CT images and the tumor uptake on the corresponding PET images, using a fixed threshold at 50% of the maximum (T₅₀), an adaptive threshold methodology, and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Maximum diameters of the delineated volumes were compared with the histopathology reference when available. The volumes of the tumors were compared, and correlations between the anatomic volume, PET uptake heterogeneity, and the differences between delineations were investigated.

    RESULTS: All maximum diameters measured on PET and CT images correlated significantly with the histopathology reference (r > 0.89, P < 0.0001). Significant differences were observed among the approaches: CT delineation resulted in a large overestimation (+32% ± 37%), whereas all delineations on PET images resulted in underestimation (from -15% ± 17% for T₅₀ to -4% ± 8% for FLAB), except manual delineation (+8% ± 17%). Overall, CT volumes were significantly larger than PET volumes (55 ± 74 cm³ for CT vs. 18 ± 25 to 47 ± 76 cm³ for PET). A significant correlation was found between anatomic tumor size and heterogeneity (larger lesions were more heterogeneous). Finally, the more heterogeneous the tumor uptake, the larger the underestimation of PET volumes by threshold-based techniques.

    CONCLUSION: Volumes based on CT images were larger than those based on PET images. Tumor size and tracer uptake heterogeneity affect threshold-based methods, which should not be used to delineate large, heterogeneous NSCLC tumors, as they tend to largely underestimate the spatial extent of the functional tumor in such cases. For accurate delineation of PET volumes in NSCLC, advanced image segmentation algorithms able to deal with tracer uptake heterogeneity should be preferred.
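    The fixed-threshold delineation evaluated above is simple to state in code: keep every voxel whose uptake is at or above a fixed fraction of the maximum (50% for T₅₀). The sketch below is a minimal illustration on a synthetic PET patch, not the authors' implementation; the SUV values, voxel size, and function name are assumptions for the example.

```python
import numpy as np

def threshold_delineation(suv, fraction=0.5):
    """Keep voxels at or above a fixed fraction of the maximum SUV
    (fraction=0.5 corresponds to the T50 contour)."""
    return suv >= fraction * suv.max()

# Toy example: a synthetic 32x32x32 PET patch with a hot spherical lesion.
rng = np.random.default_rng(0)
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
lesion = 8.0 * np.exp(-(x**2 + y**2 + z**2) / (2 * 5.0**2))    # peak SUV ~8
suv = lesion + rng.normal(0.5, 0.1, lesion.shape).clip(min=0)  # add background

mask = threshold_delineation(suv, fraction=0.5)
voxel_ml = 0.4**3  # assuming 4 mm isotropic voxels (0.064 mL each)
print(f"T50 volume: {mask.sum() * voxel_ml:.1f} mL")
```

    On heterogeneous uptake, a single global cutoff at 50% of the maximum discards genuinely tumoral low-uptake regions, which matches the underestimation reported above; adaptive and probabilistic methods such as FLAB avoid relying on one global threshold.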

    Automated data analysis of unstructured grey literature in health research: A mapping review

    © 2023 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

    The amount of grey literature and ‘softer’ intelligence from social media or websites is vast. Given the long lead times of producing high-quality peer-reviewed health information, this creates demand for new ways to provide prompt input for secondary research. To our knowledge, this is the first review of automated data extraction methods or tools for health-related grey literature and soft data, with a focus on (semi)automating horizon scans, health technology assessments (HTA), evidence maps, and other literature reviews.

    We searched six databases to cover both the health- and computer-science literature. After deduplication, 10% of the search results were screened by two reviewers; the remainder was single-screened up to an estimated 95% sensitivity, and screening was stopped early once an additional 1000 results had been screened with no new includes (a rule sketched in code below). All full texts were retrieved, screened, and extracted by a single reviewer, and 10% were checked in duplicate. We included 84 papers covering automation for health-related social media, internet fora, news, patents, government agencies and charities, or trial registers. From each paper, we extracted data about functionalities important to users of the tool or method, information about the level of support and reliability, and practical challenges and research gaps.

    Poor availability of code, data, and usable tools leads to low transparency regarding performance and to duplication of work. Financial implications, scalability, integration into downstream workflows, and meaningful evaluations should be carefully planned before starting to develop a tool, given the vast amounts of data and the opportunities such tools offer to expedite research.
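    The early-stopping rule used during single screening lends itself to a simple heuristic: stop once a window of recently screened records yields no new includes. The sketch below is one illustrative reading of that rule; the function name and data layout are assumptions, not the review's actual tooling.

```python
def should_stop_screening(decisions, window=1000):
    """Return True once the `window` most recently screened records
    contain no includes. `decisions` is an ordered list of booleans,
    True meaning the record was included."""
    return len(decisions) >= window and not any(decisions[-window:])

# Example: 1200 screened records whose last include was at position 150.
decisions = [i == 150 for i in range(1200)]
print(should_stop_screening(decisions))  # True: 1000+ records, no new includes
```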

    Automated Alphabet Reduction for Protein Datasets

    Background: We investigate automated and generic alphabet reduction techniques for protein structure prediction datasets. Reducing alphabet cardinality without losing key biochemical information opens the door to potentially faster machine learning, data mining, and optimization applications in structural bioinformatics. Furthermore, reduced but informative alphabets often result in, e.g., more compact and human-friendly classification/clustering rules. In this paper we propose a robust and sophisticated alphabet reduction protocol based on mutual information and state-of-the-art optimization techniques.

    Results: We applied this protocol to the prediction of two protein structural features: contact number and relative solvent accessibility. For both features we generated alphabets of two, three, four, and five letters. The five-letter alphabets gave prediction accuracies statistically similar to those obtained using the full amino acid alphabet. Moreover, the automatically designed alphabets outperformed other reduced alphabets taken from the literature or designed by hand, and the differences between our alphabets and those from the literature were quantitatively analyzed. All of the above experiments used a primary sequence representation of proteins. As a final experiment, we extrapolated the obtained five-letter alphabet to reduce a much richer protein representation based on evolutionary information for the prediction of the same two features. Again, the performance gap between the full and reduced representations was small, showing that the output of our automated alphabet reduction protocol, even though it was obtained using a simple representation, also captures the crucial information needed for state-of-the-art protein representations.

    Conclusion: Our automated alphabet reduction protocol generates competent reduced alphabets tailored specifically to a variety of protein datasets. This is done without any domain knowledge, using information-theory metrics instead. The reduced alphabets contain some unexpected (but sound) groups of amino acids, thus suggesting new ways of interpreting the data.
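    The core idea, grouping amino acids so that the reduced letters retain as much information as possible about a structural label, can be illustrated with a greedy agglomerative merge driven by mutual information. The sketch below is a simplified stand-in for the paper's protocol (which couples mutual information with a more sophisticated optimizer); the function names, the greedy search, and the (residue, label) data layout are assumptions.

```python
from collections import Counter
from itertools import combinations
from math import log2

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutual_information(pairs):
    """MI in bits between reduced letters and labels, estimated from
    empirical frequencies of (letter, label) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def greedy_reduce(data, k):
    """Merge amino-acid groups until k letters remain, at each step
    picking the merge that best preserves MI with the structural label.
    data: list of (amino_acid, label) pairs, e.g. residue vs. buried/exposed.
    Returns a dict mapping each amino acid to a reduced-letter index."""
    groups = [frozenset(a) for a in AMINO_ACIDS]
    while len(groups) > k:
        best = None
        for g1, g2 in combinations(groups, 2):
            trial = [g for g in groups if g not in (g1, g2)] + [g1 | g2]
            lookup = {a: i for i, g in enumerate(trial) for a in g}
            mi = mutual_information([(lookup[a], y) for a, y in data])
            if best is None or mi > best[0]:
                best = (mi, g1, g2)
        _, g1, g2 = best
        groups = [g for g in groups if g not in (g1, g2)] + [g1 | g2]
    return {a: i for i, g in enumerate(groups) for a in g}
```

    A five-letter alphabet would correspond to greedy_reduce(data, k=5); the exhaustive pairwise search costs O(|groups|²) MI evaluations per merge, which is cheap at amino-acid-alphabet scale.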

    From biochemical markers to molecular endotypes of osteoarthritis: a review on validated biomarkers

    © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

    Introduction: Osteoarthritis (OA) affects over 500 million people worldwide. OA patients are treated symptomatically, and current therapies exhibit marginal efficacy and frequently carry safety risks associated with chronic use. No disease-modifying therapies have been approved to date, leaving surgical joint replacement as a last resort. To enable effective patient care and successful drug development, there is an urgent need to uncover the pathobiological drivers of OA and how these translate into disease endotypes. Endotypes provide a more precise and mechanistic definition of disease subgroups than observable phenotypes, and a panel of tissue- and pathology-specific biochemical markers may uncover treatable endotypes of OA.

    Areas covered: We searched PubMed for full-text articles written in English to provide an in-depth narrative review of a panel of validated biochemical markers utilized for endotyping of OA and their association with key OA pathologies.

    Expert opinion: As utilized in IMI-APPROACH and validated in OAI-FNIH, a panel of biochemical markers may uncover disease subgroups and facilitate the enrichment of treatable molecular endotypes for recruitment in therapeutic clinical trials. Understanding the link between biochemical markers, patient-reported outcomes, and treatable endotypes that may respond to given therapies will pave the way for new drug development in OA.
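    At its simplest, marker-based endotyping is unsupervised subgrouping of a standardized biomarker panel. The sketch below is a generic illustration on synthetic data with an assumed two-cluster structure; it is not the procedure used in IMI-APPROACH or OAI-FNIH, and the panel and group sizes are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic biomarker panel: rows = patients, columns = biochemical
# markers; values and group structure are placeholders, not study data.
rng = np.random.default_rng(7)
panel = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(150, 6)),  # e.g. low-turnover group
    rng.normal(loc=1.5, scale=1.0, size=(50, 6)),   # e.g. high-turnover group
])

z = StandardScaler().fit_transform(panel)           # put markers on one scale
endotype = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print("patients per candidate endotype:", np.bincount(endotype))
```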

    Multi-classifier prediction of knee osteoarthritis progression from incomplete imbalanced longitudinal data

    Conventional inclusion criteria used in osteoarthritis clinical trials are not very effective in selecting patients who would benefit from the therapy being tested. Typically, the majority of selected patients show no or limited disease progression during the trial period; as a consequence, the effect of the tested treatment cannot be observed, and the efforts and resources invested in running the trial are not rewarded. This could be avoided if selection criteria were more predictive of future disease progression.

    In this article, we formulated the patient selection problem as a multi-class classification task, with classes based on clinically relevant measures of progression over a time scale typical for clinical trials. Using data from two long-term knee osteoarthritis studies, OAI and CHECK, we tested multiple algorithms and learning-process configurations (including multi-classifier approaches, cost-sensitive learning, and feature selection) to identify the best-performing machine learning models; a generic version of this setup is sketched in code below. We examined the behaviour of the best models with respect to prediction errors and the impact of the features used, to confirm their clinical relevance. We found that model-based selection outperforms the conventional inclusion criteria, reducing by 20-25% the number of selected patients who show no progression. This result could lead to more efficient clinical trials.
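    The learning setup described, a multi-class target with incomplete, imbalanced data plus cost-sensitive learning and feature selection, maps naturally onto a standard pipeline. The sketch below is a generic illustration on synthetic data, not the authors' model; the class definitions, feature counts, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for baseline clinical/imaging features with missing
# values and an imbalanced three-class progression outcome
# (0 = no progression, 1 = pain progression, 2 = structural progression).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 40))
X[rng.random(X.shape) < 0.1] = np.nan           # ~10% missing entries
y = rng.choice([0, 1, 2], size=1000, p=[0.7, 0.2, 0.1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # handle incomplete data
    ("select", SelectKBest(f_classif, k=15)),      # feature selection
    ("clf", RandomForestClassifier(                # cost-sensitive learning:
        class_weight="balanced",                   # upweight the rare classes
        n_estimators=300, random_state=0)),
])
model.fit(X_train, y_train)
print("balanced accuracy:",
      balanced_accuracy_score(y_test, model.predict(X_test)))
```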