
    An exploration of men's experiences of undergoing active surveillance for favourable-risk prostate cancer: A mixed methods study protocol

    BACKGROUND: Prostate cancer (PCa) is one of the most common male cancers worldwide. Active Surveillance (AS) has been developed to allow men with lower risk disease to postpone or avoid the adverse side effects associated with curative treatments until the disease progresses. Despite the medical benefits of AS, it is reported that living with untreated cancer can create a significant emotional burden for patients. METHODS/DESIGN: The aim of this study is to gain insight into the experiences of men eligible to undergo AS for favourable-risk PCa. This study has a mixed-methods sequential explanatory design consisting of two phases: quantitative followed by qualitative. Phase 1 has a multiple-point, prospective, longitudinal exploratory design. Ninety men diagnosed with favourable-risk prostate cancer will be assessed immediately post-diagnosis (baseline) and followed over a period of 12 months at 3-month intervals. Ninety age-matched men with no cancer diagnosis will also be recruited using peer nomination and followed up at the same 3-month intervals. Following completion of Phase 1, 10-15 AS participants who have reported both the best and worst psychological functioning will be invited to participate in semi-structured qualitative interviews. Phase 2 will facilitate further exploration of the quantitative results and obtain a richer understanding of participants' personal interpretations of their illness and psychological wellbeing. DISCUSSION: To our knowledge, this is the first study to utilise early baseline measures; include a healthy comparison group; calculate sample size through power calculations; and use a mixed methods approach to gain a deeper, more holistic insight into the experiences of men diagnosed with favourable-risk prostate cancer.
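
    The protocol's sample-size justification via an a priori power calculation can be illustrated with a minimal sketch. The effect size, alpha, and power values below are illustrative assumptions, not figures taken from the study, and the use of statsmodels is simply one convenient way to perform such a calculation.

```python
# Minimal sketch of an a priori power calculation for comparing the AS group
# with the age-matched comparison group on a continuous wellbeing measure.
# Effect size, alpha, and power are illustrative assumptions, not study values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium standardized difference (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired statistical power
    ratio=1.0,         # equal group sizes (AS vs. comparison)
)
print(f"Required participants per group: {n_per_group:.0f}")
```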

    Symbiota Integrations: Exploration of Historical and Current Methods of Data Sharing Across a Decentralized Portal Network and Goals of Extending Interoperability Globally

    Over the last decade, the Symbiota open-source software has been readily available to establish occurrence-based data portals that represent the taxonomic and geographic expertise of a specific community of researchers. Reasons for establishing a data portal vary, but often focus on:
    - data mobilization via the creation of public data access points (e.g., in-house search and export tools, Application Programming Interface (API) access, publication tools pushing data up to aggregators);
    - tools and workflows that support active specimen digitization projects;
    - a method for staging and preparing datasets for analysis to answer specific research questions (e.g., data assessment, correction, augmentation).
    The software functions as a Content Management System (CMS), allowing any dataset to be collaboratively augmented, modified, and managed online. Currently, the software supports over 1000 collection datasets that manage their specimen data directly within a Symbiota portal as a live managed dataset. Portals often include “snapshot” data imported from externally managed systems, which are updated on a regular schedule. Depending on the goals of a project, portals will vary in the composition of live to snapshot collections, though most contain a mixture of both. In this respect, data portals serve as intermediate aggregators, integrating multiple specimen datasets that collectively represent a community-based research perspective.
    Symbiota portals typically function as mid-level data aggregators, driven by a community of researchers with expertise within a specific taxonomic domain. This decentralized approach has been shown to promote the emergence of multiple regionally, taxonomically, or institutionally localized, self-identifying communities of practice. Each community is empowered to control the social and informational design and versioning of their local data infrastructures and signals. The upfront cost of decentralization is more than offset by the long-term benefit of achieving sustained expert engagement, higher-quality data products, and ultimately more societal impact for biodiversity data.
    In contrast to the vision of pushing data from the source to the global aggregators and ultimately out to the research community, Symbiota records are distributed across a growing array of sub-aggregators. For instance, the Arizona State University Vascular Plant Herbarium's specimen data consist of a live managed dataset within SEINet, with subsets of the data pushed out to the Portal de Biodiversidad de Guatemala and the Cooperative Taxonomic Resource for American Myrtaceae Symbiota portals as snapshot record sets. Not only does this support research associated with each of the portal communities, it also exposes the records to researchers with local and taxonomic expertise to review, correct, and comment on the occurrence data. While the Symbiota portals provide tools for these communities to annotate the distributed snapshot records, the annotations need to be directed back to the source collection. Aside from the technical challenges, there are social negotiations that need to be considered. Collection managers might not want to integrate external edits, or the collection might be understaffed without anyone to approve the information transfer. Issues associated with “round-tripping” annotations back to the source are complicated. Nevertheless, global coordination is feasible through automatable data sharing agreements that enable efficient propagation and translation of biodiversity data across communities.
    Within this presentation, we will explore ways specimen and annotation data have been shared across the Symbiota portal network, as well as the associated technical and social challenges we have encountered. We will also present recent enhancements in tracking project metadata, data provenance, record annotations, and the establishment of a public API architecture. These developments are leveraged to regulate machine-to-machine annotation propagation and to enhance interoperability by providing support for real-time transmission of occurrence annotations across the distributed network of Symbiota portals. By demonstrating methods and challenges associated with data sharing across the Symbiota portal network, we strive to contribute to the global discussion of data sharing, but more importantly, to solicit input and direction from the greater community on how we can improve data sharing beyond the Symbiota network.
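
    As an illustration of the kind of machine-to-machine annotation propagation described above, the following sketch shows how a snapshot portal might push a field-level occurrence annotation back to the source collection over HTTP. The endpoint path, payload fields, and token handling are assumptions made for illustration only and do not represent the actual Symbiota API.

```python
# Hypothetical sketch of propagating an occurrence annotation from a snapshot
# portal back to the source (live) portal. Endpoint path, field names, and
# authentication are illustrative assumptions, not the real Symbiota API.
import requests

SOURCE_PORTAL = "https://example-source-portal.org/api"  # hypothetical base URL
API_TOKEN = "..."  # credential agreed under a data-sharing agreement (placeholder)

def push_annotation(occurrence_guid: str, field: str, new_value: str, comment: str) -> bool:
    """Send a single field-level annotation to the source collection for curator review."""
    payload = {
        "occurrenceID": occurrence_guid,  # globally unique identifier of the record
        "field": field,                   # e.g. "scientificName"
        "proposedValue": new_value,       # correction proposed by the annotating community
        "comment": comment,               # free-text justification
        "status": "pending-review",       # source collection decides whether to accept
    }
    resp = requests.post(
        f"{SOURCE_PORTAL}/annotations",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    return resp.status_code == 201  # created; awaiting approval at the source collection

# Example: a taxonomic correction made in a regional snapshot portal (hypothetical GUID)
# push_annotation("urn:uuid:...", "scientificName", "Psidium guajava L.",
#                 "Determination updated by regional Myrtaceae expert")
```

    A design like this keeps the source collection in control: external edits arrive flagged as pending, so an understaffed or cautious collection can still decline to integrate them.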

    Poor predictive value of lower gastrointestinal alarm features in the diagnosis of colorectal cancer in 1981 patients in secondary care.

    BACKGROUND: Clinicians are advised to refer patients with lower gastrointestinal (GI) alarm features for urgent colonoscopy to exclude colorectal cancer (CRC). However, the utility of alarm features is debated. AIMS: To assess whether performance of alarm features is improved by using a symptom frequency threshold to trigger referral, or by combining them into composite variables, including minimum age thresholds, as recommended by the National Institute for Health and Care Excellence (NICE). METHODS: We collected data prospectively from 1981 consecutive adults with lower GI symptoms. Assessors were blinded to symptom status. The reference standard to define CRC was histopathological confirmation of adenocarcinoma in biopsy specimens from a malignant-looking colorectal lesion. Controls were patients without CRC. Sensitivity, specificity, positive predictive values (PPVs) and negative predictive values (NPVs) were calculated for individual alarm features, as well as combinations of these. RESULTS: In identifying 47 (2.4%) patients with CRC, individual alarm features had sensitivities ranging from 11.1% (family history of CRC) to 66.0% (loose stools), and specificities from 30.5% (loose stools) to 75.6% (family history of CRC). Using higher symptom frequency thresholds improved specificity, but to the detriment of sensitivity. NICE referral criteria also had higher specificities and lower sensitivities, with PPVs above 4.8%. More than 80% of those with CRC met at least one of the NICE referral criteria. CONCLUSIONS: Using higher symptom frequency thresholds for alarm features improved specificity, but sensitivity was low. NICE referral criteria had PPVs above 4.8%, but sensitivities ranged from 2.2% to 32.6%, meaning many cancers would be missed.
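
    For readers unfamiliar with the diagnostic accuracy measures reported here, the sketch below shows how sensitivity, specificity, PPV and NPV follow from a 2x2 table of test result against CRC status. The counts used are illustrative placeholders chosen only to roughly reproduce the loose-stools figures quoted above (66.0% sensitivity, 30.5% specificity); they are not the study's raw data.

```python
# Minimal sketch of the diagnostic accuracy measures reported in the abstract,
# computed from a 2x2 table. Counts are illustrative placeholders, not study data.
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV and NPV for one alarm feature."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of CRC cases with the feature
        "specificity": tn / (tn + fp),  # proportion of non-CRC patients without it
        "ppv": tp / (tp + fp),          # probability of CRC given the feature is present
        "npv": tn / (tn + fn),          # probability of no CRC given the feature is absent
    }

# Illustrative counts only: 47 CRC cases among 1981 patients, feature "loose stools"
print(diagnostic_accuracy(tp=31, fp=1344, fn=16, tn=590))
```

    With a CRC prevalence of only 2.4%, even a feature with reasonable sensitivity yields a low PPV, which is why the abstract emphasises PPVs alongside sensitivity and specificity.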