
    The Impact of Thyroid Cancer and Post-Surgical Radioactive Iodine Treatment on the Lives of Thyroid Cancer Survivors: A Qualitative Study

    BACKGROUND: Adjuvant treatment with radioactive iodine (RAI) is often considered in the treatment of well-differentiated thyroid carcinoma (WDTC). We explored the recollections of thyroid cancer survivors on the diagnosis of WDTC, adjuvant RAI treatment, and decision-making related to RAI treatment. Participants provided recommendations for healthcare providers on counseling future patients about adjuvant RAI treatment. METHODS: We conducted three focus group sessions with WDTC survivors recruited from two Canadian academic hospitals. Participants had a prior history of WDTC that was completely resected at primary surgery and had been offered adjuvant RAI treatment. Open-ended questions were used to generate discussion in the groups. Saturation of major themes was achieved among the groups. FINDINGS: There were 16 participants in the study, 12 of whom were women (75%). All but one participant had received RAI treatment (94%). Participants reported that a thyroid cancer diagnosis was life-changing, resulting in feelings of fear and uncertainty. Some participants felt dismissed as not having a serious disease. Some participants reported receiving conflicting messages from healthcare providers on the appropriateness of adjuvant RAI treatment, or insufficient information. When RAI-related side effects occurred, some healthcare providers did not acknowledge them as legitimate. CONCLUSIONS: The diagnosis and treatment of thyroid cancer significantly impact the lives of survivors. Fear and uncertainty related to a cancer diagnosis, feelings of the diagnosis being dismissed as not serious, conflicting messages about adjuvant RAI treatment, and treatment-related side effects have been raised as important concerns by thyroid cancer survivors.

    Alcohol use and misuse: What are the contributions of occupation and work organization conditions?

    BACKGROUND: This research examines the specific contribution of occupation and work organization conditions to alcohol use and misuse. It is based on a social-action model that takes into account agent personality, structures of daily life, and macro social structures. METHODS: Data come from a representative sample of 10,155 workers in Quebec, Canada. Multinomial regression models, corrected for sample design effects, were used to predict low-risk and high-risk drinking compared with non-drinking. The contribution of occupation and work organization conditions (skill use, decision authority, physical and psychological demands, hours worked, irregular work schedule, harassment, unionization, job insecurity, performance pay, prestige) was adjusted for family situation, social network outside the workplace, and individual characteristics. RESULTS: Compared with non-qualified blue-collar workers, both low-risk and high-risk drinking are associated with qualified blue-collar workers, semi-qualified white-collar workers, and middle managers; high-risk drinking is associated with upper managers. Among the constraints and resources related to work organization conditions, only workplace harassment is an important determinant of both low-risk and high-risk drinking, though its effect is modestly moderated by occupation. Family situation, social support outside work, and personal characteristics are also associated with alcohol use and misuse. Non-work factors mediated or suppressed the role of occupation and work organization conditions. CONCLUSION: Occupation and workplace harassment are important factors associated with alcohol use and misuse. The results support the theoretical model conceptualizing alcohol use and misuse as the product of stress caused by constraints and resources brought to bear simultaneously by agent personality, structures of daily life, and macro social structures. Occupational alcohol researchers must expand their theoretical perspectives to avoid erroneous conclusions about the specific role of the workplace.
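
    As a minimal sketch of the kind of multinomial model described above, the analysis could be expressed in Python with statsmodels: the outcome has three levels (non-drinker as the reference, low-risk and high-risk drinking) and covariates mirror the constraints-resources listed in the abstract. The variable and file names are illustrative assumptions, not the study's dataset, and the complex-survey design correction applied in the published analysis is omitted here.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical worker-level data: drinking_level coded 0 = non-drinker,
    # 1 = low-risk drinking, 2 = high-risk drinking (placeholder file name).
    df = pd.read_csv("workers.csv")

    # Multinomial logit; category 0 (non-drinkers) is the reference outcome.
    model = smf.mnlogit(
        "drinking_level ~ C(occupation) + harassment + decision_authority"
        " + hours_worked + job_insecurity + C(family_situation) + age + sex",
        data=df,
    )
    result = model.fit()
    print(result.summary())       # coefficients are log-odds vs. non-drinkers
    print(np.exp(result.params))  # same estimates on the odds-ratio scale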

    Identification of Host Genes Involved in Geminivirus Infection Using a Reverse Genetics Approach

    Geminiviruses, like all viruses, rely on the host cell machinery to establish a successful infection, but the identity and function of the required host proteins remain largely unknown. Tomato yellow leaf curl Sardinia virus (TYLCSV), a monopartite geminivirus, is one of the causal agents of the devastating Tomato yellow leaf curl disease (TYLCD). Transgenic 2IRGFP N. benthamiana plants, used in combination with Virus-Induced Gene Silencing (VIGS), hold considerable potential as a tool for reverse genetics studies to identify host factors involved in TYLCSV infection. Using these transgenic plants, we have accurately described the evolution of TYLCSV replication in the host in both space and time. Moreover, we have determined that TYLCSV and Tobacco rattle virus (TRV) do not dramatically influence each other when co-infecting N. benthamiana, which makes the use of TRV-induced gene silencing in combination with TYLCSV feasible for reverse genetics studies. Finally, we tested the effect of silencing candidate host genes on TYLCSV infection, identifying eighteen genes potentially involved in this process, fifteen of which had never before been implicated in geminiviral infections. Seven of the analyzed genes have a potential anti-viral effect, whereas the expression of the other eleven is required for a full infection. Interestingly, almost half of the genes altering TYLCSV infection play a role in post-translational modifications. Our results therefore provide new insights into the molecular mechanisms underlying geminivirus infections and, at the same time, establish the 2IRGFP/VIGS system as a powerful tool for functional reverse genetics studies.

    Man and the Last Great Wilderness: Human Impact on the Deep Sea

    The deep sea, the largest ecosystem on Earth and one of the least studied, harbours high biodiversity and provides a wealth of resources. Although humans have used the oceans for millennia, technological developments now allow exploitation of fisheries resources, hydrocarbons and minerals below 2000 m depth. The remoteness of the deep seafloor has also promoted the disposal of residues and litter. Ocean acidification and climate change now add a new dimension of global effects. The challenges facing the deep sea are thus large and accelerating, creating a new imperative for the science community, industry, and national and international organizations to work together to develop successful management of exploitation and conservation of the deep-sea ecosystem. This paper provides scientific expert judgement and a semi-quantitative analysis of past, present and future impacts of human-related activities on global deep-sea habitats within three categories: disposal, exploitation and climate change. The analysis is the result of a Census of Marine Life – SYNDEEP workshop (September 2008). A detailed review of known impacts and their effects is provided. The analysis shows how, in recent decades, the most significant anthropogenic activities affecting the deep sea have shifted from mainly disposal (past) to exploitation (present). We predict that, from now into the future, increases in atmospheric CO2 and the facets and consequences of climate change will have the greatest impact on deep-sea habitats and their fauna. Synergies between different anthropogenic pressures and their associated effects are discussed, indicating that most synergies are related to increased atmospheric CO2 and climate-change effects. We identify the deep-sea ecosystems we believe are at higher risk from human impacts in the near future: benthic communities on sedimentary upper slopes, cold-water corals, canyon benthic communities, and seamount pelagic and benthic communities. We conclude this review with a short discussion of protection and management methods.
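
    Purely as an illustration, the workshop's category-by-era comparison could be tabulated as below in Python. The three categories come from the abstract, while the habitats, eras and 0-3 scores are invented placeholders rather than the SYNDEEP workshop's actual data.

    import pandas as pd

    # Illustrative expert impact scores by habitat, category and era
    # (0 = no impact .. 3 = high impact; values are placeholders).
    scores = pd.DataFrame(
        [
            ("upper slope",       "disposal",       "past",    3),
            ("upper slope",       "exploitation",   "present", 3),
            ("cold-water corals", "exploitation",   "present", 2),
            ("cold-water corals", "climate change", "future",  3),
            ("seamounts",         "climate change", "future",  3),
        ],
        columns=["habitat", "category", "era", "score"],
    )

    # Mean impact score per category in each era: the pattern described in
    # the text would show disposal dominating the past, exploitation the
    # present, and climate change the future.
    print(scores.pivot_table(index="category", columns="era",
                             values="score", aggfunc="mean"))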

    Proteinuria and Albuminuria at Point of Care

    Proteinuria is a key diagnostic and pathophysiological aspect of kidney dysfunction, influencing the progression of kidney and systemic diseases. Because most kidney diseases are not symptomatic until renal function is lost or severely compromised, both general practitioners and specialists should be able to assess the relevance of proteinuria, starting from a urine sample, and eventually refer selected patients to a nephrologist for further diagnostic workup and treatment. As the interpretation of proteinuria depends on the method used to detect it, the aim of this article was to review laboratory and point-of-care diagnostic methods for proteinuria in different settings, such as the prevention and follow-up of common chronic diseases (i.e., hypertension, diabetes, chronic kidney disease). Urine dipsticks remain the most widely used method for detecting proteinuria, although different types of proteinuria, extreme pH values and urine concentration may affect their results. The albumin-to-creatinine ratio and the protein-to-creatinine ratio, measured on a spot urine sample, are reliable tests that can effectively replace 24-hour urine collection analysis in clinical practice.
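
    For illustration, the spot-urine ratio reads naturally as a small calculation. The KDIGO albuminuria categories used in the Python sketch below (A1 < 30, A2 30-300, A3 > 300 mg/g) are standard thresholds, while the function names and unit choices are assumptions, not taken from the article.

    def albumin_creatinine_ratio(albumin_mg_per_l: float,
                                 creatinine_g_per_l: float) -> float:
        """Spot-urine ACR in mg of albumin per g of creatinine."""
        return albumin_mg_per_l / creatinine_g_per_l

    def albuminuria_category(acr_mg_per_g: float) -> str:
        """Classify ACR into the standard KDIGO A1/A2/A3 categories."""
        if acr_mg_per_g < 30:
            return "A1 (normal to mildly increased)"
        if acr_mg_per_g <= 300:
            return "A2 (moderately increased)"
        return "A3 (severely increased)"

    # Example: 80 mg/L albumin with 1.0 g/L creatinine -> ACR 80 mg/g -> A2.
    acr = albumin_creatinine_ratio(80.0, 1.0)
    print(f"ACR = {acr:.0f} mg/g -> {albuminuria_category(acr)}")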