
    Light attenuation characteristics of glacially-fed lakes

    Transparency is a fundamental characteristic of aquatic ecosystems and is highly responsive to changes in climate and land use. The transparency of glacially-fed lakes may be a particularly sensitive sentinel characteristic of these changes. However, little is known about the relative contributions of glacial flour versus other factors affecting light attenuation in these lakes. We sampled 18 glacially-fed lakes in Chile, New Zealand, and the U.S. and Canadian Rocky Mountains to characterize how dissolved absorption, algal biomass (approximated by chlorophyll a), water, and glacial flour contributed to attenuation of ultraviolet radiation (UVR) and photosynthetically active radiation (PAR, 400–700 nm). Variation in attenuation across lakes was related to turbidity, which we used as a proxy for the concentration of glacial flour. Turbidity-specific diffuse attenuation coefficients increased with decreasing wavelength and distance from glaciers. Regional differences in turbidity-specific diffuse attenuation coefficients were observed in short UVR wavelengths (305 and 320 nm) but not at longer UVR wavelengths (380 nm) or PAR. Dissolved absorption coefficients, which are closely correlated with diffuse attenuation coefficients in most non-glacially-fed lakes, represented only about one quarter of diffuse attenuation coefficients in study lakes here, whereas glacial flour contributed about two thirds across UVR and PAR. Understanding the optical characteristics of substances that regulate light attenuation in glacially-fed lakes will help elucidate the signals that these systems provide of broader environmental changes and forecast the effects of climate change on these aquatic ecosystems
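
    The fractional contributions reported above follow from a standard log-linear treatment of irradiance with depth. The sketch below is illustrative only: the depth profile, wavelength, and component attenuation values are invented, not the study's data; it simply shows how a diffuse attenuation coefficient Kd and the component fractions are computed.

        # Python sketch (illustrative values, not the study's data): estimate a
        # diffuse attenuation coefficient Kd from a downwelling-irradiance depth
        # profile, then express assumed component terms as fractions of Kd.
        import numpy as np

        z = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # depth, m
        E = np.array([100.0, 45.0, 20.5, 9.2, 4.1])    # irradiance, relative units

        # Exponential decay E(z) = E(0) * exp(-Kd * z) means ln E is linear in z,
        # so Kd is the negative slope of the log-linear fit.
        slope, intercept = np.polyfit(z, np.log(E), 1)
        Kd = -slope
        print(f"Kd = {Kd:.2f} m^-1")

        # Hypothetical component attenuation terms (m^-1); the abstract reports
        # roughly one quarter from dissolved absorption and about two thirds from
        # glacial flour across UVR and PAR.
        components = {"dissolved": 0.40, "glacial flour": 1.05, "water + chlorophyll": 0.12}
        total = sum(components.values())
        for name, a in components.items():
            print(f"{name}: {a / total:.0%} of total attenuation")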

    The Relationship Between Alcohol-Related Content on Social Media and Alcohol Outcomes in Young Adults: A Scoping Review Protocol

    This scoping review will examine the association of alcohol-related social media engagement with alcohol consumption and alcohol-related problems in young adults.

    Pharmacokinetic profile of enrofloxacin and its metabolite ciprofloxacin in Asian house geckos (Hemidactylus frenatus) after single-dose oral administration of enrofloxacin

    The pharmacokinetics of enrofloxacin and its active metabolite ciprofloxacin were determined following oral administration in 21 Asian house geckos (Hemidactylus frenatus) at a dose of 10 mg/kg. Changes in enrofloxacin and ciprofloxacin plasma concentrations were quantified at regular intervals over 72 h (1, 2, 6, 12, 24, 48, and 72 h). Samples were analysed by high-pressure liquid chromatography (HPLC), and the enrofloxacin pharmacokinetic data underwent a two-compartment analysis. Because few ciprofloxacin plasma concentrations were above the lower limit of quantification (LLOQ), the ciprofloxacin data underwent non-compartmental analysis and the half-life was determined by Lineweaver-Burk plot and analysis. The mean half-lives (t½) of enrofloxacin and ciprofloxacin were 0.95 h (α) / 24.36 h (β) and 11.06 h, respectively; the areas under the curve (AUC0-24h) were 60.56 and 3.14 µg·h/mL, respectively; the maximum concentrations (Cmax) were 12.31 and 0.24 µg/mL, respectively; and the times to reach Cmax (Tmax) were 1 and 2 h, respectively. Enrofloxacin was minimally converted to the active metabolite ciprofloxacin, with ciprofloxacin contributing only 4.91% of the total fluoroquinolone exposure (AUC0-24h). Based on the pharmacokinetic indices and susceptibility breakpoints determined at mammalian body temperature, it is predicted that a single oral administration of enrofloxacin (10 mg/kg) would produce plasma concentrations effective against susceptible bacterial species inhibited by an enrofloxacin MIC ≤ 0.5 µg/mL in vitro, but additional studies will be required to determine its efficacy in vivo.
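
    For the enrofloxacin data the study fitted a two-compartment model; the sketch below is only a generic non-compartmental illustration, with invented concentrations, showing how indices such as AUC (trapezoidal rule), Cmax, Tmax, and a terminal half-life (log-linear fit of the last points) are derived from a concentration-time profile.

        # Python sketch with invented concentrations (not the study's data): a
        # generic non-compartmental calculation of AUC, Cmax, Tmax and a terminal
        # half-life from a plasma concentration-time profile.
        import numpy as np

        t = np.array([1, 2, 6, 12, 24, 48, 72], dtype=float)    # h
        c = np.array([12.3, 10.8, 7.9, 5.6, 3.1, 1.4, 0.7])     # ug/mL, illustrative

        # Linear trapezoidal AUC over the sampled interval.
        auc = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)          # ug*h/mL
        cmax, tmax = c.max(), t[c.argmax()]

        # Terminal elimination: fit ln C against t over the last points;
        # lambda_z is the negative slope and t1/2 = ln(2) / lambda_z.
        lambda_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
        t_half = np.log(2) / lambda_z

        print(f"AUC = {auc:.1f} ug*h/mL, Cmax = {cmax} ug/mL at Tmax = {tmax:.0f} h")
        print(f"terminal t1/2 = {t_half:.1f} h")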

    Investigation into the utility of flying foxes as bioindicators for environmental metal pollution reveals evidence of diminished lead but significant cadmium exposure

    Due to their large range across diverse habitats, flying-foxes are potential bioindicator species for environmental metal exposure. To test this hypothesis, blood spots, urine, fur, liver and kidney samples were collected from grey-headed flying-foxes (Pteropus poliocephalus) and black flying-foxes (P. alecto) from the Sydney basin, Australia. Concentrations of arsenic, cadmium, copper, lead, mercury and zinc and 11 other trace metals were determined using inductively coupled plasma mass spectrometry. As predicted, kidney and fur lead concentrations were lower compared to concentrations found in flying-foxes in the early 1990s, due to reduced environmental lead emissions. Tissue cadmium concentrations in flying-foxes were higher compared to previous studies of flying-foxes and other bat species, suggesting that flying-foxes were exposed to unrecognized cadmium sources. Identification of these sources should be a focus of future research. Urine concentrations of arsenic, cadmium, mercury, and lead were proportional to kidney concentrations. Given that urine can be collected from flying-foxes without handling, this demonstrates that many flying-foxes can be assessed for metal exposure with relative ease. The analysis of blood spots was not viable because of variable metal concentrations in the filter paper used. Fur concentrations of metals correlated poorly with tissue concentrations at the low levels of metals found in this study, but fur could still be a useful sample if flying-foxes are exposed to high levels of metals. Lastly, heat inactivation had minimal impact on metal concentrations in kidney and liver samples and should be considered as a tool to protect personnel working with biohazardous samples.

    Evidence of chronic cadmium exposure identified in the critically endangered Christmas Island flying-fox (Pteropus natalis)

    The Christmas Island flying-fox (Pteropus natalis) is the last native mammal on Christmas Island, and its population is in decline. Phosphate mining occurs across much of the eastern side of Christmas Island. The phosphate deposits are naturally rich in cadmium, and potentially other metals, which may be threatening the Christmas Island flying-fox population. To test this, concentrations of metals (cadmium, copper, iron, mercury, lead, and zinc) were measured in fur and urine collected from Christmas Island flying-foxes and interpreted concurrently with urinalysis and serum biochemistry data. In addition, metal concentrations in liver and kidney samples from two Christmas Island flying-foxes and associated histological findings from one of these individuals are reported. Fur cadmium concentrations were significantly higher in the Christmas Island flying-fox compared to concentrations found in flying-foxes in mainland Australia. Additionally, 30% of Christmas Island flying-foxes had urine cadmium concentrations exceeding maximum concentrations previously reported in flying-foxes in mainland Australia. Glucosuria and proteinuria were identified in two Christmas Island flying-foxes, suggestive of renal dysfunction. In one aged flying-fox, kidney cadmium concentrations were four-fold higher than toxic thresholds reported for domestic mammals. Microscopic evaluation of this individual identified bone lesions consistent with those described in laboratory animals with chronic cadmium poisoning. These results suggest that Christmas Island flying-foxes are being exposed to cadmium, and identification of these sources is recommended as a focus of future research. Unexpectedly, urine iron concentrations in Christmas Island flying-foxes were higher compared to previous studies of Australian mainland flying-foxes, which suggests that urinary excretion of iron may be an important aspect of iron homeostasis in this species, whose diet is iron-rich.

    Evaluation of treatment outcomes and associated factors among patients managed for tuberculosis in Vihiga County, 2012–2015

    Background: Tuberculosis (TB) treatment outcomes are used to evaluate program and patient success. Despite this, factors driving and sustaining high rates of poor TB treatment outcomes in Vihiga County are not well understood. Objective: To evaluate treatment outcomes and associated factors among patients managed for TB in Vihiga County between 2012 and 2015. Design: Descriptive cohort study. Setting: Vihiga County. Subjects: Notified TB patients >15 years of age who were on drug-susceptible TB treatment. Results: Of the 3288 eligible patients, more than half (1961, 60%) were male, 85% were from the public sector, and 23% were over 45 years of age. Among the TB patients, 2865 (87%) were successfully treated, 299 (9%) died, and 124 (4%) had other poor treatment outcomes. On multivariate analysis, advancing age (adjusted odds ratio (AOR) 3.3, 95% CI 2.03-5.38, P<0.001), HIV-positive status (AOR 1.78, 95% CI 1.27-2.49, P=0.001), previous TB treatment (AOR 1.78, 95% CI 1.2-2.49, P<0.001), and unknown HIV status (AOR 2.11, 95% CI 1.21-3.68, P=0.008) increased the risk of death. TB patients with positive sputum results at initiation of treatment (AOR 0.68, 95% CI 0.50-0.94, P=0.018) and those with a normal body mass index (BMI) (AOR 0.37, 95% CI 0.24-0.58, P<0.001) were less likely to die. Conclusion: While higher BMI and bacteriological confirmation reduced the risk of death, advancing age, unknown HIV status, HIV-positive status, and previous TB treatment increased the risk of death. We recommend early and accurate diagnosis of TB cases, TB/HIV integration, and active involvement of community health volunteers in TB management.
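
    The adjusted odds ratios above come from a multivariable logistic model. The cohort data are not available here, so the sketch below uses synthetic data and the statsmodels library purely to illustrate how AORs and their 95% confidence intervals are obtained; the column names and the choice of library are assumptions, not the authors' analysis.

        # Python sketch with synthetic data (not the Vihiga cohort): obtaining
        # adjusted odds ratios (AOR) and 95% CIs from a multivariable logistic
        # regression by exponentiating the fitted coefficients.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "died":         rng.integers(0, 2, n),   # outcome: death during treatment
            "age_over_45":  rng.integers(0, 2, n),
            "hiv_positive": rng.integers(0, 2, n),
            "retreatment":  rng.integers(0, 2, n),
        })

        X = sm.add_constant(df[["age_over_45", "hiv_positive", "retreatment"]])
        fit = sm.Logit(df["died"], X).fit(disp=False)

        aor = np.exp(fit.params)         # exponentiated coefficients = odds ratios
        ci = np.exp(fit.conf_int())      # 95% confidence limits on the OR scale
        print(pd.concat([aor.rename("AOR"),
                         ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))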

    Presentation, Treatment, and Prognosis of Secondary Melanoma within the Orbit

    Background: Ocular melanoma is a rare but often deadly malignancy that arises in the uvea, conjunctiva, or orbit. Uveal melanoma is the most common type, with conjunctival melanoma being the second most frequently observed. Melanoma accounts for 5–10% of metastatic or secondary orbital malignancies, but only a minute proportion of primary orbital neoplasia. The aim of this study was to characterize the clinical presentation, treatment, and prognosis in patients presenting with melanoma metastatic to, or secondary within, the orbit. Methods: A retrospective cohort study of patients presenting to a tertiary referral orbital unit from 1982 to 2016 was performed. Eighty-nine patients with a biopsy-proven diagnosis of melanoma within the orbit were included in the study. The clinical notes, radiological imaging, histology, surgical notes, and outcome data for the patients were reviewed. The main outcome measures of interest were the interval between primary malignant melanoma and orbital presentation, survival after orbital presentation, and clinical parameters (such as gender, age at presentation, and treatment approach). Results: The commonest primary source of tumor was choroidal melanoma, with conjunctival and cutaneous melanomas being relatively common; eyelid and naso-sinus tumors occurred in a few cases. The mean age at presentation with orbital disease was 65 years (31–97 years). The interval between primary malignancy and orbital disease (either local spread/recurrence or true metastatic disease) showed wide variability, with almost one-third of patients having orbital disease at the time of primary diagnosis, but others presenting many years later; indeed, the longest orbital disease-free interval was over 34 years. Twenty-three patients were considered to have had late orbital metastases, that is, at more than 36 months after the primary tumor. The median survival following presentation with orbital involvement was 24 months. Patients with tumors of cutaneous origin had the worst survival, whereas those with conjunctival tumors had the best prognosis. Conclusion: A high index of suspicion for orbital recurrence should be maintained in any patient with a prior history of melanoma, however distant the primary tumor is in site or time. Furthermore, giving a prognosis for orbital melanoma remains problematic due to highly variable survival, and further investigation will be necessary to understand the likely genetic basis of this phenomenon.
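
    The abstract does not state how the median survival of 24 months was estimated; assuming a standard Kaplan-Meier treatment of right-censored follow-up, the calculation can be sketched as follows (all survival times and censoring flags below are invented).

        # Python sketch with invented times (months) and censoring flags: a
        # hand-rolled Kaplan-Meier estimate of median survival.
        import numpy as np

        time = np.array([3, 7, 9, 12, 18, 24, 24, 30, 40, 55], dtype=float)
        event = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])   # 1 = died, 0 = censored

        order = np.argsort(time, kind="stable")   # tied events must precede tied censorings (true for this data)
        time, event = time[order], event[order]

        surv, at_risk, median = 1.0, len(time), None
        for t, d in zip(time, event):
            if d == 1:
                surv *= 1 - 1 / at_risk            # KM step for a single event
                if median is None and surv <= 0.5:
                    median = t                     # first time survival falls to 50% or below
            at_risk -= 1                           # one subject leaves the risk set

        print(f"Kaplan-Meier median survival ~ {median:.0f} months")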

    Letters and Corrections

    Letters submitted for possible publication must be identified as such and submitted with a signed transfer-of-copyright form (see ...).

    Life Expectancies: Population or Person?

    To the Editor: We applaud Sox and colleagues' analysis (1) of the role of exercise testing in coronary artery disease. We are concerned, however, with the interpretation of the results, specifically, how one assesses the average change in life expectancy for an entire cohort as opposed to a change for a single individual. The authors state, for example, that for 60-year-old men with at least one risk factor, the average increase in life expectancy for the cohort is 17 days. However, given that the prevalence of disease is presumed to be 0.15, the average change in life expectancy for someone who actually has the disease is 17/0.15, or 113 days (and would be even higher for those with left main disease) [this arithmetic is restated in the sketch after these letters]. This figure compares favorably with the 110 days of life gained by reducing diastolic blood pressure from 110 to 90 mm Hg in 60-year-old men with hypertension. We believe that many people do not appreciate the meaning of marginal differences in life expectancy and that there is great variation among judgments of what constitutes trivial extensions of life expectancy (2). The population-based number of 17 days may have a different meaning to some than the 113-day figure. Framing effects are well known (3), and the authors (1) include a section on other ways to express results using a variant of the number-needed-to-treat method of Laupacis and colleagues (4). We believe, however, that the average life expectancy gain of the cohort is an outcome measure that obscures information needed by patients to make decisions. We suggest that life expectancy gains for those with the disease should also be represented. In that way doctors and patients can best use population-based life expectancy figures for decision-making.

    In response: Screening and other diagnostic strategies are invariably applied to heterogeneous populations. There are always at least two subgroups: those with and those without the disease. In our report (1), the summary estimate of 17 days of life is a weighted average of the therapeutic outcomes for members of each group, where the weights are the probabilities of being in each of the groups. It would be misleading to present the larger figure for life gained (113 days for the 15% who prove to have disease) without reference to the fact that this estimate is conditional upon being in the small subgroup of participants who have disease. These groups could be divided further, for example, into subgroups of diseased patients based on their coronary anatomy. Then the benefit in the highest-risk subgroup would be even greater. However, all who are screened, including the majority who do not benefit, bear the costs and risks of the strategy. As an extreme example, a strategy of monthly computed tomographic scans to detect lung cancer might yield large benefits for the few subjects who are spared death, but to assess the value of such a strategy we must include the impact upon the many people who do not have disease and would not be saved (and, indeed, could be harmed) by the strategy. We agree that the way results are presented can influence the way they are interpreted. Readers who prefer more detailed analysis or alternative presentations of results can find in our article (1) the probabilities and life expectancies associated with each branch of the decision tree.
Because the results and assumptions of our analysis are described explicitly, our findings are subject to sensitivity analysis, review, revision, and debate. We feel that the cost-effectiveness ratio is the most informative, succinct summary of the results of the analysis. However, a summary estimate is just that: a summary. We hope that decision-makers will take advantage of the data presented in our analysis to inform their decision-making to the level of detail that best suits their own purposes. Alan ...

    Cardiac Rehabilitation Services and Risk Reduction

    To the Editor: The review by Drs. Greenland and Chu (1) on cardiac rehabilitation services and the accompanying position paper (2) developed by the Health and Public Policy Committee, American College of Physicians, base their discussions of program components on a definition of cardiac rehabilitation that focuses on the restorative function of this intervention. Although programs for cardiac rehabilitation were originally based on a restorative model of care ... Also of concern are the conclusions regarding exclusion criteria reached by the Health and Public Policy Committee in the accompanying position paper (2), which was also authored by Drs. Greenland and Chu. With recent angiographic evidence of atherosclerotic regression in the coronary, carotid, and peripheral circulations of human subjects who have lowered their serum cholesterol levels dramatically (5), it seems limiting to suggest that only those persons with a demonstrated "cardiac-related disability in physical capacity" are appropriate candidates for cardiac rehabilitative effort (2). In its concluding remarks, the Committee proposes a judicious selection of candidates for the estimated $108 million spent annually on cardiac rehabilitation efforts. This sum is minimal, however, when compared with the billions of dollars spent on palliation of the vascular complications of atherosclerosis, which have been shown to be modified, if not avoided, by the risk-factor modification efforts of cardiac rehabilitation programs. All would agree that controlling the growth of the national health-care budget is long overdue, yet limiting the growth of cardiac rehabilitation programs, rather than increasing programs to reduce risks for the population at large and for those already affected by coronary heart disease, can only increase the burden of atherosclerotic diseases and the costs of their "high tech" treatments. Jill Downing, MD

    In response: ... population of patients. I concur with Dr. Downing that a rationale exists upon which physicians and other health-care professionals might offer services for risk-factor reduction to many groups of patients not covered specifically by our critique of cardiac rehabilitation services. However, in the review (1) we stated clearly that our analysis was based on a "critical review of the published articles on the benefits and risks of cardiac rehabilitation services ... with primary emphasis on the role of cardiac rehabilitation after myocardial infarction." We also stated that "many survivors of myocardial infarction could theoretically benefit from organized attempts to help them stop smoking, lower their blood lipids, and control hypertension or other standard risk factors."
The question we intended to address in our review was whether the published medical literature supports the application of specialized cardiac rehabilitation services as an especially effective means of reducing cardiac risk factors in patients with coronary artery disease, not whether such treatments could be of theoretical benefit to this group of patients. As we noted, there is no body of published evidence to support the routine addition of treatments for risk-factor reduction to organized cardiac rehabilitation services for survivors of myocardial infarction. For that matter, I could find no evidence that routine efforts at risk-factor reduction in the form of organized programs are preferable to other forms of medical care for primary or secondary prevention of coronary artery disease. Consequently, our conclusions could not advocate or support the use of such treatments, even though, as Dr. Downing points out, such efforts have theoretical appeal. I applaud the efforts of those interested in primary, secondary, or tertiary prevention of coronary artery disease and challenge clinical scientists to promote research that will support the addition of such clinical services on a more routine basis in the future. Philip Greenland, MD

    SI and Presently Conventional Units

    To the Editor: Having grown up in the era of metric units, I am pleased to see that the medical literature is finally adopting SI units (le Système international d'unités). It is especially fitting now that research has clearly become more international, with workers from many countries often laboring on the same projects simultaneously. Problems always emerge with any attempt at modernization, however, including the chance that those unfamiliar with the new systems will be left behind. This particular problem surfaced for me after reading the article (1) on methylprednisolone therapy in alcoholic hepatitis by Carithers and colleagues. The study design, methods, eligibility criteria, and results were clearly stated, and the article was well worth clipping and saving. As I was putting it in my file, however, I tried mentally to compare the patients featured with my own. Just how severe was the hepatitis described? Was the bilirubin concentration 5 times normal? 20 times normal? No normal values were given in the new SI units. It is a simple matter to convert miles to kilometers, and pounds to kilograms, but to convert SGOT levels from international units per litre to microkatals per litre, bilirubin concentrations from micromoles per litre to milligrams per decilitre, and so on, requires the memorization of molecular weights and catalytic constants. It simply discourages the comparison of pre-SI with post-SI patients. To make matters worse, the patients featured in this article (1) were accrued from 1979 to 1984, a period in which no one in American medicine used SI units. Thus, the data must have been converted to SI format for publication, and the conversion factors then removed from the manuscript. There is no reason to close off the literature from those readers whose laboratory slips may still express values in milligrams per decilitre. Simply print the normal ranges for the new units when they are first mentioned in the article, and even the most Neanderthal of medical readers will have some point of reference from which to begin counting on his (opposable) thumbs.

    In response: We do offer authors the option of including present metric units with SI units, and only a fraction of authors ask for it.
Dr. Giacoppe's view is reasonable, and we shall suggest to authors that they include non-SI units, at least for measurements that are central evidence for a paper's main conclusion. -The Editor

    The Metaraminol Test and Adverse Cardiac Effects

    To the Editor: Familial Mediterranean fever (FMF) is a hereditary disorder of unknown cause. The diagnosis is not difficult when a family history is relevant and diagnostic criteria are met (1, 2). Barakat and colleagues ... A 38-year-old woman had a family history of familial Mediterranean fever and occasionally had diffuse abdominal tenderness. She reported no history of fever, chest pain, arthralgia, or skin manifestations, and she did not have hypertension or cardiac disease. In view of a possible diagnosis of familial Mediterranean fever, and with informed consent of the patient, a metaraminol test was done. A baseline, standard electrocardiogram (ECG) was obtained, and supine blood pressure, pulse rate, and temperature were recorded. Throughout the test period the patient was monitored. An intravenous infusion of normal saline, 500 mL, to which was added a 10-mg dose of metaraminol bitartrate, was given for 4 hours. Thirty minutes after beginning the test, the patient had chest pain with coronary characteristics and palpitations. An ECG showed a bigeminal rhythm. The metaraminol infusion was discontinued, and 5 minutes later the patient was asymptomatic, and the ECG was normal. A week later, an exercise ECG was negative. Although Barakat and colleagues (4) did not report any serious side effects in their experience with metaraminol tests (80 cases), we agree with Cattan and colleagues (5) that this test is not harmless and that it should not be used unless absolutely necessary, possibly in patients with paucisymptomatic forms of late-onset familial Mediterranean fever who do not have a relevant family history. We think the criteria established by Sohar and colleagues (1) and Eliakim and colleagues (2) and a family history are sufficient for the diagnosis of most cases.

    Collagenous Colitis and Histiocytic Lymphoma

    To the Editor: Collagenous colitis is a relatively rare cause of watery diarrhea and abdominal pain and is characterized histologically by a thickened band of collagen beneath the colonic mucosal epithelium. I report a case of collagenous colitis associated with diffuse histiocytic lymphoma, which responded to sulfasalazine and steroid enemas, even as the lymphoma progressed. A 78-year-old woman with a 1-year history of diarrhea was admitted in February 1988 for a presumed stroke with mild aphasia. In the past year she had had four to five loose stools per day. At admission, cultures of the stool and examination for ova and parasites were negative. Colonoscopy was done and was remarkable only for decreased haustral markings. A random biopsy sample showed thickened subepithelial collagen deposition consistent with collagenous colitis. Staining of the biopsy sample was negative for iron and amyloid. She was treated with sulfasalazine and steroid enemas with resolution of her diarrhea. Magnetic resonance imaging of her head showed a parietal lesion, a biopsy specimen of which showed diffuse histiocytic lymphoma. She was not considered a candidate for chemotherapy and had skin-flap closure with palliative cranial radiation therapy. Progression of her lymphoma was manifested by increased cervical lymphadenopathy. She died, and an autopsy was not done. A patient with Hodgkin lymphoma and collagenous colitis has been described (1).
In this case, collagenous colitis was thought to reflect a paraneoplastic phenomenon. The patient's diarrhea showed significant improvement after both treatment with prednisone-based chemotherapy and clinical improvement of her lymphoma. Collagenous colitis can have a variable course (2), and spontaneous resolution without therapy has been reported (3). Therefore, it is difficult to assess treatment success in patients with collagenous colitis. The patient with Hodgkin lymphoma had been treated with a corticosteroid, which has been used successfully in the past for treating collagenous colitis (2, 4, 5). It is unclear whether the collagenous colitis improved because of the prednisone or because of the improvement of the lymphoma after chemotherapy. A paraneoplastic phenomenon, however, would not be a consideration in my patient because her diarrhea resolved after treatment with sulfasalazine and local steroid enemas. Sulfasalazine has not been shown to have any effect on lymphomas. Although there may be systemic absorption of the steroid from the enemas, the patient's diarrhea improved even as the lymphoma progressed. Collagenous colitis is a rare disease and its exact incidence has yet to be determined. The finding of lymphoma in two patients with collagenous colitis may suggest that an association exists. David B. Edwards, MD

    Pancytopenia and Methotrexate with Trimethoprim-Sulfamethoxazole

    To the Editor: Kozarek and colleagues (1) found a dramatic clinical improvement in patients treated with methotrexate who had refractory Crohn colitis and an incomplete remission of chronic ulcerative colitis. Of 21 patients, 14 were also receiving either sulfasalazine or metronidazole (exact number of patients not mentioned). The risk of bone marrow suppression is increased when other antifolate drugs (derivatives of sulfonamides, trimethoprim) are used simultaneously with methotrexate. Besides additive folate antagonism, other pharmacologic mechanisms, such as competition with tubular secretion and displacement from albumin binding sites, play an important role in interactions of sulfonamides and methotrexate. Moreover, it was shown that sulfasalazine inhibits the hydrolysis of polyglutamyl folate and the intestinal transport of folate in patients with ulcerative colitis (2). Pancytopenia due to the combined use of methotrexate and trimethoprim-sulfamethoxazole has been reported in two patients with rheumatoid arthritis (3, 4). We report two additional cases of this severe side effect. In case 1, an 81-year-old woman had refractory rheumatoid arthritis and impaired renal function (creatinine, 166 µmol/L) and was treated with methotrexate, 5 mg weekly for 6 weeks. Cystitis (Escherichia coli) was treated with trimethoprim, 300 mg daily. One week after starting trimethoprim, bone marrow suppression developed (leukocytes, 1.9 × 10⁹/L; platelets, 15 × 10⁹/L; hemoglobin, 6.3 mmol/L). Both methotrexate and trimethoprim were discontinued. Blood cell counts returned to normal in 2 weeks. One month after discharge she died of severe bronchopneumonia (determined at autopsy). In case 2, a 75-year-old woman with refractory rheumatoid arthritis and impaired renal function (estimated creatinine clearance, 40 mL/min) was receiving methotrexate, 5 mg weekly. A recurrent cystitis was treated with trimethoprim-sulfamethoxazole. Shortly after beginning trimethoprim-sulfamethoxazole, bone marrow suppression developed (hemoglobin, 5.6 mmol/L; leukocytes, 1.6 × 10⁹/L; platelets, 23 × 10⁹/L).
A bone marrow biopsy specimen showed hypocellularity. Both drugs were discontinued, and therapy with leucovorin was begun; she recovered in several weeks. These two patients were not treated with the combination of sulfasalazine and methotrexate; however, other antifolate drugs were used in conjunction with methotrexate. Additive folate antagonism, independent of which antifolate drug was used simultaneously with methotrexate, seemed to play a central role in inducing bone marrow suppression in these patients. We do not recommend prescribing other drugs with antifolate action simultaneously with methotrexate. The toxicity of and the possibility of adverse drug interactions with methotrexate are increased in the presence of other risk factors such as old age, hypoalbuminemia, impaired renal function, and decreased bone marrow reserve (5). Acknowledgment: We thank Drs. J. Rasker, W. Hissink Muller, and J. Haverman for allowing us access to their patients.
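
    The conditional life-expectancy arithmetic in the first letter above can be restated compactly. As in that exchange, the sketch assumes the cohort-average gain accrues entirely to the subgroup with disease (prevalence p = 0.15):

        \[
        \overline{\Delta}_{\text{cohort}} = p\,\Delta_{\text{diseased}} + (1-p)\cdot 0
        \quad\Longrightarrow\quad
        \Delta_{\text{diseased}} = \frac{\overline{\Delta}_{\text{cohort}}}{p}
        = \frac{17\ \text{days}}{0.15} \approx 113\ \text{days}.
        \]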

    Effect of 3 to 5 years of scheduled CEA and CT follow-up to detect recurrence of colorectal cancer: the FACS Randomized Clinical Trial

    IMPORTANCE Intensive follow-up after surgery for colorectal cancer is common practice but is based on limited evidence. OBJECTIVE To assess the effect of scheduled blood measurement of carcinoembryonic antigen (CEA) and computed tomography (CT) as follow-up to detect recurrent colorectal cancer treatable with curative intent. DESIGN, SETTING, AND PARTICIPANTS Randomized clinical trial in 39 National Health Service hospitals in the United Kingdom; 1202 eligible participants were recruited between January 2003 and August 2009 who had undergone curative surgery for primary colorectal cancer, including adjuvant treatment if indicated, with no evidence of residual disease on investigation. INTERVENTIONS Participants were randomly assigned to 1 of 4 groups: CEA only (n = 300), CT only (n = 299), CEA+CT (n = 302), or minimum follow-up (n = 301). Blood CEA was measured every 3 months for 2 years, then every 6 months for 3 years; CT scans of the chest, abdomen, and pelvis were performed every 6 months for 2 years, then annually for 3 years; and the minimum follow-up group received follow-up if symptoms occurred. MAIN OUTCOMES AND MEASURES The primary outcome was surgical treatment of recurrence with curative intent; secondary outcomes were mortality (total and colorectal cancer), time to detection of recurrence, and survival after treatment of recurrence with curative intent. RESULTS After a mean 4.4 (SD, 0.8) years of observation, cancer recurrence was detected in 199 participants (16.6%; 95% CI, 14.5%-18.7%) overall; 71 of 1202 participants (5.9%; 95% CI, 4.6%-7.2%) were treated for recurrence with curative intent, with little difference according to Dukes staging (stage A, 5.1% [13/254]; stage B, 6.1% [34/553]; stage C, 6.2% [22/354]). Surgical treatment of recurrence with curative intent was 2.3% (7/301) in the minimum follow-up group, 6.7% (20/300) in the CEA group, 8% (24/299) in the CT group, and 6.6% (20/302) in the CEA+CT group. Compared with minimum follow-up, the absolute difference in the percentage of patients treated with curative intent in the CEA group was 4.4% (95% CI, 1.0%-7.9%; adjusted odds ratio [OR], 3.00; 95% CI, 1.23-7.33), in the CT group was 5.7% (95% CI, 2.2%-9.5%; adjusted OR, 3.63; 95% CI, 1.51-8.69), and in the CEA+CT group was 4.3% (95% CI, 1.0%-7.9%; adjusted OR, 3.10; 95% CI, 1.10-8.71). The number of deaths was not significantly different in the combined intensive monitoring groups (CEA, CT, and CEA+CT; 18.2% [164/901]) vs the minimum follow-up group (15.9% [48/301]; difference, 2.3%; 95% CI, −2.6% to 7.1%). CONCLUSIONS AND RELEVANCE Among patients who had undergone curative surgery for primary colorectal cancer, intensive imaging or CEA screening each provided an increased rate of surgical treatment of recurrence with curative intent compared with minimal follow-up; there was no advantage in combining CEA and CT. If there is a survival advantage to any strategy, it is likely to be small. TRIAL REGISTRATION isrctn.org Identifier: 4145854
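
    As a rough check on the headline comparison, the absolute difference and an unadjusted odds ratio can be recomputed from the reported counts; because the published odds ratios are adjusted, this simple two-by-two calculation only approximates them.

        # Python sketch: absolute difference and an unadjusted odds ratio with a
        # 95% Wald CI from the reported counts (curative-intent surgery: 20/300 in
        # the CEA group vs 7/301 with minimum follow-up).
        import math

        a, n1 = 20, 300          # events / total, CEA group
        c, n2 = 7, 301           # events / total, minimum follow-up group
        b, d = n1 - a, n2 - c

        risk_diff = a / n1 - c / n2
        odds_ratio = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se_log_or) for s in (-1, 1))

        print(f"absolute difference = {risk_diff:.1%}")
        print(f"unadjusted OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")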

    Optical angular momentum: Multipole transitions and photonics

    The premise that multipolar decay should produce photons uniquely imprinted with a measurably corresponding angular momentum is shown in general to be untrue. To assume a one-to-one correlation between the transition multipoles involved in source decay and detector excitation is to impose a generally unsupportable one-to-one correlation between the multipolar form of emission transition and a multipolar character for the detected field. It is specifically proven impossible to determine without ambiguity, by use of any conventional detector, and for any photon emitted through the nondipolar decay of an atomic excited state, a unique multipolar character for the transition associated with its generation. Consistent with the angular quantum uncertainty principle, removal of a detector from the immediate vicinity of the source produces a decreasing angular uncertainty in photon propagation direction, reflected in an increasing range of integer values for the measured angular momentum. In such a context it follows that when the decay of an electronic excited state occurs by an electric quadrupolar transition, for example, any assumption that the radiation so produced is conveyed in the form of “quadrupole photons” is experimentally unverifiable. The results of the general proof based on irreducible tensor analysis invite experimental verification, and they signify certain limitations on quantum optical data transmission
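
    The detector-distance argument can be tied to a commonly quoted heuristic form of the angle-angular momentum uncertainty relation; the rigorous statement for a periodic angle variable carries an additional boundary term, which is omitted in this sketch:

        \[
        \Delta\phi \,\Delta J_z \;\gtrsim\; \frac{\hbar}{2}
        \]

    A distant detector subtends a small angle, so \Delta\phi is small and the measured angular momentum component spans a correspondingly broad range of integer multiples of \hbar; this is the sense in which the detected field cannot be assigned a unique multipolar character.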