
    Immunosuppression for liver transplantation in HCV-infected patients: Mechanism-based principles

    We retrospectively analyzed 42 hepatitis C virus (HCV)-infected patients who underwent cadaveric liver transplantation under two strategies of immunosuppression: (1) daily tacrolimus (TAC) throughout and an initial cycle of high-dose prednisone (PRED) with subsequent gradual steroid weaning, or (2) intraoperative antithymocyte globulin (ATG) and daily TAC that was later space-weaned. After 36 ± 4 months, patient and graft survival in the first group was 18/19 (94.7%), with no cases of clinically serious HCV recurrence. In the second group, three-year patient survival was 12/23 (52%) and graft survival was 9/23 (39%); accelerated recurrent hepatitis was the principal cause of the poor results. The data were interpreted in the context of a recently proposed immunologic paradigm that applies equally to transplantation and viral immunity. In this framework, the disparate hepatitis outcomes reflected the different equilibria reached under the two immunosuppression regimens between the kinetics of viral distribution (systemic and intrahepatic) and the slowly recovering HCV-specific T-cell response. As a corollary, treatment of HCV-infected liver recipients should aim to predict, monitor, and maintain a favorable balance between virus distribution and the antiviral T-cell response, one in which immunopathology is avoided. In this view, favorable equilibria were accomplished in the nonweaned group of patients but not in the weaned group. In conclusion, since the anti-HCV response is unleashed when immunosuppression is weaned, treatment protocols that minimize disease recurrence in HCV-infected allograft recipients must balance the desire to reduce immunosuppression or induce allotolerance with the need to prevent antiviral immunopathology. Copyright © 2005 by the American Association for the Study of Liver Diseases

    Weaning of immunosuppression in long-term liver transplant recipients

    Seventy-two long-surviving liver transplant recipients were evaluated prospectively, including a baseline allograft biopsy, for weaning off immunosuppression. Thirteen were removed from candidacy because of chronic rejection (n=4), hepatitis (n=2), patient anxiety (n=5), or lack of cooperation by the local physician (n=2). The other 59, aged 12-68 years, had stepwise drug weaning with weekly or biweekly monitoring of liver function tests. Their original diagnoses were PBC (n=9), HCC (n=1), Wilson’s disease (n=4), hepatitides (n=15), Laennec’s cirrhosis (n=1), biliary atresia (n=16), cystic fibrosis (n=1), hemochromatosis (n=1), hepatic trauma (n=1), alpha-1-antitrypsin deficiency (n=9), and secondary biliary cirrhosis (n=1). Most of the patients had complications of long-term immunosuppression, of which the most significant were renal dysfunction (n=8), squamous cell carcinoma (n=2) or verruca vulgaris of the skin (n=9), osteoporosis and/or arthritis (n=12), obesity (n=3), hypertension (n=11), and opportunistic infections (n=2). When azathioprine was a third drug, it was stopped first. Otherwise, weaning began with prednisone, using the results of corticotropin stimulation testing as a guide. If adrenal insufficiency was diagnosed, patients reduced to <5 mg/day prednisone were considered off steroids. The baseline agents (azathioprine, cyclosporine, or FK506) were then gradually reduced in monthly decrements. Complete weaning was accomplished in 16 patients (27.1%) with 3-19 months of drug-free follow-up, is progressing in 28 (47.4%), and failed in 15 (25.4%), without graft losses or demonstrable loss of graft function from the rejections. This and our previous experience with self-weaned and other patients off immunosuppression indicate that a significant percentage of appropriately selected long-surviving liver recipients can unknowingly achieve drug-free graft acceptance.
Such attempts should not be contemplated until 5-10 years posttransplantation and then only with careful case selection, close monitoring, and prompt reinstitution of immunosuppression when necessary. © 1995 by Williams & Wilkins

    Preferences for treatment of Attention-Deficit/Hyperactivity Disorder (ADHD): a discrete choice experiment

    Background: While there is an increasing emphasis on patient empowerment and shared decision-making, the subjective values patients attach to attributes of their treatment still need to be measured and considered. This contribution seeks to define the properties of an ideal drug treatment from the perspective of individuals affected by Attention-Deficit/Hyperactivity Disorder (ADHD). Because information on patient needs is lacking in decision-makers' assessment of health services, individuals' preferences often play a subordinate role at present. Discrete Choice Experiments offer strategies for eliciting subjective values and making them accessible to physicians and other health care professionals. Methods: The evidence comes from a Discrete Choice Experiment (DCE) performed in 2007. After reviewing the literature on preferences in ADHD, we conducted a qualitative study with four focus groups consisting of five to eleven ADHD patients each. In order to achieve content validity, we aimed at collecting all factors relevant to an ideal ADHD treatment. In a subsequent quantitative study phase (n = 219), data were collected in an online or paper-and-pencil self-completed questionnaire. It included sociodemographic data, health status, and patients' preferences regarding therapy characteristics, using direct measurement (23 items on a five-point Likert scale) as well as a DCE (six factors in a fold-over design). Results: Those concerned were capable of clearly defining success criteria and expectations. In both the direct assessment and the DCE, respondents attached special significance to the improvement of their social situation and emotional state (relative importance 40%). Another essential factor was the desire for drugs with a long-lasting effect over the day (relative importance 18%). Other criteria, such as flexibility and discretion, were less important to the respondents (6% and 9%, respectively). Conclusion: The results show that ADHD patients and their family members have clear ideas of their needs. This is especially important against the backdrop of present discussions in the healthcare sector on the relevance of patient-reported outcomes (PROs) and shared decision-making. The combination of methods used in this study offers a promising strategy for eliciting subjective values and making them accessible to health care professionals in a manner that informs health choices.
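The relative-importance figures reported above can be illustrated with a minimal sketch of how a DCE typically derives them: each attribute's utility range is normalized by the sum of all ranges. The attribute names and part-worth values below are invented for illustration and are not the coefficients estimated in this study.

```python
# Hypothetical part-worth utilities per attribute level (illustrative only,
# not the coefficients estimated in the study described above).
part_worths = {
    "improvement of social/emotional state": [0.0, 1.00],
    "long-lasting effect over the day": [0.0, 0.45],
    "discretion": [0.0, 0.22],
    "flexibility": [0.0, 0.15],
}

def relative_importance(pw):
    # An attribute's importance is its utility range divided by the
    # sum of all attributes' utility ranges, so importances sum to 1.
    ranges = {attr: max(levels) - min(levels) for attr, levels in pw.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

importances = relative_importance(part_worths)
```

With these invented values, the social/emotional attribute dominates, mirroring the qualitative ranking reported in the abstract.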

    Dialysis and pediatric acute kidney injury: choice of renal support modality

    Dialytic intervention for infants and children with acute kidney injury (AKI) can take many forms. Whether patients are treated by intermittent hemodialysis, peritoneal dialysis, or continuous renal replacement therapy depends on specific patient characteristics. Modality choice is also determined by a variety of factors, including provider preference, available institutional resources, dialytic goals, and the specific advantages or disadvantages of each modality. Our approach to AKI has benefited from the derivation of generally accepted defining criteria put forth by the Acute Dialysis Quality Initiative (ADQI) group, known as the risk, injury, failure, loss, and end-stage renal disease (RIFLE) criteria. Modified pediatric RIFLE (pRIFLE) criteria have recently been validated. Common defining criteria will allow comparative investigation into the therapeutic benefits of different dialytic interventions. While this is an extremely important development in our approach to AKI, several fundamental questions remain, of which arguably the most important is: when, and with what type of dialytic modality, should pediatric AKI be treated? This review will provide an overview of the limited data with the aim of providing objective guidelines regarding modality choice for pediatric AKI. Comparisons in terms of cost, availability, safety, and target group will be reviewed.

    The young B-star quintuple system HD 155448

    Until now, HD 155448 has been known as a post-AGB star and listed as a quadruple system. In this paper, we study the system in depth and reveal that the B component itself is a binary and that the five stars HD 155448 A, B1, B2, C, and D likely form a comoving stellar system. From a spectroscopic analysis we derive the spectral types and find that all components are B dwarfs (A: B1V, B1: B6V, B2: B9V, C: B4Ve, D: B8V). Their stellar ages put them close to the ZAMS, and their distance is estimated to be ~2 kpc. Of particular interest is the C component, which shows strong hydrogen and forbidden emission lines at optical wavelengths. All emission lines are spatially extended in the eastern direction and appear to have a similar velocity shift, except for the [OI] line. In the IR images, we see an arc-like shape to the northeast of HD 155448 C. From the optical up to 10 micron, most circumstellar emission is located at distances between ~1.0 arcsec and 3.0 arcsec from HD 155448 C, while in the Q band the arc-like structure appears to be in contact with HD 155448 C. The Spitzer and VLT/VISIR mid-IR spectra show that the circumstellar material closest to the star consists of silicates, while polycyclic aromatic hydrocarbons (PAH) dominate the emission at distances >1 arcsec with bands at 8.6, 11.3, and 12.7 micron. We consider several scenarios to explain the unusual, asymmetric, arc-shaped geometry of the circumstellar matter. The most likely explanation is an outflow colliding with remnant matter from the star formation process. Comment: 19 pages, 12 figures, 9 tables. Accepted for publication in A&

    Digital Cranial Endocast of Hyopsodus (Mammalia, “Condylarthra”): A Case of Paleogene Terrestrial Echolocation?

    We here describe the endocranial cast of the Eocene archaic ungulate Hyopsodus lepidus AMNH 143783 (Bridgerian, North America), reconstructed from X-ray computed microtomography data. This represents the first complete cranial endocast known for Hyopsodontinae. The Hyopsodus endocast is compared to other known “condylarthran” endocasts, i.e. those of Pleuraspidotherium (Pleuraspidotheriidae), Arctocyon (Arctocyonidae), Meniscotherium (Meniscotheriidae), and Phenacodus (Phenacodontidae), as well as to basal perissodactyls (Hyracotherium) and artiodactyls (Cebochoerus, Homacodon). Hyopsodus presents one of the highest encephalization quotients of archaic ungulates and shows an “advanced version” of the basal ungulate brain pattern, with a mosaic of archaic characters, such as large olfactory bulbs, weak ventral expansion of the neopallium, and absence of neopallium fissuration, as well as more specialized ones, such as the relative reduction of the cerebellum compared to the cerebrum and the enlargement of the inferior colliculus. As in other archaic ungulates, midbrain exposure in Hyopsodus is extensive, but it exhibits a dorsally protruding, well-developed inferior colliculus, a feature unique among “Condylarthra”. A potential correlation between the development of the inferior colliculus in Hyopsodus and the use of terrestrial echolocation, as observed in extant tenrecs and shrews, is discussed. Detailed analysis of the overall morphology of the postcranial skeleton of Hyopsodus indicates a nimble, fast-moving animal that likely lived in burrows. This would be compatible with terrestrial echolocation used by the animal to investigate its subterranean habitat and/or to minimize predation during nocturnal exploration of the environment.
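The encephalization quotient invoked above is, in Jerison's classic formulation, the ratio of observed brain mass to the brain mass expected for a mammal of that body mass. A minimal sketch follows; the function name is ours, and the 0.12 * P^(2/3) allometry is Jerison's general mammalian baseline, not a value taken from this paper.

```python
def encephalization_quotient(brain_g, body_g):
    """Jerison-style EQ: observed brain mass divided by the brain mass
    expected from the classic mammalian allometry E = 0.12 * P**(2/3),
    with both masses in grams."""
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# A value near 1 means an average-sized brain for the body mass;
# values above 1 indicate a relatively enlarged brain.
```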

    The stellar and sub-stellar IMF of simple and composite populations

    The current knowledge on the stellar IMF is documented. The IMF appears to become top-heavy when the star-formation rate density surpasses about 0.1 Msun/(yr pc^3) on a pc scale, and it may become increasingly bottom-heavy with increasing metallicity and in increasingly massive early-type galaxies. It declines quite steeply below about 0.07 Msun, with brown dwarfs (BDs) and very-low-mass stars having their own IMF. The mass mmax of the most massive star formed in an embedded cluster of stellar mass Mecl correlates strongly with Mecl, a result of gravitationally driven but resource-limited growth and fragmentation-induced starvation. There is no convincing evidence whatsoever that massive stars form in isolation. Various methods of discretising a stellar population are introduced: optimal sampling leads to a mass distribution that perfectly represents the exact form of the desired IMF and the mmax-to-Mecl relation, while random sampling results in statistical variations of the shape of the IMF. The observed mmax-to-Mecl correlation and the small spread of IMF power-law indices together suggest that optimal sampling of the IMF may be a more realistic description of star formation than random sampling from a universal IMF with a constant upper mass limit. Composite populations on galaxy scales, which are formed from many pc-scale star formation events, need to be described by the integrated galactic IMF. This IGIMF varies systematically from top-light to top-heavy depending on galaxy type and star formation rate, with dramatic implications for theories of galaxy formation and evolution. Comment: 167 pages, 37 figures, 3 tables, published in Stellar Systems and Galactic Structure, Vol. 5, Springer. This revised version is consistent with the published version and includes additional references and minor additions to the text as well as a recomputed Table 1. ISBN 978-90-481-8817-
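The "random sampling" alternative discussed above can be sketched by inverse-transform draws from a single power-law IMF; repeating the draw shows how the most massive star then scatters between equal-mass clusters, the scatter that optimal sampling removes via the mmax-Mecl relation. The slope and mass limits below are illustrative assumptions (a Salpeter-like single slope), not the multi-part IMF of the article.

```python
import random

def sample_imf(n, alpha=2.35, m_min=0.5, m_max=150.0, seed=None):
    """Randomly sample n stellar masses (in Msun) from a single power-law
    IMF dN/dm ~ m**(-alpha) by inverse-transform sampling."""
    rng = random.Random(seed)
    k = 1.0 - alpha                      # exponent of the cumulative form
    a, b = m_min ** k, m_max ** k
    return [(a + rng.random() * (b - a)) ** (1.0 / k) for _ in range(n)]

# With random sampling, the heaviest star drawn varies from cluster to
# cluster of identical stellar mass; optimal sampling would fix it.
heaviest = [max(sample_imf(500, seed=s)) for s in range(5)]
```

The spread of `heaviest` across seeds is the statistical variation of the IMF's upper end that the abstract contrasts with the tight observed mmax-to-Mecl correlation.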

    TRY plant trait database - enhanced coverage and open access

    Plant traits (the morphological, anatomical, physiological, biochemical, and phenological characteristics of plants) determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research, spanning from evolutionary biology and community and functional ecology to biodiversity conservation, ecosystem and landscape management, restoration, biogeography, and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. The best species coverage is achieved for categorical traits, with almost complete coverage for 'plant growth form'. However, most traits relevant to ecology and vegetation modelling are characterized by continuous intraspecific variation and trait-environment relationships; these traits have to be measured on individual plants in their respective environments. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We therefore conclude that reducing data gaps and biases in the TRY database remains a key challenge, requiring a coordinated approach to data mobilization and trait measurement. This can only be achieved in collaboration with other initiatives.