    Negative symptoms predict high relapse rates and both predict less favorable functional outcome in first episode psychosis, independent of treatment strategy

    BACKGROUND: In first episode psychosis (FEP), baseline negative symptoms (BNS) and relapse both predict less favorable functional outcome. Relapse prevention is one of the most important goals of treatment. Apart from discontinuation of antipsychotics, the natural causes of relapse are unexplained. We hypothesized that BNS, apart from predicting worse functional outcome, might also increase relapse risk. METHODS: We performed a post-hoc analysis of 7-year follow-up data from a FEP cohort (n = 103) involved in a dose-reduction/discontinuation (DR) vs. maintenance treatment (MT) trial. We examined: 1) what predicted relapse, 2) what predicted functional outcome, and 3) if BNS predicted relapse, whether MT reduced relapse rates compared to DR. After remission, patients were randomly assigned to DR or MT for 18 months. Thereafter, treatment was uncontrolled. OUTCOMES: BNS and duration of untreated psychosis (DUP) predicted relapse. Number of relapses, BNS, and treatment strategy predicted functional outcome. BNS was the strongest predictor of relapse, while number of relapses was the strongest predictor of functional outcome, above BNS and treatment strategy. Overall and within MT, but not within DR, more severe BNS predicted significantly higher relapse rates. Treatment strategies did not make a difference in relapse rates, regardless of BNS severity. INTERPRETATION: BNS not only predicted worse functional outcome, but also relapses during follow-up. Since current low-dose maintenance treatment strategies did not prevent relapse proneness in patients with more severe BNS, resources should be deployed to find optimal treatment strategies for this particular group of patients.

    A Smart Screening Device for Patients with Mental Health Problems in Primary Health Care: Development and Pilot Study

    BACKGROUND: Adequate recognition of mental health problems is a prerequisite for successful treatment. Although most people tend to consult their general practitioner (GP) when they first experience mental health problems, GPs are not very well equipped to screen for various forms of psychopathology to help them determine clients' need for treatment. OBJECTIVE: In this paper, the development and characteristics of CATja, a computerized adaptive test battery built to facilitate triage in primary care settings, are described, and the first results of its implementation are reported. METHODS: CATja was developed in close collaboration with GPs and mental health assistants (MHAs). During implementation, MHAs were requested to appraise clients' rankings (N=91) on the domains to be tested and to indicate the treatment level they deemed most appropriate for clients before test administration. We compared the agreement between domain score appraisals and domain scores computed by CATja, and the agreement between the initial (before test administration) treatment level advice and the final treatment level advice. RESULTS: Raw agreements between MHAs' appraisals of clients' scores and clients' scores computed by CATja were mostly between .40 and .50 (Cohen kappas=.10-.20), and the raw agreement between "initial" and final treatment level advice was .65 (Cohen kappa=.55). CONCLUSIONS: Using CATja, caregivers can efficiently generate summaries of their clients' mental well-being, on which decisions about treatment type and care level may be based. Further validation research is needed.
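    The agreement statistic reported above, Cohen's kappa, corrects raw agreement for the agreement expected by chance, which is why it is lower than the raw proportions. A minimal stdlib sketch; the care-level labels and ratings below are hypothetical, not CATja data:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[cat] * c2[cat] for cat in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical care-level advice for 10 clients: MHA appraisal vs. CATja
mha   = ["low", "low", "mid", "high", "mid", "low", "high", "mid", "low", "mid"]
catja = ["low", "mid", "mid", "high", "mid", "low", "mid",  "mid", "low", "low"]
print(round(cohen_kappa(mha, catja), 2))  # 0.52 (raw agreement here is 0.70)
```

    With 7 of 10 matching labels the raw agreement is .70, but kappa drops to .52 once chance agreement (.38 for these marginals) is subtracted, mirroring the gap between the raw agreements and kappas reported above.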

    Searching for the optimal number of response alternatives for the distress scale of the four-dimensional symptom questionnaire

    BACKGROUND: The Four-Dimensional Symptom Questionnaire (4DSQ) is a self-report questionnaire designed to measure distress, depression, anxiety, and somatization. Prior to computing scale scores from the item scores, the three highest response alternatives ('Regularly', 'Often', and 'Very often or constantly present') are usually collapsed into one category to reduce the influence of extreme responding on item and scale scores. In this study, we evaluate the usefulness of this transformation for the distress scale based on a variety of criteria. METHODS: Specifically, using the Graded Response Model, we investigated the effect of this transformation on model fit, local measurement precision, and various indicators of the scale's validity, to obtain an indication of whether the current practice of recoding should be advocated or not. In particular, we investigated the effect on the convergent validity (operationalized by the General Health Questionnaire and the Maastricht Questionnaire), divergent validity (operationalized by the Neuroticism scale of the NEO-FFI), and predictive validity (operationalized as obtrusion with daily chores and activities, the Biographical Problem List and the Utrecht Burnout Scale) of the distress scale. RESULTS: Results indicate that recoding leads to (i) better model fit as indicated by lower mean probabilities of exact test statistics assessing item fit, (ii) small (<.02) losses in the sizes of various validity coefficients, and (iii) a decrease (DIFF (SE's) = .10-.25) in measurement precision for medium and high levels of distress. CONCLUSIONS: For clinical applications and applications in longitudinal research, the current practice of recoding should be avoided because recoding decreases measurement precision for medium and high levels of distress. It would be interesting to see whether this advice also holds for the three other domains of the 4DSQ.
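    The recoding step under evaluation (collapsing the three highest response alternatives into one category) can be sketched as follows, assuming the usual 0-4 raw coding of the five response alternatives; the item scores are made up for illustration:

```python
# Collapse 'Regularly' (2), 'Often' (3), and 'Very often or constantly
# present' (4) into a single category, keeping 'No' (0) and 'Sometimes' (1).
def recode(item_score):
    return min(item_score, 2)  # 0 -> 0, 1 -> 1, 2/3/4 -> 2

raw = [0, 1, 4, 3, 2, 0, 1]           # hypothetical raw item scores
recoded = [recode(s) for s in raw]
print(recoded)                         # [0, 1, 2, 2, 2, 0, 1]
print(sum(recoded))                    # recoded scale score: 8
```

    The transformation caps each item's contribution at 2, which is exactly why it blunts extreme responding but also discards information at the upper end of the scale, consistent with the precision loss reported for medium and high distress levels.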

    Identifying levels of general distress in first line mental health services: can GP- and eHealth clients' scores be meaningfully compared?

    BACKGROUND: The Four-Dimensional Symptom Questionnaire (4DSQ) (Huisarts Wetenschap 39: 538-47, 1996) is a self-report questionnaire developed in the Netherlands to distinguish non-specific general distress from depression, anxiety, and somatization. This questionnaire is often used in different populations and settings, and there are both paper-and-pencil and computerized versions. METHODS: We used item response theory to investigate whether the 4DSQ measures the same construct (structural equivalence) in the same way (scalar equivalence) in two samples of primary mental health care attendees: (i) clients who visited their General Practitioner and responded to the 4DSQ paper-and-pencil version, and (ii) eHealth clients who responded to the 4DSQ computerized version. Specifically, we investigated whether the distress items functioned differently in eHealth clients compared to General Practitioners' clients and whether these differences led to substantial differences at scale level. RESULTS: Results showed that, in general, structural equivalence holds for the distress scale. This means that the distress scale measures the same construct in both General Practitioners' clients and eHealth clients. Furthermore, although eHealth clients have higher observed distress scores than General Practitioners' clients, application of a multiple-group generalized partial credit model suggests that scalar equivalence holds. CONCLUSIONS: The same cutoff scores can be used for classifying respondents as having low, moderate and high levels of distress in both settings.
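    The practical upshot of scalar equivalence is that one set of cutoffs classifies respondents identically in both settings. A minimal sketch; the cutoff values and scores below are hypothetical, not the published 4DSQ thresholds:

```python
# Hypothetical distress cutoffs applied identically to GP and eHealth
# clients, as scalar equivalence permits (values are illustrative only).
def distress_level(score, low_max=10, moderate_max=20):
    if score <= low_max:
        return "low"
    if score <= moderate_max:
        return "moderate"
    return "high"

gp_scores = [4, 12, 25]       # made-up paper-and-pencil scale scores
ehealth_scores = [6, 18, 22]  # made-up computerized scale scores
print([distress_level(s) for s in gp_scores])       # ['low', 'moderate', 'high']
print([distress_level(s) for s in ehealth_scores])  # ['low', 'moderate', 'high']
```

    Had scalar equivalence failed, separate cutoffs per setting (or a score linking step) would have been needed before comparing the two groups.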

    Genetic properties of feed efficiency parameters in meat-type chickens

    BACKGROUND: Feed cost constitutes about 70% of the cost of raising broilers, but the efficiency of feed utilization has not kept up with the growth potential of today's broilers. Improvement in feed efficiency would reduce the amount of feed required for growth, the production cost, and the amount of nitrogenous waste. We studied residual feed intake (RFI) and feed conversion ratio (FCR) over two age periods to delineate their genetic inter-relationships. METHODS: We used an animal model combined with Gibbs sampling to estimate genetic parameters in a pedigreed random mating broiler control population. RESULTS: Heritability of RFI was 0.42-0.45. Thus selection on RFI is expected to improve feed efficiency and subsequently reduce feed intake (FI). Whereas the genetic correlation between RFI and body weight gain (BWG) at days 28-35 was moderately positive, it was negligible at days 35-42. Therefore, the timing of selection for RFI will influence the expected response. Selection for improved RFI at days 28-35 will reduce FI, but also increase growth rate. However, selection for improved RFI at days 35-42 will reduce FI without any significant change in growth rate. The nature of the pleiotropic relationship between RFI and FCR may be dependent on age, and consequently the molecular factors that govern RFI and FCR may also depend on the stage of development, or on the nature of resource allocation of FI above maintenance directed towards protein accretion and fat deposition. The insignificant genetic correlation between RFI and BWG at days 35-42 demonstrates the independence of RFI from the level of production, thereby making it possible to study the molecular, physiological and nutrient digestibility mechanisms underlying RFI without the confounding effects of growth. The heritability estimate of FCR was 0.49 and 0.41 for days 28-35 and days 35-42, respectively. CONCLUSION: Selection for FCR will improve the efficiency of feed utilization, but because of the genetic dependence of FCR and its components, selection based on FCR will reduce FI and increase growth rate. However, the correlated responses in both FI and BWG cannot be predicted accurately because of the inherent problem of FCR being a ratio trait.
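    The ratio-trait problem noted in the conclusion can be illustrated with a toy simulation (hypothetical parameters, not the authors' animal model): when birds are ranked on FCR = FI / BWG, selection pressure lands on both components at once, so the shift in each component depends on how variance is shared between them.

```python
import random

random.seed(1)

# Toy flock: rank 1000 birds on FCR = FI / BWG, keep the best (lowest) 20%.
birds = []
for _ in range(1000):
    bwg = random.gauss(60, 8)              # body-weight gain, g/day (assumed)
    fi = 1.8 * bwg + random.gauss(0, 10)   # feed intake partly tracks BWG
    birds.append((fi, bwg))

birds.sort(key=lambda b: b[0] / b[1])      # rank by FCR
selected = birds[:200]

def mean(xs):
    return sum(xs) / len(xs)

def mean_fcr(group):
    return mean([fi / bwg for fi, bwg in group])

print("mean FCR, all vs. selected:",
      round(mean_fcr(birds), 2), round(mean_fcr(selected), 2))
print("mean FI,  all vs. selected:",
      round(mean([fi for fi, _ in birds]), 1),
      round(mean([fi for fi, _ in selected]), 1))
print("mean BWG, all vs. selected:",
      round(mean([bwg for _, bwg in birds]), 1),
      round(mean([bwg for _, bwg in selected]), 1))
```

    Changing how the noise is split between FI and BWG changes which component the selected group differs on, even though mean FCR always improves; that is why correlated responses in FI and BWG cannot be predicted from the ratio alone.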

    Climate change impacts on banana yields around the world

    This is the author accepted manuscript; the final version is available from Nature Research via the DOI in this record. Data availability: All data used are publicly available and open access. All banana production data sources are listed in Supplementary Table 1. All climatic and topographic data sources are listed in the Methods. Nutritional diversity is a key element of food security [1,2,3]. However, research on the effects of climate change on food security has, thus far, focused on the main food grains [4,5,6,7,8], while the responses of other crops, particularly those that play an important role in the developing world, are poorly understood. Bananas are a staple food and a major export commodity for many tropical nations [9]. Here, we show that for 27 countries (accounting for 86% of global dessert banana production) a changing climate since 1961 has increased annual yields by an average of 1.37 t ha⁻¹. Past gains have been largely ubiquitous across the countries assessed, and African producers will continue to see yield increases in the future. However, global yield gains could be dampened or disappear, reducing to 0.59 t ha⁻¹ and 0.19 t ha⁻¹ by 2050 under the climate scenarios for Representative Concentration Pathways 4.5 and 8.5, respectively, driven by declining yields in the largest producers and exporters. By quantifying climate-driven and technology-driven influences on yield, we also identify countries at risk from climate change and those capable of mitigating its effects or capitalizing on its benefits. Funding: Biotechnology and Biological Sciences Research Council (BBSRC); European Union Horizon 2020.

    Higher fungal diversity is correlated with lower CO2 emissions from dead wood in a natural forest

    Wood decomposition releases almost as much CO2 to the atmosphere as does fossil-fuel combustion, so the factors regulating wood decomposition can affect global carbon cycling. We used metabarcoding to estimate the fungal species diversities of naturally colonized decomposing wood in subtropical China and, for the first time, compared them to concurrent measures of CO2 emissions. Wood hosting more diverse fungal communities emitted less CO2, with Shannon diversity explaining 26 to 44% of emissions variation. Community analysis supports a ‘pure diversity’ effect of fungi on decomposition rates and thus suggests that interference competition is an underlying mechanism. Our findings extend the results of published experiments using low-diversity, laboratory-inoculated wood to a high-diversity, natural system. We hypothesize that high levels of saprotrophic fungal biodiversity could be providing globally important ecosystem services by maintaining dead-wood habitats and by slowing the atmospheric contribution of CO2 from the world’s stock of decomposing wood. However, large-scale surveys and controlled experimental tests in natural settings will be needed to test this hypothesis
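    Shannon diversity, the predictor used above, can be computed directly from metabarcoding read assignments. A minimal stdlib sketch; the OTU labels and counts are made up for illustration, not the Chinese dead-wood data:

```python
import math
from collections import Counter

def shannon_diversity(otu_reads):
    """Shannon index H' = -sum(p_i * ln p_i) over OTU read proportions."""
    counts = Counter(otu_reads)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical OTU assignments from two pieces of decomposing wood
low_div  = ["OTU1"] * 90 + ["OTU2"] * 10                 # one dominant fungus
high_div = ["OTU1", "OTU2", "OTU3", "OTU4", "OTU5"] * 20  # five even fungi
print(round(shannon_diversity(low_div), 3))   # 0.325
print(round(shannon_diversity(high_div), 3))  # 1.609
```

    The index rises with both richness (more OTUs) and evenness (more equal read shares), which is why a community dominated by one fungus scores far lower than an even five-species community of the same total read depth.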

    Bats in the anthropogenic matrix: Challenges and opportunities for the conservation of Chiroptera and their ecosystem services in agricultural landscapes

    Intensification in land-use and farming practices has had largely negative effects on bats, leading to population declines and concomitant losses of ecosystem services. Current trends in land-use change suggest that agricultural areas will further expand, while production systems may either experience further intensification (particularly in developing nations) or become more environmentally friendly (especially in Europe). In this chapter, we review the existing literature on how agricultural management affects the bat assemblages and the behavior of individual bat species, as well as the literature on provision of ecosystem services by bats (pest insect suppression and pollination) in agricultural systems. Bats show highly variable responses to habitat conversion, with no significant change in species richness or measures of activity or abundance. In contrast, intensification within agricultural systems (i.e., increased agrochemical inputs, reduction of natural structuring elements such as hedges, woods, and marshes) had more consistently negative effects on abundance and species richness. Agroforestry systems appear to mitigate negative consequences of habitat conversion and intensification, often having higher abundances and activity levels than natural areas. Across biomes, bats play key roles in limiting populations of arthropods by consuming various agricultural pests. In tropical areas, bats are key pollinators of several commercial fruit species. However, these substantial benefits may go unrecognized by farmers, who sometimes associate bats with ecosystem disservices such as crop raiding. Given the importance of bats for global food production, future agricultural management should focus on “wildlife-friendly” farming practices that allow more bats to exploit and persist in the anthropogenic matrix so as to enhance provision of ecosystem services. 
Pressing research topics include (1) a better understanding of how local-level versus landscape-level management practices interact to structure bat assemblages, (2) the effects of new pesticide classes and GM crops on bat populations, and (3) how increased documentation and valuation of the ecosystem services provided by bats could improve attitudes of producers toward their conservation

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
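    One of the baselines named above, Term Frequency-Inverse Document Frequency with cosine similarity, can be sketched in a few lines of stdlib Python. The toy corpus and the simple whitespace tokenization are assumptions for illustration, not the RELISH data or its exact weighting scheme:

```python
import math
from collections import Counter

# Toy corpus: one seed document and two candidates.
docs = {
    "seed": "fungal diversity regulates wood decomposition rates",
    "a":    "fungal communities drive decomposition of dead wood",
    "b":    "banana yields respond to changing climate",
}

tokens = {k: v.split() for k, v in docs.items()}
doc_vocabs = [set(t) for t in tokens.values()]
n_docs = len(doc_vocabs)

def tfidf(doc_tokens):
    """Term frequency weighted by inverse document frequency."""
    vec = {}
    for term, tf in Counter(doc_tokens).items():
        df = sum(term in vocab for vocab in doc_vocabs)
        vec[term] = tf * math.log(n_docs / df)
    return vec

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vectors = {k: tfidf(t) for k, t in tokens.items()}
sims = {k: cosine(vectors["seed"], vectors[k]) for k in ("a", "b")}
print(sims)  # "a" shares weighted terms with the seed; "b" shares none
```

    Ranking candidates by this similarity against a seed article is the basic recipe the benchmark evaluates; production systems differ mainly in tokenization, term weighting (e.g. the BM25 saturation function), and the fields indexed (title vs. title/abstract).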