
    Ultrasound assessment of haemoperitoneum in ectopic pregnancy: derivation of a prediction model

    Abstract. Background: To derive an ultrasound-based prediction model for the quantification of haemoperitoneum in ectopic pregnancy (EP). Methods: Retrospective study of 89 patients operated on for EP between January 1999 and March 2003 in a French gynaecology and obstetrics department of a university hospital. Transvaginal sonograms and clinical and biological variables from patients with haemoperitoneum ≥ 300 ml at surgery were compared with those from patients with haemoperitoneum < 300 ml or no haemoperitoneum. Sensitivity, specificity, and positive and negative likelihood ratios were calculated for each parameter after appropriate dichotomization. Multiple logistic regression analysis was used to select the combination best at predicting haemoperitoneum ≥ 300 ml. Results: Three parameters independently predicted haemoperitoneum ≥ 300 ml: moderate to severe spontaneous pelvic pain, fluid above the uterine fundus or around the ovary on transvaginal ultrasound, and serum haemoglobin concentration < 10 g/dL. A woman with none of these three criteria would have a 5.3% probability of haemoperitoneum ≥ 300 ml. When two or more criteria were present, the probability of haemoperitoneum ≥ 300 ml reached 92.6%. Conclusion: The proposed model accurately predicted significant haemoperitoneum in patients diagnosed with EP.
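    The abstract's decision rule can be sketched as a simple criterion count. This is an illustrative sketch only, not the fitted logistic model: the function names and the handling of the one-criterion case (not reported in the abstract) are assumptions.

    ```python
    # Sketch of the three-criterion rule from the abstract above.
    # Only the 0-criterion (5.3%) and >=2-criterion (92.6%) probabilities
    # are reported; everything else here is illustrative.

    def count_criteria(pelvic_pain_moderate_or_severe: bool,
                       fluid_above_fundus_or_around_ovary: bool,
                       haemoglobin_g_dl: float) -> int:
        """Count how many of the three independent predictors are present."""
        return sum([pelvic_pain_moderate_or_severe,
                    fluid_above_fundus_or_around_ovary,
                    haemoglobin_g_dl < 10.0])

    def risk_band(n_criteria: int) -> str:
        """Map a criterion count to the probability bands given in the abstract."""
        if n_criteria == 0:
            return "low (~5.3% probability of haemoperitoneum >= 300 ml)"
        if n_criteria >= 2:
            return "high (~92.6% probability of haemoperitoneum >= 300 ml)"
        return "intermediate (single criterion; not reported in the abstract)"

    print(count_criteria(True, False, 9.2))  # -> 2 (pain present, Hb < 10 g/dL)
    print(risk_band(2))
    ```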

    A Unifying Model of Genome Evolution Under Parsimony

    We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describes all parsimonious interpretations of G, and this set can be explored with a few sampling moves. Comment: 52 pages, 24 figures
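    A DCJ operation, which this abstract builds on, cuts two adjacencies between gene extremities and rejoins the four loose ends in a new pairing. The adjacency encoding and helper below are a minimal illustrative sketch, not the paper's history-graph data structure.

    ```python
    # Minimal sketch of a double cut and join (DCJ) operation on a genome
    # encoded as a set of adjacencies between gene extremities, where
    # "1h" is the head of gene 1 and "2t" the tail of gene 2.

    def adj(x, y):
        """Unordered adjacency between two gene extremities."""
        return frozenset((x, y))

    def dcj(adjacencies, a, b, new1, new2):
        """Cut adjacencies a and b; rejoin their four extremities as new1, new2."""
        assert a in adjacencies and b in adjacencies
        assert set(a) | set(b) == set(new1) | set(new2), \
            "a DCJ must reuse the same four extremities"
        result = set(adjacencies)
        result.discard(a); result.discard(b)
        result.add(new1); result.add(new2)
        return result

    # Linear order 1 2 3; one DCJ inverts gene 2, giving 1 -2 3.
    genome = {adj("1h", "2t"), adj("2h", "3t")}
    after = dcj(genome, adj("1h", "2t"), adj("2h", "3t"),
                adj("1h", "2h"), adj("2t", "3t"))
    ```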

    Mental health and well-being of older adults living with HIV in sub-Saharan Africa: a systematic review

    Objective: In this systematic review, we aimed to summarise the empirical evidence on common mental disorders (CMDs), cognitive impairment, frailty and health-related quality of life (HRQoL) among people living with HIV aged ≥50 years (PLWH50+) residing in sub-Saharan Africa (SSA). Specifically, we document the prevalence and correlates of these outcomes. Design, data sources and eligibility criteria: The following online databases were systematically searched up to January 2021: PubMed, CINAHL, PsycINFO, Embase and Scopus. English-language publications on depression, anxiety, cognitive function, frailty and quality of life among PLWH50+ residing in SSA were included. Data extraction and synthesis: We extracted information, including study characteristics and main findings. These were tabulated, and a narrative synthesis approach was adopted, given the substantial heterogeneity among included studies. Results: A total of 50 studies from fifteen SSA countries met the inclusion criteria. About two-thirds of these studies came from Ethiopia, Uganda and South Africa. Studies on depression predominated (n=26), followed by cognitive impairment (n=13). Overall, PLWH50+ exhibited varying prevalence of depression (6%–59%), cognitive impairments (4%–61%) and frailty (3%–15%). The correlates of CMDs, cognitive impairment, frailty and HRQoL were rarely investigated; those reported were mostly sociodemographic variables, many of which were inconsistent across studies. Conclusions: This review documented an increasing number of published studies on HIV and ageing from SSA. However, the current evidence on mental health and well-being outcomes in PLWH50+ is inadequate to characterise the public health dimension of these impairments in SSA, because of heterogeneous findings, few well-designed studies and substantial methodological limitations in many of the available studies. Future work should use sufficiently large samples of PLWH50+, engage appropriate comparison groups, and harmonise the measurement of these outcomes using a standardised methodology to generate more robust prevalence estimates and confirm predictors.

    Sorting by reversals, block interchanges, tandem duplications, and deletions

    Abstract. Background: Finding sequences of evolutionary operations that transform one genome into another is a classic problem in comparative genomics. While most genome rearrangement algorithms assume that there is exactly one copy of each gene in both genomes, this does not reflect biological reality very well: most studied genomes contain duplicated gene content, which has to be removed before applying those algorithms. However, dealing with unequal gene content is a very challenging task, and only a few algorithms allow operations like duplications and deletions. Almost all of these algorithms restrict such operations to a fixed size. Results: In this paper, we present a heuristic algorithm to sort an ancestral genome (with unique gene content) into the genome of a descendant (with arbitrary gene content) by reversals, block interchanges, tandem duplications, and deletions, where tandem duplications and deletions are of arbitrary size. Conclusion: Experimental results show that our algorithm finds sorting sequences that are close to an optimal sorting sequence when the ancestor and the descendant are closely related. The quality of the results decreases as the genomes diverge further or the genome size increases. Nevertheless, the calculated distances give a good approximation of the true evolutionary distances.
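    Two of the four operations named above are easy to show compactly on a signed gene order. The list encoding (positive = forward strand, negative = reversed) is standard in genome rearrangement, but these helpers are an illustrative sketch, not the paper's algorithm.

    ```python
    # A genome as a signed permutation: sign encodes strand orientation.

    def reversal(genome, i, j):
        """Reverse the segment genome[i:j], flipping each gene's sign."""
        return genome[:i] + [-g for g in reversed(genome[i:j])] + genome[j:]

    def tandem_duplication(genome, i, j):
        """Insert a second copy of genome[i:j] directly after it (arbitrary size)."""
        return genome[:j] + genome[i:j] + genome[j:]

    g = [1, 2, 3, 4]
    print(reversal(g, 1, 3))            # [1, -3, -2, 4]
    print(tandem_duplication(g, 1, 3))  # [1, 2, 3, 2, 3, 4]
    ```

    Deletions are the inverse of the duplication step; block interchanges swap two non-overlapping segments and can be written in the same slicing style.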

    Ameliorative Effects of Soya Bean Oil and Vitamin C on Liver Enzymes in Ethanol-Induced Oxidative Stress in Wistar Rats

    Abstract: The protective potential of soya bean oil and vitamin C on Ethanol-induced oxidative stress i

    Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes

    Background: Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics comprising multiple duplicated fragments. This complex genomic organization complicates analysis of the evolutionary history of these sequences. One model proposed to explain this mosaic pattern is repeated aggregation and subsequent duplication of genomic sequences. Results: We describe a polynomial-time exact algorithm to compute duplication distance, a genomic distance defined as the most parsimonious way to build a target string by repeatedly copying substrings of a fixed source string. This distance models the process of repeated aggregation and duplication. We also describe extensions of this distance to include certain types of substring deletions and inversions. Finally, we provide a description of a sequence of duplication events as a context-free grammar (CFG). Conclusion: These new genomic distances will permit more biologically realistic analyses of segmental duplications in genomes.
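    The copy-only model above can be illustrated with a greedy decomposition: spell the target left to right using the longest substrings of the source. This is a sketch giving an upper bound on the number of copies, not the paper's polynomial-time exact algorithm for duplication distance.

    ```python
    # Greedy upper bound on the number of substring copies from a fixed
    # source string needed to spell a target string (copy-only model).
    # Illustrative only; optimal duplication distance may be smaller.

    def greedy_copy_count(source: str, target: str) -> int:
        copies, i = 0, 0
        while i < len(target):
            # Extend the longest prefix of target[i:] that occurs in source.
            j = i + 1
            while j <= len(target) and target[i:j] in source:
                j += 1
            if j == i + 1:  # target[i] never occurs in source
                raise ValueError(f"symbol {target[i]!r} absent from source")
            copies += 1
            i = j - 1
        return copies

    print(greedy_copy_count("abcd", "abccd"))  # 2: copy "abc", then "cd"
    print(greedy_copy_count("ab", "abab"))     # 2: copy "ab" twice
    ```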

    Genome aliquoting with double cut and join

    Abstract. Background: The genome aliquoting problem is, given an observed genome A with n copies of each gene, presumed to descend from an n-way polyploidization event from an ordinary diploid genome B followed by a history of chromosomal rearrangements, to reconstruct the identity of the original genome B'. The idea is to construct B', containing exactly one copy of each gene, so as to minimize the number of rearrangements d(A, B' ⊕ B' ⊕ ... ⊕ B') necessary to convert the genome B' ⊕ B' ⊕ ... ⊕ B' into A. Results: In this paper we make the first attempt to define and solve the genome aliquoting problem. We present a heuristic algorithm for the problem, as well as data from our experiments demonstrating its validity. Conclusion: The heuristic performs well, consistently giving a non-trivial result. The question of the existence or non-existence of an exact solution to this problem remains open.