
    Placental biometry for prediction of small for gestational age fetuses in low resource setting

    Background: Small for gestational age (SGA) refers to foetuses with a birth weight below the tenth centile for gestational age. Such foetuses are at increased risk of intrauterine foetal demise compared with others. The placenta plays a central role in supporting foetal growth. Researchers have emphasized three-dimensional sonographic placental volumetry as a predictor of SGA; this study focused on the role of two-dimensional ultrasonographic placental measurement in predicting SGA foetuses. Methods: A prospective study was conducted at the Department of Obstetrics and Gynecology, Maulana Azad Medical College, from November 2013 to February 2015. In singleton pregnancies at 18-22 weeks of gestation, placental biometry was performed in two dimensions. Maximal placental diameter (MaxPD) and maximal placental thickness (MaxPT) were recorded in two orthogonal planes, and mean placental diameter (MPD) and mean placental thickness (MPT) were calculated. At delivery, the neonate was classified by birth weight as appropriate for gestational age (AGA), SGA, or large for gestational age (LGA). MPD and MPT were analyzed as predictors of SGA. Results: Both MaxPDs and the MPD were significantly smaller in SGA pregnancies (all p ≤ 0.001). Similarly, both MaxPTs (p = 0.006 and p = 0.001) and the MPT (p < 0.001) were significantly smaller in SGA pregnancies. The ROC curve for combined placental biometry had the largest area under the curve (0.805). Conclusions: Placental measurements taken in mid-gestation are a valuable predictor of SGA. Measurement of placental diameter and thickness is quick and simple, and this approach should be explored further to develop a predictive model for growth-restricted foetuses.
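
    As an illustration only (not the study's actual analysis code), the Python sketch below shows how two orthogonal measurements could be averaged into MPD and MPT and how a combined biometry score might be evaluated with an ROC area under the curve. The column names (MaxPD_1, MaxPD_2, MaxPT_1, MaxPT_2, sga) and the logistic-regression combination are assumptions.

    # Minimal sketch, assuming a pandas DataFrame with two orthogonal
    # measurements per placenta and a binary SGA outcome column.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def add_placental_means(df: pd.DataFrame) -> pd.DataFrame:
        """Average the two orthogonal maximal measurements into MPD and MPT."""
        df = df.copy()
        df["MPD"] = df[["MaxPD_1", "MaxPD_2"]].mean(axis=1)  # mean placental diameter
        df["MPT"] = df[["MaxPT_1", "MaxPT_2"]].mean(axis=1)  # mean placental thickness
        return df

    def combined_biometry_auc(df: pd.DataFrame) -> float:
        """AUC of a combined MPD + MPT score for predicting SGA (sga = 1/0)."""
        X = df[["MPD", "MPT"]].to_numpy()
        y = df["sga"].to_numpy()
        # Combine the two measurements into one score with logistic regression,
        # then summarize discrimination with the ROC area under the curve.
        score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
        return roc_auc_score(y, score)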

    Bug Management Using Machine Learning

    Automated tests of software can often independently log different bugs for the same underlying problem (root cause). Manually identifying duplicate bugs is a source of toil for engineers. A related bug-management problem is bug routing, e.g., determining the right team or person to route a bug report to for debugging. This disclosure describes techniques for bug deduplication and bug routing based on machine learning (ML). Per the techniques, a binary ML classification model is trained to aggregate bugs that share a common root cause. Bugs in a class with a common root cause are deduplicated, e.g., represented by just one of the bugs in the class. Further, a multi-class ML model is trained to predict the right team for handling a new (incoming) bug.
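
    As a minimal sketch of the setup described above, the Python code below pairs a binary classifier (do two bug reports share a root cause?) with a multi-class classifier for team routing. The disclosure does not specify model architectures; TF-IDF text features, logistic regression, and the class names BugDeduplicator and BugRouter are assumptions for illustration.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    class BugDeduplicator:
        """Binary model: do two bug reports share the same root cause?"""
        def __init__(self):
            self.vec = TfidfVectorizer()
            self.clf = LogisticRegression(max_iter=1000)

        def fit(self, pairs, labels):
            # pairs: list of (report_a, report_b); labels: 1 = same root cause.
            a = self.vec.fit_transform([p[0] for p in pairs])
            b = self.vec.transform([p[1] for p in pairs])
            feats = np.abs((a - b).toarray())  # element-wise difference as pair features
            self.clf.fit(feats, labels)
            return self

        def duplicate_confidence(self, report_a: str, report_b: str) -> float:
            a = self.vec.transform([report_a])
            b = self.vec.transform([report_b])
            return self.clf.predict_proba(np.abs((a - b).toarray()))[0, 1]

    class BugRouter:
        """Multi-class model: which team should handle a new bug?"""
        def __init__(self):
            self.vec = TfidfVectorizer()
            self.clf = LogisticRegression(max_iter=1000)

        def fit(self, reports, teams):
            self.clf.fit(self.vec.fit_transform(reports), teams)
            return self

        def route(self, report: str) -> str:
            return self.clf.predict(self.vec.transform([report]))[0]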

    Comparison of neuroimaging by CT and MRI and correlation with neurological presentation in eclampsia

    Background: The objective of the study was to compare computed tomography (CT) and magnetic resonance imaging (MRI) findings in eclampsia patients with respect to neurological signs and symptoms. Methods: This was a prospective observational study of 25 patients with eclampsia; statistical analysis was done using Fisher's exact and chi-square tests. Results: All patients presented with antepartum or intrapartum eclampsia, with neurological features ranging from headache and altered consciousness to coma. On MR neuroimaging, transiently high T2 signal intensity was seen in the cerebral cortex and subcortical white matter, consistent with edema. On MR angiography, generalized vasospasm was also seen in 40% of cases. MRI correlated with the neurological presentation better than CT, with 90% sensitivity and 100% specificity. Conclusions: Symptoms such as visual blurring, loss of vision, and ophthalmological signs in eclampsia suggest occipital lobe involvement. MRI abnormalities in eclampsia correlate well with clinical findings compared with CT, and MRI can be a better imaging modality in eclampsia patients.
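
    As a hedged illustration of the statistical comparison described (Fisher's exact and chi-square tests, plus sensitivity and specificity), the Python sketch below uses SciPy on a 2x2 table. The counts in the table are placeholders invented for demonstration; they are not data from the study.

    from scipy.stats import fisher_exact, chi2_contingency

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
        """Sensitivity and specificity of an imaging finding against clinical presentation."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Placeholder 2x2 table: rows = imaging abnormal/normal,
    # columns = neurological feature present/absent (invented counts).
    table = [[12, 3],
             [2, 8]]
    odds_ratio, p_fisher = fisher_exact(table)      # exact test, suited to small samples
    chi2, p_chi2, dof, _ = chi2_contingency(table)  # chi-square (Yates correction for 2x2)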

    Integrating Bug Deduplication in Software Development and Testing

    A bug deduplicator identifies independently discovered bugs that have the same underlying cause. Deduplication reduces toil for the software team by reducing the number of bugs that developers need to examine. However, if a bug deduplicator incorrectly classifies a bug as a duplicate, human developers might ignore the bug, allowing it to escape to production. A tradeoff therefore exists between toil reduction and risk tolerance. This disclosure describes techniques that enable a software team to trade off the effort saved by removing bugs (e.g., auto-closing them so that humans save toil and time) against the risk of errors in a bug deduplicator. Custom settings and a confidence level that a bug is a duplicate are used to determine whether to log a particular bug, log it with comments, etc. The techniques enable the embedding of a bug deduplicator at suitable locations within a software development toolchain. The performance of the bug deduplicator can be fine-tuned in real time by analyzing its true-negative and false-positive metrics.
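
    The tradeoff described above can be expressed as a policy layer that maps the deduplicator's duplicate-confidence score to an action under team-specific settings. The Python sketch below is illustrative only; the threshold values and the names DedupPolicy and Action are assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        AUTO_CLOSE = "auto_close_as_duplicate"        # maximum toil savings, highest risk
        LOG_WITH_COMMENT = "log_with_dedup_comment"   # log, but flag the likely duplicate
        LOG = "log_normally"                          # no dedup action, lowest risk

    @dataclass
    class DedupPolicy:
        # Team-specific settings expressing risk tolerance (values are assumptions).
        auto_close_threshold: float = 0.95
        comment_threshold: float = 0.70

        def decide(self, duplicate_confidence: float) -> Action:
            """Map a duplicate-confidence score in [0, 1] to a logging action."""
            if duplicate_confidence >= self.auto_close_threshold:
                return Action.AUTO_CLOSE
            if duplicate_confidence >= self.comment_threshold:
                return Action.LOG_WITH_COMMENT
            return Action.LOG

    # The thresholds could be tuned over time from observed true-negative and
    # false-positive rates, as the disclosure suggests.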