
    A comparative analysis of the findings of postmortem computed tomography scan and traditional autopsy in traumatic deaths: Is technology mutually complementing or exclusive?

    Background: Postmortem examination is indispensable for ascertaining the cause of an unnatural death and as such is mandated by law. Traditional autopsy (TA) has long proved its worth in establishing the cause of death despite some inherent difficulties and challenges. The increasing application of modern radiology to postmortem examination has, however, opened a new arena that overcomes some of the difficulties of TA. The published literature contains conflicting reports on the superiority of one postmortem modality over the other. Objective: The objective of this study was to compare the findings of postmortem computed tomography (CT) and TA in victims of traumatic death and to analyze whether postmortem CT can replace TA. Materials and Methods: All patients with a history of trauma who were declared dead on arrival in the emergency department underwent a full-body CT scan, which was reported by an experienced radiologist. A forensic expert subsequently performed TA. The physician performing the autopsy was blinded to the CT findings and vice versa. An individual who was not part of the radiology or forensic team entered the CT and autopsy findings in a predesigned pro forma, and an unbiased assessor then compared the findings of the two modalities and analyzed the results. McNemar's test was used to ascertain the significance of differences between the findings reported by the two modalities, with P = 0.05 considered statistically significant. Agreement or disagreement on the cause of death reported by the two modalities was also assessed. Results: About 95% of the deceased were males. The mean age of the corpses was 35 years (range 16-67 years). CT was superior in detecting most bony injuries, air-containing lesions, hemothorax, and hemoperitoneum. However, autopsy was more sensitive for soft-tissue and solid visceral injuries. Both modalities were equally helpful in identifying extremity fractures. Statistically significant agreement (>95%) on the cause of death between the two modalities was not achieved in any trauma patient. Conclusion: Postmortem CT is promising for reporting injuries in traumatic deaths and can significantly complement the conventional autopsy. However, at present, it cannot be considered a replacement for TA.
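    McNemar's test, as used above, compares paired binary findings (injury detected / not detected by each modality on the same body) and operates only on the discordant pairs. A minimal sketch of the exact (binomial) form of the test; the discordant counts below are hypothetical, not taken from the study:

    ```python
    from math import comb

    def mcnemar_exact(b: int, c: int) -> float:
        """Exact two-sided McNemar p-value for paired binary outcomes.

        b = injuries detected by CT but missed by autopsy
        c = injuries detected by autopsy but missed by CT
        Concordant pairs (both detect, or both miss) do not enter the statistic.
        """
        n = b + c
        k = min(b, c)
        # two-sided exact p-value: double the tail of Binomial(n, 0.5)
        p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
        return min(p, 1.0)

    # hypothetical discordant counts for one injury category
    p = mcnemar_exact(b=9, c=2)
    print(f"p = {p:.4f}")  # compare against the study's threshold P = 0.05
    ```

    The same quantity is available as `statsmodels.stats.contingency_tables.mcnemar` with `exact=True`; the hand-rolled version is shown only to make the discordant-pair logic explicit.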

    Pattern of Relapse and Treatment Response in WNT-Activated Medulloblastoma

    Over the past decade, WNT-activated (WNT) medulloblastoma (MB) has been identified as a candidate for therapy de-escalation based on excellent survival; however, a paucity of relapses has precluded additional analysis of markers of relapse. To address this gap in knowledge, an international cohort of 93 molecularly confirmed WNT MB was assembled, in which 5-year progression-free survival is 0.84 (95% CI, 0.763-0.925), with 15 relapsed individuals identified. Maintenance chemotherapy is identified as a strong predictor of relapse, with individuals receiving high doses of cyclophosphamide or ifosfamide having only one very late molecularly confirmed relapse (p = 0.032). The anatomical location of recurrence is metastatic in 12 of 15 relapses, with 8 of the 12 metastatic relapses in the lateral ventricles. Maintenance chemotherapy, specifically cumulative cyclophosphamide dose, is a significant predictor of relapse across WNT MB. Future efforts to de-escalate therapy need to carefully consider not only the radiation dose but also the chemotherapy regimen and the propensity for metastatic relapses.
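    The 5-year progression-free survival figure quoted above is the kind of quantity a Kaplan-Meier product-limit estimator produces from censored follow-up data. A minimal sketch with hypothetical follow-up times (not the cohort's actual data):

    ```python
    def kaplan_meier(times, events):
        """Product-limit estimate of survival.

        times:  follow-up in years
        events: 1 = relapse observed, 0 = censored
        Returns the survival curve as (time, survival) points at each event.
        """
        at_risk = len(times)
        s, curve = 1.0, []
        for t, e in sorted(zip(times, events)):
            if e:                      # survival drops only at observed events
                s *= (at_risk - 1) / at_risk
                curve.append((t, s))
            at_risk -= 1               # censored subjects also leave the risk set
        return curve

    # hypothetical follow-up (years) and relapse indicators for six subjects
    curve = kaplan_meier([1.2, 2.5, 3.1, 4.0, 5.5, 6.0], [1, 0, 1, 0, 0, 0])
    print(curve)
    ```

    In practice a library such as lifelines (`KaplanMeierFitter`) would be used, which also supplies the confidence interval reported in the abstract.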

    A deep learning odyssey on structured natural language processing

    The availability of an immense amount of unstructured text has surged demand for intelligent natural language processing (NLP) systems. Specifically, these systems are expected to represent, extract, transfer, and utilize knowledge in order to succeed on language-related problems. In this work, we direct our efforts toward exploring the various forms of knowledge, especially linguistic and latent structures, available to an NLP system. Additionally, we investigate the significance of these structures in language tasks ranging from the highly researched semantic matching to the less explored abstractive text summarization, and more. In Chapter 2, we study the interaction between explicit linguistic structures and the implicit structure revealed by attention mechanisms. This investigation on various semantic matching datasets, especially question answering, displays significant patterns of substitutability between the two types of structure (explicit and implicit), and may extend to other modalities. As a result, we develop a multi-view progressive attention mechanism general enough to operate on various linguistic structures of text. Moving further, to represent and extract latent structure from a collection of documents, we propose a family of vector-quantization-based topic models (VQ-TMs) in Chapter 3. Specifically, VQ-TMs use dense, global topic embeddings that are homomorphic to word embeddings, and exploit the vector-quantization (VQ) technique to promote discreteness in the overall architecture. The learned topics are successfully transferred to a different task, code generation. Although VQ aids in learning discrete topic embeddings, the method is built upon a geometric intuition, and VQ-TMs use the learned topic representations only as reference knowledge in a downstream task. Some natural questions arise: Is it possible to exploit a probabilistic intuition to infuse discreteness while learning topic representations? Can the learned topics be leveraged to control sentence generation and improve performance in aspect-aware item recommendation? To push the envelope further, we explore these ideas in Chapter 4.
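    The vector-quantization step that Chapter 3's topic models rely on is, at its core, nearest-codebook assignment: each dense vector is snapped to the closest entry of a finite codebook. A minimal NumPy sketch with toy vectors (not the thesis's actual model, where the codebook would be learned):

    ```python
    import numpy as np

    def vq_assign(vectors, codebook):
        """Map each dense vector to the index of its nearest codebook entry,
        the discretization step at the heart of VQ-based topic models."""
        # squared Euclidean distance from every vector to every code
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return d.argmin(axis=1)

    # toy example: two "document" vectors snap onto two "topic" codes
    codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
    vectors = np.array([[1.0, 1.5], [9.0, 8.5]])
    assignments = vq_assign(vectors, codebook)
    ```

    In a trained VQ model the codebook is updated during learning (e.g., via a straight-through gradient estimator); it is frozen here purely to isolate the assignment step that makes the representation discrete.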

    Derivation of the Basic Constants of Three-Phase Inductor Alternators in Terms of Winding Parameters


    Transient Analysis of One Type of Inductor Alternator Under Unsymmetrical Short Circuits


    Correlation of the severity of atopic dermatitis with growth retardation in pediatric age group

    Evidence of growth retardation was recorded in cases of severe atopic dermatitis before the advent of corticosteroid therapy and can therefore be attributed to the disease itself. The present study attempts to confirm the validity of such claims in the Indian scenario. A total of 100 children with atopic dermatitis and 100 age- and sex-matched controls were evaluated for growth status and compared at the department of pediatric dermatology, Institute of Child Health. A significant proportion of children with atopic dermatitis showed growth retardation with respect to weight and height compared with the controls. A statistically significant association was observed between growth retardation and both the body surface area involved and the severity of atopic dermatitis, while no such association was found with either sex or a personal history of atopy. Our study shows that growth retardation occurs more frequently in patients with atopic dermatitis than in nonatopic children from the same population, suggesting an association between growth and the disease process. We also found that the growth retardation occurring in cases of atopic dermatitis is associated with both the severity of the disease and the surface area of involvement.