
    ExClaim: Explainable Neural Claim Verification Using Rationalization

    With the advent of deep learning, text generation language models have improved dramatically, producing text of a quality similar to human-written text. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources that are strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes whose decision-making process, and the steps taken to arrive at a final prediction, are obfuscated from the user. We introduce a novel claim verification approach, ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves an F1 score of 0.93. It also provides explanations for subtasks to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems. Comment: Published at 2022 IEEE 29th ST
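
    The question-answer framing above lends itself to a short illustration. Below is a minimal sketch of a rationalized claim-verification pipeline, with the verdict model framed as question answering and paired with a natural language rationale; the model calls are stubs, and every name in it (Verdict, answer_question, generate_rationale) is hypothetical, not ExClaim's released code.

        from dataclasses import dataclass

        @dataclass
        class Verdict:
            label: str       # e.g. "SUPPORTED" / "REFUTED" / "NOT ENOUGH INFO"
            rationale: str   # natural language explanation of the decision

        def answer_question(question: str, evidence: str) -> str:
            # Stub for a QA model over retrieved evidence; a real system
            # would call a trained reader here.
            return "SUPPORTED" if "confirmed" in evidence.lower() else "NOT ENOUGH INFO"

        def generate_rationale(claim: str, evidence: str, label: str) -> str:
            # Stub for the rationalization step: justify the verdict in
            # natural language rather than with raw scores.
            return f"The claim was judged {label} because the evidence states: {evidence}"

        def verify(claim: str, evidence: str) -> Verdict:
            question = f"Is the following claim true? {claim}"
            label = answer_question(question, evidence)
            return Verdict(label, generate_rationale(claim, evidence, label))

        print(verify("The drug was approved in 2020.",
                     "Regulators confirmed approval of the drug in March 2020."))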

    An Adaptive Technique to Predict Heart Disease Using Hybrid Machine Learning Approach

    Cardiovascular disease is among the most prevalent causes of death in today's society, and it is extremely hard to predict using clinical data analysis. Machine learning (ML) has proved to be useful for supporting judgement and prediction with the enormous amount of data produced by the healthcare sector. Furthermore, recent developments in other Internet of Things (IoT) sectors have demonstrated uses of machine learning. Several studies have examined the use of ML for heart disease prediction. In this research, we describe a novel method that can improve the precision of heart disease prognosis by highlighting essential traits. The forecasting models are created from numerous data combinations and well-known classification algorithms. We improve on prior approaches with a heart disease forecasting method that combines a random forest and a linear model (HRFLM), achieving a respectable accuracy of 88.7%.
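
    The abstract names a hybrid of a random forest and a linear model (HRFLM) without detailing how the two are combined. One plausible reading, sketched below as an assumption rather than the paper's method, is a soft-voting ensemble in scikit-learn; the synthetic data merely stands in for a real heart-disease dataset.

        # Illustrative random-forest + linear-model hybrid as a soft-voting
        # ensemble; not the paper's exact HRFLM construction.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Synthetic tabular data with 13 features, echoing common
        # heart-disease datasets; replace with real clinical records.
        X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        hybrid = VotingClassifier(
            estimators=[
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lm", LogisticRegression(max_iter=1000)),
            ],
            voting="soft",  # average the predicted probabilities of both models
        )
        hybrid.fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, hybrid.predict(X_te)))

    Soft voting averages the two models' predicted probabilities, letting the linear model temper the forest's decisions; stacking one model on the other's outputs would be another defensible reading of "hybrid".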

    Rationalization for Explainable NLP: A Survey

    Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question-answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process it takes to arrive at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge. These factors led rationalization to emerge as a more accessible explainable technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Since rationalization is a relatively new field, its literature is disorganized. As the first survey of its kind, this work analyzes rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.

    Head-to-head Comparison of Relevant Cell Sources of Small Extracellular Vesicles for Cardiac Repair: Superiority of Embryonic Stem Cells

    Small extracellular vesicles (sEV) derived from various cell sources have been demonstrated to enhance cardiac function in preclinical models of myocardial infarction (MI). Head-to-head comparisons of sEV sources for cardiac repair nonetheless remain limited, so the aim of this study was to compare different sources and determine the most effective one. We comprehensively assessed the efficacy of sEV obtained from human primary bone marrow mesenchymal stromal cells (BM-MSC), human immortalized MSC (hTERT-MSC), human embryonic stem cells (ESC), ESC-derived cardiac progenitor cells (CPC), human ESC-derived cardiomyocytes (CM), and human primary ventricular cardiac fibroblasts (VCF) in in vitro models of cardiac repair. ESC-derived sEV (ESC-sEV) exhibited the best pro-angiogenic and anti-fibrotic effects in vitro. We then evaluated the functionality of the sEV with the most promising in vitro performance in a murine model of myocardial ischaemia-reperfusion injury (IRI) and analysed their RNA and protein compositions. In vivo, ESC-sEV provided the most favourable outcome after MI by reducing adverse cardiac remodelling through down-regulating fibrosis and increasing angiogenesis. Furthermore, transcriptomic and proteomic characterizations of sEV derived from hTERT-MSC, ESC, and CPC revealed factors in ESC-sEV that potentially drove the observed functions. In conclusion, ESC-sEV holds great promise as a cell-free treatment for promoting cardiac repair following MI.

    Human Action Recognition In Video Data For Surveillance Applications

    Detecting human actions using a camera has many possible applications in the security industry. When a human performs an action, his/her body goes through a signature sequence of poses. To detect these pose changes, and hence the activities performed, a pattern recogniser needs to be built into the video system. Due to the temporal nature of the patterns, Hidden Markov Models (HMM), used extensively in speech recognition, were investigated. Initially a gesture recognition system was built using novel features. These features were obtained by approximating the contour of the foreground object with a polygon and extracting the polygon's vertices. A Gaussian Mixture Model (GMM) was fit to the vertices obtained from a few frames, and the parameters of the GMM itself were used as features for the HMM. A more practical activity detection system was then built using a more sophisticated foreground segmentation algorithm, immune to varying lighting conditions and permanent changes to the foreground. The foreground segmentation algorithm models each of the pixel values using clusters and continually uses incoming pixels to update the cluster parameters. Cast shadows were identified and removed by assuming that shadow regions were less likely to produce strong edges in the image than real objects, and that this likelihood further decreases after colour segmentation. Colour segmentation itself was performed by clustering together pixel values in the feature space using a gradient ascent algorithm called mean shift. More robust features, in the form of mesh features, were also obtained by dividing the bounding box of the binarised object into grid elements and calculating the ratio of foreground to background pixels in each of the grid elements. These features were vector quantized to reduce their dimensionality, and the resulting symbols were presented as features to the HMM to achieve a recognition rate of 62% for an event involving a person writing on a white board. The recognition rate increased to 80% for the "seen" person sequences, i.e. the sequences of the person used to train the models. With a fixed lighting position, the lack of a shadow removal subsystem improved the detection rate, because the shadows had a consistent profile in both the training and testing sequences. Even with a lower recognition rate, the shadow removal subsystem was considered an indispensable part of a practical, generic surveillance system.
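
    The mesh features described above are straightforward to illustrate. The sketch below divides the bounding box of a binarised foreground mask into grid cells and returns the fraction of foreground pixels per cell (a close variant of the foreground-to-background ratio described); the grid size is an assumption, and the subsequent vector quantization step is omitted.

        import numpy as np

        def mesh_features(mask: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
            # Crop to the bounding box of the foreground object.
            ys, xs = np.nonzero(mask)
            box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            h, w = box.shape
            feats = np.empty(rows * cols)
            for i in range(rows):
                for j in range(cols):
                    cell = box[i * h // rows:(i + 1) * h // rows,
                               j * w // cols:(j + 1) * w // cols]
                    # Mean of a 0/1 mask = fraction of foreground pixels.
                    feats[i * cols + j] = cell.mean() if cell.size else 0.0
            return feats

        mask = np.zeros((60, 40), dtype=np.uint8)
        mask[10:50, 15:30] = 1  # toy foreground blob
        print(mesh_features(mask).round(2))

    These per-cell values would then be vector quantized into discrete symbols before being presented to the HMM, as the abstract describes.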

    Adaptive filter algorithms for channel equalization

    Equalization techniques compensate for the time dispersion introduced by communication channels and combat the resulting inter-symbol interference (ISI) effect. Given a channel of unknown impulse response, the purpose of an adaptive equalizer is to operate on the channel output such that the cascade connection of the channel and the equalizer provides an approximation to an ideal transmission medium. Typically, adaptive equalizers used in digital communications require an initial training period, during which a known data sequence is transmitted. A replica of this sequence is made available at the receiver in proper synchronism with the transmitter, thereby making it possible for adjustments to be made to the equalizer coefficients in accordance with the adaptive filtering algorithm employed in the equalizer design. This type of equalization is known as Non-Blind equalization. However, in practical situations, it would be highly desirable to achieve complete adaptation without access to a desired response. Clearly, some form of Blind equalization has to be built into the receiver design. Blind equalizers simultaneously estimate the transmitted signal and the channel parameters, which may even be time-varying. The aim of the project is to study the performance of various adaptive filter algorithms for blind channel equalization through computer simulations.
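
    As a concrete illustration of blind equalization, the following is a minimal sketch of the classic Constant Modulus Algorithm (CMA) on a toy BPSK channel. CMA is one representative blind algorithm, not necessarily among those the project compares, and all parameters here are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        symbols = rng.choice([-1.0, 1.0], size=20000)           # BPSK source
        channel = np.array([1.0, 0.5, 0.2])                     # unknown dispersive channel
        received = np.convolve(symbols, channel)[:len(symbols)]
        received += 0.01 * rng.standard_normal(len(symbols))    # additive noise

        n_taps, mu, R = 11, 1e-3, 1.0   # R: constant modulus of BPSK symbols
        w = np.zeros(n_taps)
        w[n_taps // 2] = 1.0            # centre-spike initialisation

        for n in range(n_taps, len(received)):
            x = received[n - n_taps:n][::-1]   # most recent sample first
            y = w @ x                          # equalizer output
            w -= mu * y * (y * y - R) * x      # CMA update: no training sequence

        # Hard-decision symbol error rate over the last 1000 samples,
        # allowing for the equalizer delay; CMA has an inherent sign
        # ambiguity for BPSK, hence the min() below.
        d = n_taps // 2 + 1
        errors = sum(np.sign(w @ received[n - n_taps:n][::-1]) != symbols[n - d]
                     for n in range(len(received) - 1000, len(received)))
        print("symbol error rate:", min(errors / 1000, 1 - errors / 1000))

    The update needs no training sequence: it penalizes deviations of the output's squared magnitude from the constant modulus R, which is enough to drive the tap weights toward an approximate inverse of the channel.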

    Synthesis and Evaluation of Functionalized Benzoboroxoles as anti-Tuberculosis Agents

    University of Minnesota M.S. thesis. October 2014. Major: Chemistry. Advisor: Venkatram Mereddy. 1 computer file (PDF); vii, 139 pages.
    Several aminobenzoboroxole derivatives have been prepared starting from o-boronobenzaldehyde. Nitration of the benzoboroxole followed by Pd-C mediated hydrogenation provided 6-aminobenzoboroxole. Using this amine as a common intermediate, numerous structurally interesting benzoboroxoles have been prepared employing aromatic bromination, N-alkylation, reductive amination, and N-amidation. 3-Substituted functionalized benzoboroxoles have been synthesized via the Baylis-Hillman reaction followed by nitration and reduction, which provided highly functionalized amino benzoboroxoles. These derivatives have been evaluated for their anti-tubercular activity on Mycobacterium tuberculosis H37Rv using 7H9 and GAST protocols. Based on these studies, a potent benzoboroxole analog has also been identified for further development.
    Gurrapu, Shirisha. (2014). Synthesis and Evaluation of Functionalized Benzoboroxoles as anti-Tuberculosis Agents. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/185096

    IMPACT OF SOCIO-ECONOMIC STATUS ON HEALTH STATUS OF WOMEN BENEFICIARIES UNDER ICDS SCHEME

    The current study assesses women's socioeconomic status and its impact on their health. Methods and Materials: A descriptive research design was adopted. The sample comprises 380 women beneficiaries of the ICDS scheme, aged 16-39 and residing in Nalgonda district of Telangana state, selected through the multistage sampling technique of probability sampling. Structured interview schedules were used, comprising a socio-demographic profile, the Modified Kuppuswamy Socio-Economic Scale 2019, the Health Status Questionnaire (HSQ-12), and the WHO quality of life questionnaire.