16 research outputs found

    Effects of a high-dose 24-h infusion of tranexamic acid on death and thromboembolic events in patients with acute gastrointestinal bleeding (HALT-IT): an international randomised, double-blind, placebo-controlled trial

    Background: Tranexamic acid reduces surgical bleeding and reduces death due to bleeding in patients with trauma. Meta-analyses of small trials show that tranexamic acid might decrease deaths from gastrointestinal bleeding. We aimed to assess the effects of tranexamic acid in patients with gastrointestinal bleeding. Methods: We did an international, multicentre, randomised, placebo-controlled trial in 164 hospitals in 15 countries. Patients were enrolled if the responsible clinician was uncertain whether to use tranexamic acid, were aged above the minimum age considered an adult in their country (either aged 16 years and older or aged 18 years and older), and had significant (defined as at risk of bleeding to death) upper or lower gastrointestinal bleeding. Patients were randomly assigned by selection of a numbered treatment pack from a box containing eight packs that were identical apart from the pack number. Patients received either a loading dose of 1 g tranexamic acid, which was added to a 100 mL infusion bag of 0·9% sodium chloride and infused by slow intravenous injection over 10 min, followed by a maintenance dose of 3 g tranexamic acid added to 1 L of any isotonic intravenous solution and infused at 125 mg/h for 24 h, or placebo (sodium chloride 0·9%). Patients, caregivers, and those assessing outcomes were masked to allocation. The primary outcome was death due to bleeding within 5 days of randomisation; analysis excluded patients who received neither dose of the allocated treatment and those for whom outcome data on death were unavailable. This trial was registered with Current Controlled Trials, ISRCTN11225767, and ClinicalTrials.gov, NCT01658124. Findings: Between July 4, 2013, and June 21, 2019, we randomly allocated 12 009 patients to receive tranexamic acid (5994, 49·9%) or matching placebo (6015, 50·1%), of whom 11 952 (99·5%) received the first dose of the allocated treatment. Death due to bleeding within 5 days of randomisation occurred in 222 (4%) of 5956 patients in the tranexamic acid group and in 226 (4%) of 5981 patients in the placebo group (risk ratio [RR] 0·99, 95% CI 0·82–1·18). Arterial thromboembolic events (myocardial infarction or stroke) were similar in the tranexamic acid and placebo groups (42 [0·7%] of 5952 vs 46 [0·8%] of 5977; RR 0·92, 95% CI 0·60–1·39). Venous thromboembolic events (deep vein thrombosis or pulmonary embolism) were higher in the tranexamic acid group than in the placebo group (48 [0·8%] of 5952 vs 26 [0·4%] of 5977; RR 1·85, 95% CI 1·15–2·98). Interpretation: We found that tranexamic acid did not reduce death from gastrointestinal bleeding. On the basis of our results, tranexamic acid should not be used for the treatment of gastrointestinal bleeding outside the context of a randomised trial.
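
    As an illustrative check only (not part of the trial's own analysis), the reported risk ratio and 95% CI for the primary outcome can be reproduced from the event counts above using the standard log-normal approximation; the short Python sketch below assumes only the standard library.

```python
import math

# Primary outcome counts reported in the abstract
# (death due to bleeding within 5 days of randomisation)
events_txa, n_txa = 222, 5956   # tranexamic acid group
events_plc, n_plc = 226, 5981   # placebo group

# Risk ratio and 95% CI via the log-normal approximation
rr = (events_txa / n_txa) / (events_plc / n_plc)
se_log_rr = math.sqrt(1/events_txa - 1/n_txa + 1/events_plc - 1/n_plc)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")  # ~0.99, 0.82-1.18
```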

    Bearing Vibration Dataset of a Hydropower Project

    The CSV file contains turbine bearing vibration data acquired from the SCADA system of a 946 MW hydropower project operating in Pakistan.

    Relevance Classification of Flood-Related Tweets Using XLNET Deep Learning Model

    Floods, being among nature's most significant and recurring phenomena, profoundly impact the lives and properties of tens of millions of people worldwide. During such events, social media platforms such as Twitter often become the most important channels for real-time information sharing. However, the sheer volume of tweets makes it difficult to manually distinguish those related to floods from those that are not, which poses a major obstacle for government officials who need to make timely and well-informed decisions. This study addresses that challenge by applying advanced natural language processing techniques to sort through the large volume of tweets. The results are promising: the XLNET model achieved an F1 score of 0.96, demonstrating its effectiveness in classifying flood-related tweets. By leveraging the capabilities of the XLNET model, we aim to provide a valuable tool for responsible governance, supporting timely and well-informed decisions during flood situations and, in turn, helping to reduce the impact of floods on the lives and property of affected communities around the world.
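
    A minimal sketch of the kind of relevance classifier described above, assuming the Hugging Face transformers implementation of XLNet; the checkpoint, example tweets, label convention, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

# Binary relevance classifier: 0 = not flood-related, 1 = flood-related (assumed labels)
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

# Hypothetical example tweets and labels; real use would loop over a labelled corpus
tweets = [
    "River overflowing near the bridge, families evacuating now",
    "Great match last night, what a comeback!",
]
labels = torch.tensor([1, 0])

batch = tokenizer(tweets, padding=True, truncation=True, max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One fine-tuning step: the model returns the cross-entropy loss when labels are given
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: predicted class per tweet
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```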

    Stock Market Analysis and Prediction Using Deep Learning

    The stock market is a complex system influenced by various factors, including economic indicators, geopolitical events, and investor sentiment. Traditional methods of stock market analysis often rely on statistical models and technical indicators, which may struggle to capture the intricate patterns and non-linear relationships present in financial data. This paper presents an application designed to bridge the gap between traditional stock market analysis and modern predictive modeling. It not only addresses the challenges associated with fragmented data and delayed analysis but also opens avenues for continuous monitoring and optimization of predictive models in response to dynamic market conditions. These models are integrated into the application developed in the Analysis Phase, providing users with real-time predictions and valuable insights. Prior research has shown that many machine learning (ML) and deep learning (DL) techniques perform well in stock price prediction, and DL techniques are widely regarded as among the most accurate prediction methods, particularly over longer prediction horizons. In this research, after performing pre-processing steps such as data normalization, we employed LSTM- and GRU-based models. Through training and testing, we determined suitable settings for the optimizer, dropout, batch size, epochs, and other parameters. Comparing the LSTM model with the GRU model, we concluded that LSTM is not suitable for short-term forecasting but performs well for long-term forecasting, whereas GRU performs well in both cases.
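
    A minimal sketch of the normalisation-plus-recurrent-model pipeline the abstract describes, assuming Keras and a univariate price series; the synthetic data, window length, layer sizes, and training settings are illustrative assumptions rather than the tuned values reported by the authors.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

def make_windows(series, window=60):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# Hypothetical price series; real use would load historical market data
prices = np.sin(np.linspace(0, 50, 1000)) + np.linspace(0, 2, 1000)

# Normalization step mentioned in the abstract
scaler = MinMaxScaler()
scaled = scaler.fit_transform(prices.reshape(-1, 1)).ravel()
X, y = make_windows(scaled)

def build_model(cell):
    """Identical topology for LSTM and GRU so the comparison isolates the cell type."""
    return keras.Sequential([
        keras.layers.Input(shape=(X.shape[1], 1)),
        cell(64),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(1),
    ])

for name, cell in [("LSTM", keras.layers.LSTM), ("GRU", keras.layers.GRU)]:
    model = build_model(cell)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(name, "in-sample MSE:", model.evaluate(X, y, verbose=0))
```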

    NEUROSCAN: Revolutionizing Brain Tumor Detection Using Vision-Transformer

    Brain tumor detection is a pivotal component of neuroimaging, with significant implications for clinical diagnosis and patient care. In this study, we introduce a deep-learning approach that leverages the Vision Transformer model, known for its ability to capture complex patterns and dependencies in images. Our dataset, consisting of 3000 images evenly split between tumor and non-tumor classes, serves as the foundation for our methodology. Employing the Vision Transformer architecture, we process high-resolution brain scans through patching and self-attention mechanisms and train the model with supervised learning to perform binary classification. Our model achieved a peak score of 98.37% in tumor detection. While interpretability analysis was not explicitly performed, the attention mechanisms inherent in the Vision Transformer suggest a focus on important brain regions and enhance its potential for prioritizing crucial information in brain tumor detection.
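
    A minimal sketch of binary tumor/non-tumor classification with a Vision Transformer, assuming the Hugging Face transformers implementation; the checkpoint, label names, and input file are illustrative assumptions, not necessarily those used in the study.

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# ViT backbone with a 2-class head (checkpoint is an assumption for illustration)
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=2,
    id2label={0: "no_tumor", 1: "tumor"},
    label2id={"no_tumor": 0, "tumor": 1},
)

# Hypothetical brain scan; the processor resizes it and the model splits it into 16x16 patches
scan = Image.open("scan.png").convert("RGB")
inputs = processor(images=scan, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits        # shape (1, 2)
probs = logits.softmax(dim=-1).squeeze()
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```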

    Visually: Assisting the Visually Impaired People Through AI-Assisted Mobility

    This research introduces “Visually”, a mobile application that aims to address the difficulties visually impaired people encounter in their daily lives. By deploying deep learning models for real-time object detection, facial recognition, and currency identification, each with voice output, the “Visually” application strives to enhance the autonomy, independence, and mobility of visually impaired people. The system is trained on a diverse dataset, with augmentation techniques used to improve the robustness of the models. The project's objectives include a user-friendly interface, real-time object detection, multi-modal recognition, text-to-speech audio output, and the overarching aim of enriching the lives of visually impaired individuals. Driven by the global prevalence of visual impairment and the demand for cost-effective solutions, “Visually” is aligned with international efforts for accessibility and inclusivity. For cross-platform compatibility, the machine learning models are deployed with TensorFlow Lite, and offline availability ensures accessibility even in rural areas with limited network connectivity. By transforming the way visually impaired individuals navigate their environment, “Visually” aims to contribute to a more inclusive and equitable society. Positioned at the intersection of technology, accessibility, and empowerment, the “Visually” project is poised to bring about positive change for a community that frequently encounters unique challenges in daily life.
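
    A minimal sketch of the on-device inference path implied by the TensorFlow Lite deployment described above; the model file name and output handling are illustrative assumptions, not the application's actual models. In the application, the decoded prediction would then be passed to a text-to-speech engine for voice output.

```python
import numpy as np
import tensorflow as tf

# Load a converted on-device model (file name is an assumption for illustration)
interpreter = tf.lite.Interpreter(model_path="object_detector.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A single camera frame resized to the model's expected input shape
# (placeholder array here; the app would feed a real preprocessed frame)
height, width = input_details[0]["shape"][1:3]
frame = np.zeros((1, height, width, 3), dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Raw output tensor; how it is decoded (boxes, classes, scores) depends on the model
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)
```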
