
    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures.

    Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain measured on a numerical analogue scale from 0 to 100% and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge.

    Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not.

    Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.
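    The abstract does not state which regression model produced the adjusted estimates, so the sketch below only illustrates one common way an adjusted risk ratio is obtained: a modified Poisson regression with robust standard errors fitted to simulated data. All variable names (opioid_rx, severe_pain, age, asa_grade) and values are illustrative assumptions, not the study's data or code.

```python
# Hedged sketch: adjusted risk ratio via modified Poisson regression.
# Variables and data are simulated; this is not the study's analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "opioid_rx": rng.integers(0, 2, n),   # 1 = discharged with opioid analgesia
    "age": rng.normal(55, 15, n),
    "asa_grade": rng.integers(1, 4, n),   # hypothetical confounder
})
# Simulated binary outcome whose probability depends on exposure and confounders
p = 1 / (1 + np.exp(-(-1.5 + 0.4 * df["opioid_rx"] + 0.01 * (df["age"] - 55))))
df["severe_pain"] = rng.binomial(1, p)

# Modified Poisson regression: exponentiated coefficients are risk ratios
model = smf.glm("severe_pain ~ opioid_rx + age + C(asa_grade)",
                data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
rr = np.exp(model.params["opioid_rx"])
ci_low, ci_high = np.exp(model.conf_int().loc["opioid_rx"])
print(f"Adjusted RR for opioid prescription: {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```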

    Measuring routine childhood vaccination coverage in 204 countries and territories, 1980-2019: a systematic analysis for the Global Burden of Disease Study 2020, Release 1

    Background: Measuring routine childhood vaccination is crucial to inform global vaccine policies and programme implementation, and to track progress towards targets set by the Global Vaccine Action Plan (GVAP) and Immunization Agenda 2030. Robust estimates of routine vaccine coverage are needed to identify past successes and persistent vulnerabilities. Drawing from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2020, Release 1, we did a systematic analysis of global, regional, and national vaccine coverage trends using a statistical framework, by vaccine and over time.

    Methods: For this analysis we collated 55 326 country-specific, cohort-specific, year-specific, vaccine-specific, and dose-specific observations of routine childhood vaccination coverage between 1980 and 2019. Using spatiotemporal Gaussian process regression, we produced location-specific and year-specific estimates of 11 routine childhood vaccine coverage indicators for 204 countries and territories from 1980 to 2019, adjusting for biases in country-reported data and reflecting reported stockouts and supply disruptions. We analysed global and regional trends in coverage and numbers of zero-dose children (defined as those who never received a diphtheria-tetanus-pertussis [DTP] vaccine dose), progress towards GVAP targets, and the relationship between vaccine coverage and sociodemographic development.

    Findings: By 2019, global coverage of third-dose DTP (DTP3; 81.6% [95% uncertainty interval 80.4-82.7]) more than doubled from levels estimated in 1980 (39.9% [37.5-42.1]), as did global coverage of the first-dose measles-containing vaccine (MCV1; from 38.5% [35.4-41.3] in 1980 to 83.6% [82.3-84.8] in 2019). Third-dose polio vaccine (Pol3) coverage also increased, from 42.6% (41.4-44.1) in 1980 to 79.8% (78.4-81.1) in 2019, and global coverage of newer vaccines increased rapidly between 2000 and 2019. The global number of zero-dose children fell by nearly 75% between 1980 and 2019, from 56.8 million (52.6-60.9) to 14.5 million (13.4-15.9). However, over the past decade, global vaccine coverage broadly plateaued; 94 countries and territories recorded decreasing DTP3 coverage since 2010. Only 11 countries and territories were estimated to have reached the national GVAP target of at least 90% coverage for all assessed vaccines in 2019.

    Interpretation: After achieving large gains in childhood vaccine coverage worldwide, in much of the world this progress was stalled or reversed from 2010 to 2019. These findings underscore the importance of revisiting routine immunisation strategies and programmatic approaches, recentring service delivery around equity and underserved populations. Strengthening vaccine data and monitoring systems is crucial to these pursuits, now and through to 2030, to ensure that all children have access to, and can benefit from, lifesaving vaccines.

    Copyright © 2021 The Author(s). Published by Elsevier Ltd.
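    The GBD spatiotemporal Gaussian process regression (ST-GPR) pipeline is far more involved than can be shown here; purely as an assumption-laden illustration of the temporal-smoothing idea, the sketch below fits a Gaussian process to simulated yearly coverage observations for a single location with scikit-learn. The kernel choices and all data are hypothetical.

```python
# Hedged sketch (not the GBD ST-GPR pipeline): smoothing noisy yearly
# vaccine-coverage observations for one location with a Gaussian process
# over time. The real analysis models space and time jointly and adjusts
# for reporting bias; this only shows the temporal smoothing idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
years = np.arange(1980, 2020).reshape(-1, 1)
true_coverage = 40 + 45 / (1 + np.exp(-(years.ravel() - 1995) / 5))   # logistic rise
observed = true_coverage + rng.normal(0, 4, size=years.shape[0])       # noisy reports

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=10.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(years, observed)
mean, sd = gpr.predict(years, return_std=True)
print(f"Estimated coverage in 2019: {mean[-1]:.1f}% (+/- {1.96 * sd[-1]:.1f})")
```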

    How Increase of Bilirubin in Blood Impacts Cooking Likeness?

    Bilirubin is a yellow-brownish pigment formed by the breakdown of dead red blood cells; its normal value in adults is 200-250 mg. If it accumulates in the bloodstream, it can lead to neurological disorders. To check the impact of bilirubin on cooking likeness, 15 males and 65 females participated, and their urine was tested according to the instructor's directions. Statistical analysis showed that males who like cooking have more bilirubin in their urine and therefore a higher chance of disease.
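    The abstract does not name the statistical test used; the sketch below merely illustrates how a between-group comparison of urinary bilirubin could be run (here with a Mann-Whitney U test on simulated values) and is not the study's analysis.

```python
# Hedged sketch: compare urinary bilirubin between two groups
# (e.g. participants who do vs. do not like cooking). Data are simulated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
likes_cooking = rng.normal(1.2, 0.3, 40)       # hypothetical bilirubin levels
dislikes_cooking = rng.normal(1.0, 0.3, 40)

stat, p_value = mannwhitneyu(likes_cooking, dislikes_cooking, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```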

    Knee Osteoarthritis Detection and Classification Using Autoencoders and Extreme Learning Machines

    Background/Objectives: Knee osteoarthritis (KOA) is a prevalent disorder affecting both older adults and younger individuals, leading to compromised joint function and mobility. Early and accurate detection is critical for effective intervention, as treatment options become increasingly limited as the disease progresses. Traditional diagnostic methods rely heavily on the expertise of physicians and are susceptible to errors. Demand has been increasing for deep learning models that automate and improve the accuracy of KOA image classification. In this research, a unique deep learning model is presented that employs autoencoders as the primary mechanism for feature extraction, providing a robust solution for KOA classification. Methods: The proposed model differentiates between KOA-positive and KOA-negative images and categorizes the disease into its primary severity levels. Levels of severity range from “healthy knees” (0) to “severe KOA” (4). Symptoms range from typical joint structures to significant joint damage, such as bone spur growth, joint space narrowing, and bone deformation. Two experiments were conducted using different datasets to validate the efficacy of the proposed model. Results: The first experiment used the autoencoder for feature extraction and classification, which reported an accuracy of 96.68%. Another experiment, using autoencoders for feature extraction and Extreme Learning Machines for the actual classification, resulted in an even higher accuracy of 98.6%. To test the generalizability of the Knee-DNS system, we utilized the Butterfly iQ+ IoT device for image acquisition and Google Colab’s cloud computing services for data processing. Conclusions: This work represents a pioneering application of autoencoder-based deep learning models in the domain of KOA classification, achieving remarkable accuracy and robustness.
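    As a hedged illustration of the described pipeline (autoencoder features followed by an Extreme Learning Machine), the sketch below trains a small autoencoder on placeholder image vectors and fits a minimal ELM (random hidden layer plus least-squares readout) on the learned codes. Layer sizes, data, and the 64x64 input resolution are assumptions, not the Knee-DNS configuration.

```python
# Hedged sketch: autoencoder feature extraction + Extreme Learning Machine.
# Shapes and data are placeholders, not the paper's setup.
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(3)
X = rng.random((200, 4096)).astype("float32")   # placeholder for flattened 64x64 images
y = rng.integers(0, 5, 200)                     # KOA severity grades 0-4

# Autoencoder: 4096 -> 128 -> 4096
inp = layers.Input(shape=(4096,))
code = layers.Dense(128, activation="relu")(inp)
out = layers.Dense(4096, activation="sigmoid")(code)
autoencoder = models.Model(inp, out)
encoder = models.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Extreme Learning Machine on the 128-dim codes
Z = encoder.predict(X, verbose=0)
W = rng.normal(size=(Z.shape[1], 256))          # random, untrained hidden weights
H = np.tanh(Z @ W)                              # hidden activations
T = np.eye(5)[y]                                # one-hot targets
beta = np.linalg.pinv(H) @ T                    # least-squares output weights
pred = np.argmax(H @ beta, axis=1)
print("Training accuracy (sketch):", (pred == y).mean())
```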

    CAD-Skin: A Hybrid Convolutional Neural Network–Autoencoder Framework for Precise Detection and Classification of Skin Lesions and Cancer

    Skin cancer is a class of disorders defined by the growth of abnormal cells on the body. Accurately identifying and diagnosing skin lesions is difficult because skin malignancies share many common characteristics and a wide range of morphologies. To face this challenge, deep learning algorithms have been proposed; in recent research articles, they have shown diagnostic efficacy comparable to dermatologists in image-based skin lesion diagnosis. This work proposes a novel deep learning algorithm to detect skin cancer. The proposed CAD-Skin system detects and classifies skin lesions using deep convolutional neural networks and autoencoders to improve the classification efficiency of skin cancer. The CAD-Skin system was designed and developed using a modern preprocessing approach that combines multi-scale retinex, gamma correction, unsharp masking, and contrast-limited adaptive histogram equalization. In this work, we implemented a data augmentation strategy to deal with unbalanced datasets; this step improves the model’s resilience to different pigmented skin conditions and avoids overfitting. Additionally, a Quantum Support Vector Machine (QSVM) algorithm is integrated for final-stage classification. Our proposed CAD-Skin enhances category recognition for different skin disease severities, including actinic keratosis, malignant melanoma, and other skin cancers. The proposed system was tested using the PAD-UFES-20-Modified, ISIC-2018, and ISIC-2019 datasets, reaching accuracy rates of 98%, 99%, and 99%, respectively, which is higher than state-of-the-art work in the literature. The minimum accuracy achieved for certain skin disorders was 97.43%. Our research study demonstrates that the proposed CAD-Skin provides precise diagnosis and timely detection of skin abnormalities, diversifying options for doctors and enhancing patient satisfaction during medical practice.
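    The sketch below illustrates part of the stated preprocessing chain (gamma correction, unsharp masking, and CLAHE) with OpenCV; the multi-scale retinex step, the CNN-autoencoder, and the QSVM classifier are omitted, and all parameter values are assumptions rather than the authors' settings.

```python
# Hedged preprocessing sketch in the spirit of the described pipeline.
# Parameter values (gamma, sigma, clip limit) are assumptions.
import cv2
import numpy as np

def preprocess_lesion(img_bgr: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    # Gamma correction via a lookup table
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    img = cv2.LUT(img_bgr, lut)

    # Unsharp masking: subtract a blurred copy to emphasise lesion borders
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    img = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)

    # CLAHE applied to the lightness channel only
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Example with a synthetic image; replace with cv2.imread("lesion.png")
demo = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)
processed = preprocess_lesion(demo)
print(processed.shape, processed.dtype)
```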

    CAD-EYE: An Automated System for Multi-Eye Disease Classification Using Feature Fusion with Deep Learning Models and Fluorescence Imaging for Enhanced Interpretability

    Background: Diabetic retinopathy, hypertensive retinopathy, glaucoma, and contrast-related eye diseases are well-recognized conditions resulting from high blood pressure, rising blood glucose, and elevated eye pressure. Later-stage symptoms usually include cotton wool patches, restricted veins in the optic nerve, and a buildup of blood in the optic nerve. Severe consequences include damage to the optic nerve, retinal artery obstruction, and possible blindness. Deep learning models and artificial intelligence (AI) make early diagnosis of these illnesses easier. Objectives: This study introduces a novel methodology called CAD-EYE for classifying diabetic retinopathy, hypertensive retinopathy, glaucoma, and contrast-related eye issues. Methods: The proposed system combines the features extracted by two deep learning (DL) models (MobileNet and EfficientNet) using feature fusion to increase the diagnostic system’s efficiency. The system uses fluorescence imaging as an image-processing algorithm to increase accuracy; the algorithm is also added to increase the interpretability and explainability of the CAD-EYE system and, to the best of the authors’ knowledge, has not been used in such an application in the previous literature. The study utilizes datasets sourced from reputable internet platforms to train the proposed system. Results: The system was trained on 65,871 fundus images from the collected datasets, achieving a 98% classification accuracy. A comparative analysis demonstrates that CAD-EYE surpasses cutting-edge models such as ResNet, GoogLeNet, VGGNet, InceptionV3, and Xception in terms of classification accuracy, and a state-of-the-art comparison shows the superior performance of the model against previous work in the literature. Conclusions: These findings support the usefulness of CAD-EYE as a diagnostic tool that can help medical professionals diagnose eye disease; however, this tool is not intended to replace optometrists.
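    A minimal sketch of the feature-fusion idea follows: pooled MobileNet and EfficientNetB0 features are concatenated and passed to a small classification head covering four disease categories. The backbone variants, layer sizes, and head design are assumptions, not the published CAD-EYE architecture.

```python
# Hedged sketch of dual-backbone feature fusion (assumed sizes, untrained weights).
from tensorflow.keras import layers, models, applications

inputs = layers.Input(shape=(224, 224, 3))
mobilenet = applications.MobileNet(include_top=False, weights=None, input_tensor=inputs)
efficientnet = applications.EfficientNetB0(include_top=False, weights=None, input_tensor=inputs)

f1 = layers.GlobalAveragePooling2D()(mobilenet.output)
f2 = layers.GlobalAveragePooling2D()(efficientnet.output)
fused = layers.Concatenate()([f1, f2])               # feature fusion of both backbones
x = layers.Dense(256, activation="relu")(fused)
outputs = layers.Dense(4, activation="softmax")(x)   # four disease categories (assumed head)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```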

    Photoplethysmography Based Detection of Social Stress


    DR-NASNet: Automated System to Detect and Classify Diabetic Retinopathy Severity Using Improved Pretrained NASNet Model

    Diabetes is a widely spread disease that significantly affects people’s lives. The leading cause is uncontrolled blood glucose levels, which develop eye defects over time, including Diabetic Retinopathy (DR), which results in severe visual loss. DR is considered the primary factor causing blindness in diabetic patients. DR treatment tries to control the disease’s severity, as it is irreversible. The primary goal of this effort is to create a reliable method for automatically detecting the severity of DR. This paper proposes a new automated system (DR-NASNet) to detect and classify DR severity using an improved pretrained NASNet model. To develop the DR-NASNet system, we first utilized a preprocessing technique that takes advantage of Ben Graham and CLAHE to lessen noise, emphasize lesions, and ultimately improve DR classification performance. Taking into account the imbalance between classes in the dataset, data augmentation procedures were conducted to control overfitting. Next, we integrated dense blocks into the NASNet architecture to improve the effectiveness of classification results for five severity levels of DR. In practice, the DR-NASNet model achieves state-of-the-art results with a smaller model size and lower complexity. To test the performance of the DR-NASNet system, a combination of various datasets is used in this paper. To learn effective features from DR images, we used a pretrained model on the dataset. The last step is to assign the image to one of five categories: No DR, Mild, Moderate, Proliferative, or Severe; to carry this out, a classifier layer consisting of a linear SVM with a linear activation function is added. The DR-NASNet system was tested using six different experiments and achieves 96.05% accuracy on the challenging DR dataset. The results and comparisons demonstrate that the DR-NASNet system improves the model’s performance and learning ability. As a result, the DR-NASNet system assists ophthalmologists by providing an effective system for classifying early-stage levels of DR.
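    As a hedged sketch of the "pretrained backbone plus linear SVM classifier" idea described above, the code below extracts NASNetMobile features from placeholder images and fits a linear SVM over the five DR grades; the dense-block modifications, Ben Graham preprocessing, and CLAHE steps are omitted, and all sizes and data are assumptions.

```python
# Hedged sketch (not the DR-NASNet system): NASNetMobile features + linear SVM.
import numpy as np
from tensorflow.keras.applications import NASNetMobile
from sklearn.svm import LinearSVC

backbone = NASNetMobile(include_top=False, weights=None, pooling="avg",
                        input_shape=(224, 224, 3))

rng = np.random.default_rng(4)
images = rng.random((50, 224, 224, 3)).astype("float32")   # placeholder fundus images
labels = rng.integers(0, 5, 50)                            # No DR .. Severe (assumed labels)

features = backbone.predict(images, verbose=0)             # pooled feature vectors
clf = LinearSVC(C=1.0, max_iter=5000).fit(features, labels)
print("Training accuracy (sketch):", clf.score(features, labels))
```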

    Potential Effects of Biochar Application for Improving Wheat (Triticum aestivum L.) Growth and Soil Biochemical Properties under Drought Stress Conditions

    Different soil amendments are applied to improve soil properties and to achieve higher crop yields under drought conditions. The objective of this study was to investigate the role of biochar in improving wheat (Triticum aestivum L.) growth and soil biochemical properties under drought conditions. A pot experiment with a completely randomized design was arranged with four replications in a wire house. Drought was imposed at two critical growth stages (tillering and grain filling), and biochar was applied to the soil 10 days before sowing at two different rates (28 g kg−1 and 38 g kg−1). Soil samples were collected after crop harvesting to determine soil properties, including soil respiration and enzymatic parameters. Results showed that water stress negatively affected all biochemical properties of the soil, while biochar amendments improved these properties. Application of biochar at 38 g kg−1 provided significantly higher mineral nutrients, Bray P (18.72%), exchangeable K (7.44%), soil carbon (11.86%), nitrogen mineralization (16.35%), and soil respiration (6.37%), as a result of increased microbial activities, in comparison with the 28 g kg−1 rate.
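    The abstract does not state the statistical procedure used; purely as an illustration, the sketch below runs a two-factor ANOVA (drought stage x biochar rate) for a completely randomized design with four replications on simulated soil-respiration values.

```python
# Hedged sketch: two-factor ANOVA for a completely randomized design.
# Factor levels follow the abstract; response values are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import product

rng = np.random.default_rng(5)
rows = []
for stage, rate in product(["tillering", "grain_filling"], ["0", "28", "38"]):
    for _ in range(4):                                   # four replications
        base = 10 + (2 if rate == "28" else 4 if rate == "38" else 0)
        rows.append({"stage": stage, "biochar": rate,
                     "respiration": base + rng.normal(0, 1)})
df = pd.DataFrame(rows)

model = smf.ols("respiration ~ C(stage) * C(biochar)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```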