
    Neural Networks based Smart e-Health Application for the Prediction of Tuberculosis using Serverless Computing.

    The convergence of the Internet of Things (IoT) with e-health records is creating a new era of advancements in the diagnosis and treatment of disease, reshaping the modern landscape of healthcare. In this paper, we propose a neural-network-based smart e-health application for the prediction of Tuberculosis (TB) using serverless computing. The performance of various Convolutional Neural Network (CNN) architectures using transfer learning is evaluated to show that this technique holds promise for enhancing the capabilities of future IoT and e-health systems in predicting the manifestation of TB in the lungs. The work involves training, validating, and comparing DenseNet-201, VGG-19, and MobileNetV3-Small architectures on performance metrics such as test binary accuracy, test loss, intersection over union, precision, recall, and F1 score. The findings point to the potential of integrating these advanced Machine Learning (ML) models within IoT and e-health frameworks, paving the way for more comprehensive and data-driven approaches to smart healthcare. The best-performing model, VGG-19, is selected for different deployment strategies using server-based and serverless environments. We used JMeter to measure the performance of the deployed model, including the average response rate, throughput, and error rate. This study provides valuable insights into the selection and deployment of ML models in healthcare, highlighting the advantages and challenges of different deployment options. Furthermore, it allows future studies to integrate such models into IoT and e-health systems, which could enhance healthcare outcomes through more informed and timely treatments.
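
    The abstract does not give the training configuration; as a rough illustration only, the following is a minimal transfer-learning sketch in Keras with a frozen VGG-19 backbone and a binary TB/normal head, reporting the binary accuracy, precision, and recall metrics mentioned above. The input size, head layers, learning rate, and the train_ds/val_ds pipelines are assumptions, not details from the paper.

```python
# Minimal transfer-learning sketch; hyperparameters and data pipeline are
# illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained VGG-19 backbone with frozen convolutional weights.
base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Binary head for TB vs. normal chest X-ray classification.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.BinaryAccuracy(),
             tf.keras.metrics.Precision(),
             tf.keras.metrics.Recall()])

# train_ds / val_ds would be tf.data pipelines over labelled chest X-rays:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```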

    Unraveling the performance of dispersion-corrected functionals for the accurate description of weakly bound natural polyphenols

    Long-range non-covalent interactions play a key role in the chemistry of natural polyphenols. We previously proposed a description of supramolecular polyphenol complexes by the B3P86 density functional coupled with corrections for dispersion. Here we couple the B3P86 functional with the D3 correction for dispersion, systematically assessing the accuracy of the new B3P86-D3 model against the well-known S66, HB23, NCCE31, and S12L datasets for non-covalent interactions. Furthermore, the association energies of these complexes were carefully compared to those obtained with other dispersion-corrected functionals, such as B(3)LYP-D3, BP86-D3 or B3P86-NL. Finally, this set of models was also applied to a database composed of seven non-covalent polyphenol complexes of particular interest. FDM acknowledges financial support from the Swedish Research Council (Grant No. 621-2014-4646) and SNIC (Swedish National Infrastructure for Computing) for providing computer resources. The work in Limoges (IB and PT) is supported by the “Conseil Régional du Limousin”. PT gratefully acknowledges the support of the Operational Program Research and Development Fund (project CZ.1.05/2.1.00/03.0058 of the Ministry of Education, Youth and Sports of the Czech Republic). IB gratefully acknowledges financial support from the “Association Djerbienne en France”.
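
    For context, the sketch below gives the generic form of the DFT-D3 dispersion correction added to the B3P86 energy. It is a standard textbook expression, not taken from the paper; in particular, the abstract does not state which damping scheme (zero-damping or Becke-Johnson) was used, so the damping function is left generic.

```latex
% Generic DFT-D3 two-body form: s_n are functional-specific scaling factors,
% C_n^{AB} are pairwise dispersion coefficients, r_{AB} interatomic distances,
% and f_{damp,n} a damping function (its exact form is an assumption here).
E_{\text{B3P86-D3}} = E_{\text{B3P86}} + E_{\text{disp}}^{\text{D3}},
\qquad
E_{\text{disp}}^{\text{D3}}
  = -\frac{1}{2} \sum_{A \neq B} \sum_{n=6,8}
    s_n \, \frac{C_n^{AB}}{r_{AB}^{\,n}} \, f_{\text{damp},n}(r_{AB})
```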

    A Dosimetric Comparison Study for Blood Irradiation Employing Different Medium and Algorithms in Clinical Linear Accelerator

    Sarath S Nair,1 Jyothi Nagesh,1 Shambhavi C,2 Anshul Singh,1 Shirley Lewis,1 Umesh Velu,1 Deepika Chenna3
    1Department of Radiation Oncology, Kasturba Medical College Manipal, MAHE, Manipal, Karnataka, India; 2Department of Medical Radiation Physics Program, MCHP Manipal, MAHE, Manipal, Karnataka, India; 3Department of Immunohematology and Blood Transfusion, Kasturba Medical College Manipal, MAHE, Manipal, Karnataka, India
    Correspondence: Jyothi Nagesh, Email [email protected]
    Purpose: To identify a suitable approach for blood irradiation other than the commonly used water medium, and to study the impact of dose computation with different algorithms.
    Methods: Water is the commonly used medium for blood irradiation. In this study, computed tomography scans were acquired of locally made blood irradiation phantoms using media other than water, namely air, rice powder, and thermocol, with parallel beams delivering 25 Gy. Plans were recalculated with different algorithms: collapsed cone (CC), Monte Carlo (MC) and pencil beam (PB). The dose–volume parameters and measured doses were collected and analyzed for each medium and algorithm.
    Findings: The monitor units (MU) for rice powder and water are close (2461 ± 57 and 2469 ± 61, respectively), with maximum doses of 28.0 ± 1.8 and 28.0 ± 1.9 Gy. The PB algorithm resulted in lower monitor unit values regardless of the medium used, generating values of 2418, 2406, 2382, and 2362 for water, rice powder, air, and thermocol, respectively. A significant increase in dose was observed irrespective of the medium when the MC algorithm was employed, with a maximum of 30.26 Gy in rice powder; a smaller dose was observed with the CC algorithm, with 26.3 Gy in the water medium. The average maximum doses of all groups were statistically equivalent according to a one-way ANOVA test. Regarding the impact of field size, rice powder shows consistent doses across various field sizes, with slight increases as the field size grows, similar to water.
    Novelty/Applications: While water is the conventional medium, this study highlights the potential benefits of rice powder, such as eliminating the risks associated with bubble formation and water spillage, which can lead to equipment malfunction and safety hazards. Although previous studies have explored rice powder as a bolus and tissue-equivalent material, this study uniquely applies this knowledge to blood irradiation, an area where rice powder has not been thoroughly investigated.
    Keywords: computed tomography, collapsed cone, Monte Carlo, pencil beam, monitor units, MU, treatment planning station
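
    As a rough illustration of the statistical comparison reported above, the sketch below runs a one-way ANOVA over maximum-dose measurements grouped by medium using SciPy. The numerical values are placeholders, not the study's measurements.

```python
# Hedged sketch of a one-way ANOVA across maximum-dose groups for different
# media. The sample values are illustrative placeholders only.
from scipy import stats

max_dose_water = [28.1, 27.8, 28.3, 27.9]        # Gy, illustrative
max_dose_rice_powder = [28.2, 27.9, 28.4, 28.0]  # Gy, illustrative
max_dose_air = [28.6, 28.9, 28.4, 28.7]          # Gy, illustrative
max_dose_thermocol = [28.5, 28.8, 28.3, 28.6]    # Gy, illustrative

f_stat, p_value = stats.f_oneway(
    max_dose_water, max_dose_rice_powder, max_dose_air, max_dose_thermocol)

# A p-value above the chosen significance level (e.g. 0.05) would support
# the reported finding that the group means do not differ significantly.
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```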

    Oral abstracts of the 21st International AIDS Conference 18-22 July 2016, Durban, South Africa

    The rate at which HIV-1 infected individuals progress to AIDS is highly variable and is influenced by T cell immunity. CD8 T cell inhibitory molecules are up-regulated in HIV-1 infection and are associated with immune dysfunction. We evaluated participants (n=122) recruited to the SPARTAC randomised clinical trial to determine whether the CD8 T cell exhaustion markers PD-1, Lag-3 and Tim-3 were associated with immune activation and disease progression.
    Expression of PD-1, Tim-3, Lag-3 and CD38 on CD8 T cells from the closest pre-therapy time-point to seroconversion was measured by flow cytometry and correlated with surrogate markers of HIV-1 disease (HIV-1 plasma viral load (pVL) and CD4 T cell count) and the trial endpoint (time to CD4 count <350 cells/μl or initiation of antiretroviral therapy). To explore the functional significance of these markers, co-expression of Eomes, T-bet and CD39 was assessed.
    Expression of PD-1 on CD8 and CD38+ CD8 T cells correlated with pVL and CD4 count at baseline, and predicted time to the trial endpoint. Lag-3 expression was associated with pVL but not CD4 count. For all exhaustion markers, co-expression of CD38 on CD8 T cells strengthened these associations. In Cox models, progression to the trial endpoint was most marked for PD-1/CD38 co-expressing cells, with evidence for a stronger effect within 12 weeks from confirmed diagnosis of PHI. The effect of PD-1 and Lag-3 expression on CD8 T cells retained statistical significance in Cox proportional hazards models including antiretroviral therapy and CD4 count, but not pVL, as covariates.
    Expression of ‘exhaustion’ or ‘immune checkpoint’ markers in early HIV-1 infection is associated with clinical progression and is influenced by immune activation and the duration of infection. New markers to identify exhausted T cells and novel interventions to reverse exhaustion may inform the development of new immunotherapeutic approaches.
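
    As an illustration of the type of Cox proportional hazards analysis described above, the sketch below fits a model with the lifelines library on synthetic data. The column names (PD-1/CD38 co-expression, baseline CD4 count, ART status) and all values are hypothetical stand-ins for the trial variables, not SPARTAC data.

```python
# Hedged sketch of a Cox proportional hazards fit; all data are synthetic
# placeholders generated at random, not trial measurements.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "pd1_cd38_pct": rng.uniform(5, 60, n),        # % PD-1+CD38+ CD8 T cells
    "baseline_cd4": rng.integers(300, 900, n),    # cells/uL
    "on_art": rng.integers(0, 2, n),              # antiretroviral therapy flag
    "weeks_to_endpoint": (rng.exponential(80, n) + 1).round(1),
    "event": rng.integers(0, 2, n),               # 1 = reached trial endpoint
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_endpoint", event_col="event")
cph.print_summary()  # hazard ratios for each covariate
```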

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background: Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis.
    Methods: A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain from March to June 2017. The main outcome was the best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis).
    Results: Some 5345 patients across 154 UK hospitals were identified, of whom two-thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely as men to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent versus 120 of 993, 12·1 per cent; relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut-off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut-off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent).
    Conclusion: Women in the UK had a disproportionate risk of admission without surgical intervention and high rates of normal appendicectomy. Risk prediction models that could support shared decision-making by identifying adults in the UK at low risk of appendicitis were identified.
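
    As a rough illustration of how the specificity and failure-rate metrics defined above can be computed for a risk-score cut-off (for example, an Adult Appendicitis Score of 8 or less), the sketch below uses illustrative scores and outcomes, not trial data.

```python
# Hedged sketch: specificity = share of non-appendicitis patients classified
# as low risk; failure rate = share of low-risk patients who actually had
# appendicitis. Scores and labels are illustrative placeholders.
import numpy as np

def specificity_and_failure_rate(scores, has_appendicitis, cutoff):
    scores = np.asarray(scores)
    has_appendicitis = np.asarray(has_appendicitis, dtype=bool)
    low_risk = scores <= cutoff  # e.g. Adult Appendicitis Score <= 8

    specificity = (low_risk & ~has_appendicitis).sum() / (~has_appendicitis).sum()
    failure_rate = (low_risk & has_appendicitis).sum() / low_risk.sum()
    return specificity, failure_rate

scores = [4, 7, 9, 12, 10, 15, 8, 3]   # illustrative risk scores
labels = [0, 0, 1, 1, 0, 1, 1, 0]      # 1 = confirmed appendicitis
spec, fail = specificity_and_failure_rate(scores, labels, cutoff=8)
print(f"specificity = {spec:.1%}, failure rate = {fail:.1%}")
```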

    CloudAIBus: a testbed for AI based cloud computing environments

    Smart resource allocation is essential for optimising cloud computing efficiency and utilisation, but it is also very challenging, as traditional approaches often overprovision CPU resources, leading to financial inefficiencies. Recently developed Artificial Intelligence (AI) techniques have the potential to solve this problem efficiently; for example, deep learning models can accurately forecast how resources will be used, allowing for more efficient distribution of those resources. Despite these encouraging breakthroughs, researchers have not thoroughly investigated the dynamic scaling potential of these AI models. To address this gap, we developed CloudAIBus, a new testbed for an AI-driven cloud computing environment for effective resource allocation. CloudAIBus employs a deep learning model named DeepAR to provide a robust solution for forecasting CPU usage in order to make cost-effective resource allocation decisions. Furthermore, we implement the DeepAR model using Amazon SageMaker, a robust platform that provides the infrastructure for scalable and efficient training. We evaluated the performance of the DeepAR-based resource management approach (CloudAIBus) using Google Colab, and the results show that the proposed approach offers better performance than the baselines (LSTM- and ARIMA-based resource management) in terms of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Mean Squared Error (MSE). On the GWA-T-12 dataset, the proposed approach cut the percentage of unused CPUs from 98.65% to 32.35%, demonstrating its effectiveness at reducing over-provisioning through accurate predictions.
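
    As an illustration of the forecast-error metrics (MAE, MAPE, MSE) and the unused-CPU measure discussed above, the sketch below evaluates a hypothetical CPU-usage forecast. The arrays and the 10% headroom factor are assumptions for illustration, not values from the paper or the GWA-T-12 traces.

```python
# Hedged sketch of forecast evaluation for CPU-usage prediction; the data
# and the provisioning headroom are illustrative placeholders.
import numpy as np

actual_cpu = np.array([22.0, 30.5, 41.2, 38.7, 27.3])    # % CPU actually used
forecast_cpu = np.array([24.1, 28.9, 43.0, 36.5, 29.8])  # model forecast

mae = np.mean(np.abs(actual_cpu - forecast_cpu))
mape = np.mean(np.abs((actual_cpu - forecast_cpu) / actual_cpu)) * 100
mse = np.mean((actual_cpu - forecast_cpu) ** 2)

# Unused-CPU percentage if capacity is provisioned to the forecast plus a
# small headroom, rather than to a fixed worst-case allocation.
provisioned = forecast_cpu * 1.10
unused_pct = np.mean((provisioned - actual_cpu).clip(min=0) / provisioned) * 100

print(f"MAE={mae:.2f}, MAPE={mape:.2f}%, MSE={mse:.2f}, unused={unused_pct:.2f}%")
```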