65 research outputs found

    Block-Chain-Based Vaccine Volunteer Records Secure Storage and Service Structure

    Accurate and complete vaccine volunteer data are a valuable asset for clinical research institutions. Privacy protection and the safe storage of volunteer data are vital concerns during clinical trial services. Blockchain technology brings an innovative approach to this problem: as a hash chain with the properties of decentralization, authentication, and tamper resistance, it can be used to store vaccine volunteer clinical trial data safely. In this paper, we propose a secure storage method for volunteer personal and clinical trial data based on blockchain with cloud storage, and we define a service structure for sharing volunteer vaccine clinical trial data. Volunteer blockchain features are also defined and examined. The proposed storage and sharing method is independent of any third party, and no single party has sufficient influence to disrupt the processing.
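The hash-chain property the abstract relies on can be illustrated with a minimal sketch. This is a simplified illustration of hash linking and tamper detection, not the paper's actual protocol; field names such as `prev_hash` and the sample records are hypothetical.

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a block holding one volunteer record, linked to the previous block."""
    block = {
        "timestamp": time.time(),
        "record": record,       # in practice only a digest would sit on-chain, data in the cloud
        "prev_hash": prev_hash,
    }
    # The block's hash covers its full content, chaining it to its predecessor.
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with any record breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain of two hypothetical volunteer records.
chain = [make_block({"volunteer_id": "V-001", "visit": "dose-1"}, "0" * 64)]
chain.append(make_block({"volunteer_id": "V-001", "visit": "dose-2"}, chain[-1]["hash"]))
```

Because each hash covers the previous block's hash, no single party can alter an earlier record without invalidating every later block, which is the tamper-resistance property the paper builds on.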

    Planarity in cubic intuitionistic graphs and their application to control air traffic on a runway

    Fuzzy modeling plays a pivotal role in various fields, including science, engineering, and medicine. In comparison to conventional models, fuzzy models offer enhanced accuracy, adaptability, and resemblance to real-world systems, helping researchers make sound choices in complex problems. A type of fuzzy graph widely used in the medical and psychological sciences is the cubic intuitionistic fuzzy graph, which plays an important role in fields such as computer science, psychology, medicine, and political science; it is also used to identify influential people in an organization or social institution. In this research endeavor, we elucidate the innovative notion of a cubic intuitionistic planar graph (CIPG), delving into its properties and attributes. Additionally, we unveil the novel concept of a cubic intuitionistic dual graph, enriching the realm of graph theory. Furthermore, our exploration encompasses other pertinent terminology, such as cubic intuitionistic multi-graphs, along with the categorization of edges into the distinct classes of strong and weak edges. Moreover, we introduce the degree of planarity in the context of CIPGs and unveil the notion of strong and weak faces. Additionally, we delve into the construction of cubic intuitionistic dual graphs, which can be realized when the initial graph is planar or possesses a degree of planarity ≥ 0.67. Notably, we furnish a comprehensive discussion of noteworthy findings and substantial results pertaining to these topics, contributing valuable insights to the field of graph theory. Last, we exemplify the practical relevance and importance of our research by presenting an illuminating real-world application, demonstrating the tangible impact and significance of this research article.

    Design and baseline characteristics of the finerenone in reducing cardiovascular mortality and morbidity in diabetic kidney disease trial

    Background: Among people with diabetes, those with kidney disease have exceptionally high rates of cardiovascular (CV) morbidity and mortality and of progression of their underlying kidney disease. Finerenone is a novel, nonsteroidal, selective mineralocorticoid receptor antagonist that has been shown to reduce albuminuria in patients with type 2 diabetes (T2D) and chronic kidney disease (CKD) while carrying only a low risk of hyperkalemia. However, the effect of finerenone on CV and renal outcomes has not yet been investigated in long-term trials. Patients and Methods: The Finerenone in Reducing CV Mortality and Morbidity in Diabetic Kidney Disease (FIGARO-DKD) trial aims to assess the efficacy and safety of finerenone compared with placebo at reducing clinically important CV and renal outcomes in T2D patients with CKD. FIGARO-DKD is a randomized, double-blind, placebo-controlled, parallel-group, event-driven trial running in 47 countries with an expected duration of approximately 6 years. FIGARO-DKD randomized 7,437 patients with an estimated glomerular filtration rate ≥ 25 mL/min/1.73 m² and albuminuria (urinary albumin-to-creatinine ratio ≥ 30 to ≤ 5,000 mg/g). The study has at least 90% power to detect a 20% reduction in the risk of the primary outcome (overall two-sided significance level α = 0.05), the composite of time to first occurrence of CV death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for heart failure. Conclusions: FIGARO-DKD will determine whether an optimally treated cohort of T2D patients with CKD at high risk of CV and renal events will experience cardiorenal benefits from the addition of finerenone to their treatment regimen. Trial Registration: EudraCT number: 2015-000950-39; ClinicalTrials.gov identifier: NCT02545049.
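The stated power calculation (90% power to detect a 20% risk reduction at two-sided α = 0.05) can be reproduced approximately with Schoenfeld's event-count formula for a 1:1 randomized, event-driven trial. This back-of-envelope sketch is an assumption for illustration, not necessarily the trial's actual sample-size method.

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate number of primary-outcome events needed for a two-arm,
    1:1 randomized, event-driven trial (Schoenfeld's formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    return ceil(4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2)

# A 20% risk reduction corresponds to a hazard ratio of 0.80.
events = schoenfeld_events(0.80)
```

Under these assumptions, on the order of 850 composite events would be required, which is why event-driven trials of modest effect sizes enroll thousands of patients over several years.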

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low-middle-income countries. Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low-middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low-middle-income countries.

    Pars-HAO: Hate Speech and Offensive Language Detection on Persian Social Media Using Ensemble Learning

    As social networks continue to gain widespread popularity, there is an urgent need to automatically detect offensive language and hate speech. While there is a wealth of research and datasets available for English in this domain, research and datasets for identifying hate speech and offensive language in Persian text remain scarce. This article introduces a 3-class dataset named Pars-HAO, consisting of 8,013 tweets, to fill this gap. We collected the dataset by combining comments from pages that are more exposed to hate speech with a keyword-based approach; three annotators then labeled the tweets. As a baseline, we employed a combination of a Convolutional Neural Network (CNN) model and two widely used machine learning models, namely Support Vector Machine (SVM) and Logistic Regression (LR). To improve classification performance, we applied the hard voting ensemble learning technique. Experimental results on the Pars-HAO dataset demonstrated that hard voting yielded the best outcome, achieving a macro F1-score of 68.76%.
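Hard voting simply takes the majority class across the base models' predictions for each example. A minimal sketch follows; the model outputs below are hypothetical, not results from the paper.

```python
from collections import Counter

def hard_vote(predictions: list) -> list:
    """Combine per-model class predictions by majority vote.
    `predictions` is a list of equal-length prediction lists, one per model."""
    combined = []
    for votes in zip(*predictions):
        # Counter.most_common breaks ties by first-seen order, i.e. by model order.
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical outputs of the three base models (CNN, SVM, LR) on four tweets,
# with classes 0 = neutral, 1 = offensive, 2 = hate.
cnn = [0, 1, 2, 1]
svm = [0, 1, 1, 1]
lr  = [2, 1, 2, 0]
final = hard_vote([cnn, svm, lr])   # → [0, 1, 2, 1]
```

With heterogeneous base learners (a neural model plus two linear/kernel models), the majority vote can correct uncorrelated individual errors, which is the usual motivation for this ensemble.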

    Computational Modeling of Latent Heat Thermal Energy Storage in a Shell-Tube Unit: Using Neural Networks and Anisotropic Metal Foam

    Latent heat storage in a shell-tube unit is a promising method to store excess solar heat for later use. The shell-tube unit is filled with a phase change material (PCM) combined with a high-porosity anisotropic copper metal foam (MF) of high thermal conductivity. The PCM-MF composite was modeled as an anisotropic porous medium. A two-heat-equation mathematical model, the local thermal non-equilibrium (LTNE) approach, was then adopted to capture the difference between the thermal conductivities of the PCM and the copper foam. The Darcy–Brinkman–Forchheimer formulation was employed to model the natural convection circulations in the molten PCM region. The thermal conductivity and permeability of the porous medium were functions of the anisotropy angle. The finite element method was employed to integrate the governing equations. A neural network was trained on 4,998 sample data points to learn the transient physical behavior of the storage unit and was then used to map the relationship between the control parameters, namely the anisotropy angle and the inlet pressure of the heat transfer fluid (HTF), and the melting behavior in order to optimize the storage design. The results showed that the anisotropy angle significantly affects the energy storage time. The melting volume fraction (MVF) was maximum for a zero anisotropy angle, where the local thermal conductivity was maximum perpendicular to the heated tube. An optimum storage rate could be obtained for an anisotropy angle smaller than 45°. Compared to a uniform MF, utilizing an optimum anisotropy angle could reduce the melting time by about 7% without impacting the unit’s thermal energy storage capacity or adding weight.
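The surrogate-modeling step, training a network on simulation samples and then querying it to map control parameters to melting behavior, can be sketched as follows. The data here are synthetic stand-ins for the simulation samples, and the tiny one-hidden-layer network is an illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulation samples: inputs are the normalized
# anisotropy angle and HTF inlet pressure; the target mimics a melting volume
# fraction that peaks at a zero anisotropy angle.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = (np.cos(X[:, 0] * np.pi / 2) * 0.6 + 0.4 * X[:, 1]).reshape(-1, 1)

# One-hidden-layer MLP (tanh), trained with full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
step = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output layer
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= step * gW1; b1 -= step * gb1; W2 -= step * gW2; b2 -= step * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once trained, evaluating the network is orders of magnitude cheaper than a finite element simulation, which is what makes sweeping the design space for an optimum anisotropy angle practical.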

    Computational Study of Phase Change Heat Transfer and Latent Heat Energy Storage for Thermal Management of Electronic Components Using Neural Networks

    The phase change heat transfer of nano-enhanced phase change materials (NePCMs) was addressed in a heatsink filled with copper metal foam fins. The NePCM was made of 1-tetradecanol with graphite nanoplatelets. The heatsink was an annulus container whose outer surface was subject to convective cooling by an external flow while its inner surface was exposed to a constant heat flux. The governing equations, including momentum and heat transfer with phase change, were expressed in partial differential equation form and integrated using the finite element method. An artificial neural network (ANN) was employed to map the relationship between the anisotropy angle and nanoparticle fraction and the melting volume fraction. The computational model data were used to train the ANN successfully; the trained ANN showed an R-value close to unity, indicating high prediction accuracy. The ANN was then used to produce maps of melting fraction as a function of the design parameters. The impact of the geometrical placement of the metal foam fins and the concentration of the nanoparticles on surface heat transfer was addressed. It was found that spreading the fins (large angles between the fins) could improve the cooling performance of the heatsink without increasing its weight. Moreover, the nanoparticles could reduce the thermal energy storage capacity of the heatsink, since they do not contribute to latent heat storage. In addition, since the nanoparticles generally increase surface heat transfer, they could be beneficial only at 1.0 wt% in the middle stages of the melting process.

    A Linear Quadratic Regression-Based Synchronised Health Monitoring System (SHMS) for IoT Applications

    In recent years, the IoT, along with wireless sensor networks (WSNs), has been widely deployed for various healthcare applications. Healthcare industries now use electronic sensors to reduce human errors while analysing illness more accurately and effectively. This paper proposes an IoT-based health monitoring system to investigate body weight, temperature, blood pressure, respiration and heart rate, room temperature, humidity, and ambient light, along with a synchronised clock model. The system is divided into two phases. In the first phase, the system compares the observed parameters against reference values and generates advisories to parents or guardians through SMS or e-mail. This cost-effective and easy-to-deploy system provides timely intimation to the associated medical practitioner about the patient’s health and reduces the practitioner’s effort; the data collected using the proposed system were accurate. In the second phase, the system was synchronised using a linear quadratic regression clock synchronisation technique to maintain high synchronisation between the sensors and an alarm system. The observation made in this paper is that the synchronisation technique improved the performance of the proposed health monitoring system, reducing the root mean square error to 0.379% and the R-square error by 0.71%.
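One common reading of regression-based clock synchronisation is fitting a polynomial drift model of a sensor clock's offset against the reference clock and subtracting the model's predictions from local timestamps. The sketch below follows that reading on simulated data; it is an illustrative assumption, not the paper's exact technique, and the drift coefficients are made up.

```python
import numpy as np

# Simulated offset measurements between a sensor clock and the reference clock
# over time t (seconds): an initial offset, linear skew, and quadratic aging,
# plus measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 600, 61)
true_offset = 0.002 + 1.5e-4 * t + 2.0e-7 * t ** 2
measured = true_offset + rng.normal(0, 1e-4, t.shape)

# Fit a degree-2 polynomial drift model; its predictions are subtracted from
# local timestamps to keep the node synchronised with the reference.
coeffs = np.polyfit(t, measured, deg=2)
predicted = np.polyval(coeffs, t)
residual_rms = float(np.sqrt(np.mean((predicted - true_offset) ** 2)))
```

Refitting the model periodically from fresh offset exchanges keeps the residual error bounded even as the quadratic aging term grows.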

    Energy Efficient Consensus Approach of Blockchain for IoT Networks with Edge Computing

    Blockchain technology is gaining a lot of attention in various fields, such as intellectual property, finance, and smart agriculture. Its security features have been widely used and integrated with artificial intelligence, the Internet of Things (IoT), software-defined networks (SDN), and more. The consensus mechanism is the core of a blockchain and ultimately determines its performance. In the past few years, many consensus algorithms, such as proof of work (PoW), Ripple, proof of stake (PoS), and practical Byzantine fault tolerance (PBFT), have been designed to improve blockchain performance. However, their high energy requirements, memory utilization, and processing times remain far from what is desired. This paper proposes a consensus approach based on PoW in which a single miner is selected for the mining task, and the mining task is offloaded to the edge network. The miner is selected on the basis of the digitized specifications of the respective machines. The proposed model makes the consensus approach more energy efficient while utilizing less memory and processing time: energy consumption improves by approximately 21% and memory utilization by 24%. Efficient block generation rates were observed at fixed time intervals of 20, 40, and 60 min.
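The abstract does not give the formula by which machine specifications are digitized into a miner-selection score; a plausible weighted-sum sketch is shown below. All machine names, specification fields, and weights are hypothetical.

```python
def select_miner(machines: dict, weights: dict) -> str:
    """Score each candidate edge machine by a weighted sum of its specifications,
    normalized across machines, and return the single best-scoring miner."""
    specs = {k for m in machines.values() for k in m}
    maxima = {s: max(m.get(s, 0) for m in machines.values()) for s in specs}

    def score(m: dict) -> float:
        # Each spec is scaled to [0, 1] so that units do not dominate the sum.
        return sum(weights.get(s, 0) * m.get(s, 0) / maxima[s]
                   for s in specs if maxima[s] > 0)

    return max(machines, key=lambda name: score(machines[name]))

# Hypothetical edge nodes described by digitized specifications.
machines = {
    "edge-A": {"cpu_ghz": 2.4, "ram_gb": 8,  "free_energy": 0.7},
    "edge-B": {"cpu_ghz": 3.2, "ram_gb": 16, "free_energy": 0.9},
    "edge-C": {"cpu_ghz": 2.8, "ram_gb": 8,  "free_energy": 0.5},
}
weights = {"cpu_ghz": 0.4, "ram_gb": 0.3, "free_energy": 0.3}
best = select_miner(machines, weights)   # edge-B dominates on every spec
```

Selecting one capable miner and offloading its work to the edge avoids the redundant hashing of classical PoW, which is where the reported energy and memory savings would come from.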

    Shortcut Learning Explanations for Deep Natural Language Processing: A Survey on Dataset Biases

    The introduction of pre-trained large language models (LLMs) has transformed NLP: fine-tuning them on task-specific datasets has enabled notable advancements in news classification, language translation, and sentiment analysis. However, the growing recognition of bias in textual data has emerged as a critical focus in the NLP community, revealing the inherent limitations of models trained on specific datasets. LLMs exploit these dataset biases and artifacts as expedient shortcuts for prediction, which hinders their generalizability and adversarial robustness; addressing this issue is crucial to enhancing the reliability and resilience of LLMs in various contexts. This survey provides a comprehensive overview of the rapidly growing body of research on shortcut learning in language models, classifying the research into four main areas: the factors behind shortcut learning, the origins of bias, methods for detecting dataset biases, and mitigation strategies for addressing data biases. The goal of this study is to offer a contextualized, in-depth look at the state of shortcut learning in language models, highlighting the major areas of attention and suggesting possible directions for further research.