
    Learning Opposites Using Neural Networks

    Many research works have successfully extended algorithms such as evolutionary algorithms, reinforcement agents, and neural networks using "opposition-based learning" (OBL). Two types of "opposites" have been defined in the literature, namely type-I and type-II. The former are linear in nature and applicable to the variable space, hence easy to calculate. On the other hand, type-II opposites capture the "oppositeness" in the output space. In fact, type-I opposites are considered a special case of type-II opposites where inputs and outputs have a linear relationship. However, in many real-world problems, inputs and outputs do in fact exhibit a nonlinear relationship. Therefore, type-II opposites are expected to be better at capturing the sense of "opposition" in terms of the input-output relation. In the absence of any knowledge about the problem at hand, there seems to be no intuitive way to calculate the type-II opposites. In this paper, we introduce an approach to learn type-II opposites from the given inputs and their outputs using artificial neural networks (ANNs). We first perform opposition mining on the sample data, and then use the mined data to learn the relationship between an input x and its opposite x̆. We have validated our algorithm using various benchmark functions to compare it against a recently introduced evolving fuzzy inference approach. The results show the better performance of the neural approach to learning the opposites. This will create new possibilities for integrating oppositional schemes within existing algorithms, promising a potential increase in convergence speed and/or accuracy. Comment: To appear in proceedings of the 23rd International Conference on Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016.
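    The abstract's pipeline (sample the function, mine approximately opposite pairs in the output space, then fit an ANN to map an input to its opposite) can be illustrated with a short sketch; the benchmark function, sampling density, matching rule, and network size below are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of opposition mining followed by learning type-II opposites
# with an ANN. The benchmark function, sampling scheme, nearest-output
# matching, and network size are illustrative assumptions, not the settings
# used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

def f(x):
    # Stand-in benchmark function with a nonlinear input-output relation.
    return np.sin(3 * x) + 0.5 * x ** 2

a, b = -2.0, 2.0                          # variable bounds
xs = np.random.default_rng(0).uniform(a, b, 2000)
ys = f(xs)
y_min, y_max = ys.min(), ys.max()

# Type-I opposites are linear and live in the variable space.
x_type1 = a + b - xs

# Opposition mining (type-II): the "opposite" output of y is
# y_breve = y_min + y_max - y; pick the sampled input whose output is closest.
y_breve = y_min + y_max - ys
idx = np.abs(ys[None, :] - y_breve[:, None]).argmin(axis=1)
x_type2 = xs[idx]                         # mined type-II opposites

# Learn the mapping x -> x_breve from the mined pairs with a small ANN.
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
net.fit(xs.reshape(-1, 1), x_type2)
print("predicted type-II opposite of x = 1.0:", net.predict([[1.0]])[0])
```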

    Classification and Retrieval of Digital Pathology Scans: A New Dataset

    In this paper, we introduce a new dataset, Kimia Path24, for image classification and retrieval in digital pathology. We use whole-scan images of 24 different tissue textures to generate 1,325 test patches of size 1000×1000 pixels (0.5 mm × 0.5 mm). Training data can be generated according to the preferences of the algorithm designer and can range from approximately 27,000 to over 50,000 patches if the preset parameters are adopted. We propose a compound patch-and-scan accuracy measurement that makes achieving high accuracies quite challenging. In addition, we set the benchmarking line by applying LBP, a dictionary approach, and convolutional neural nets (CNNs) and report their results. The highest accuracy was 41.80% for the CNN. Comment: Accepted for presentation at the Workshop for Computer Vision for Microscopy Image Analysis (CVMI 2017) @ CVPR 2017, Honolulu, Hawaii.
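    A minimal sketch of how such a compound patch-and-scan accuracy can be computed, assuming the compound score is the product of an overall patch-level accuracy and an average per-scan accuracy; this is an assumption for illustration, and the dataset paper's exact definition should be consulted.

```python
# Hedged sketch of a compound patch-and-scan accuracy. The multiplicative
# combination of a patch-level score and a per-scan score is an assumption
# for illustration, not necessarily the Kimia Path24 definition.
import numpy as np

def compound_accuracy(y_true_scan, y_pred_scan, n_scans=24):
    """y_true_scan / y_pred_scan: scan label for each test patch (0..n_scans-1)."""
    y_true = np.asarray(y_true_scan)
    y_pred = np.asarray(y_pred_scan)

    # Patch-level accuracy over all test patches.
    eta_p = (y_true == y_pred).mean()

    # Scan-level accuracy: average of per-scan patch accuracies.
    per_scan = [
        (y_pred[y_true == s] == s).mean()
        for s in range(n_scans)
        if np.any(y_true == s)
    ]
    eta_w = float(np.mean(per_scan))

    return eta_p * eta_w  # compound score (assumed multiplicative)

# Toy usage with random predictions over 1,325 patches and 24 classes.
rng = np.random.default_rng(0)
truth = rng.integers(0, 24, 1325)
preds = rng.integers(0, 24, 1325)
print(compound_accuracy(truth, preds))
```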

    Computational Insights into Pharmacokinetic Profiling of Amygdalin: An In-Silico Study

    Amygdalin is a naturally occurring cyanogenic glycoside which has been used as an alternative anti-cancer agent despite controversies surrounding its efficacy and safety. This study utilized computational approaches to investigate the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of amygdalin based on its molecular structure. Amygdalin was modeled in ChemBio3D and submitted to the SwissADME and admetSAR servers for ADMET parameter prediction. The in-silico simulations indicated suboptimal pharmacological properties for amygdalin, including low lipophilicity, poor bioavailability, minimal blood-brain barrier permeability, and non-compliance with drug-likeness criteria. Additional pharmacokinetic modeling through Simcyp suggested rapid clearance and a short half-life after intravenous administration. While toxicity was predicted to be low at regular dosages, the overall pharmacological limitations may pose challenges for amygdalin’s efficacy as an anti-cancer therapy. The computational findings provide comprehensive insights into amygdalin’s drug-like behavior and can inform future in vitro/in vivo investigations of this naturally derived compound.
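    The drug-likeness portion of such a screen can also be approximated locally; the sketch below uses RDKit rather than the SwissADME/admetSAR servers used in the study, and a stand-in SMILES string, so it only illustrates a rule-of-five style check rather than reproducing the reported results.

```python
# Hedged sketch of a local drug-likeness check using RDKit's Lipinski
# descriptors. This is NOT the SwissADME/admetSAR pipeline used in the study.
# The SMILES below is a stand-in (aspirin); replace it with amygdalin's SMILES
# from a source such as PubChem.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles = "CC(=O)Oc1ccccc1C(=O)O"          # stand-in molecule; swap in amygdalin
mol = Chem.MolFromSmiles(smiles)

properties = {
    "MolWt": Descriptors.MolWt(mol),
    "LogP": Descriptors.MolLogP(mol),
    "HBD": Lipinski.NumHDonors(mol),
    "HBA": Lipinski.NumHAcceptors(mol),
    "TPSA": Descriptors.TPSA(mol),
}
violations = sum([
    properties["MolWt"] > 500,
    properties["LogP"] > 5,
    properties["HBD"] > 5,
    properties["HBA"] > 10,
])
print(properties, "rule-of-five violations:", violations)
```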

    Targeted Greybox Fuzzing with Static Lookahead Analysis

    Automatic test generation typically aims to generate inputs that explore new paths in the program under test in order to find bugs. Existing work has, therefore, focused on guiding the exploration toward program parts that are more likely to contain bugs by using an offline static analysis. In this paper, we introduce a novel technique for targeted greybox fuzzing using an online static analysis that guides the fuzzer toward a set of target locations, for instance, located in recently modified parts of the program. This is achieved by first semantically analyzing each program path that is explored by an input in the fuzzer's test suite. The results of this analysis are then used to control the fuzzer's specialized power schedule, which determines how often to fuzz inputs from the test suite. We implemented our technique by extending a state-of-the-art, industrial fuzzer for Ethereum smart contracts and evaluated its effectiveness on 27 real-world benchmarks. Using an online analysis is particularly suitable for the domain of smart contracts since it does not require any code instrumentation; instrumenting contracts changes their semantics. Our experiments show that targeted fuzzing significantly outperforms standard greybox fuzzing for reaching 83% of the challenging target locations (with median speed-ups of up to 14x).
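    The core idea, letting an online per-path analysis decide how much energy the power schedule assigns to each input, can be sketched as follows; the distance metric, exponential weighting, and helper functions are assumptions for illustration, not the scheme implemented in the paper's fuzzer.

```python
# Hedged sketch of a target-directed power schedule. The distance metric and
# the exponential weighting are illustrative assumptions; they only show how
# an online analysis result per input could steer how often that input is
# mutated.
import random

def energy(lookahead_distance, base=10, max_energy=200):
    """More mutations for inputs whose explored path is closer to a target."""
    if lookahead_distance is None:          # target unreachable from this path
        return 1
    return min(max_energy, int(base * 2 ** max(0, 5 - lookahead_distance)))

def fuzz_round(test_suite, analyze_path, mutate, run):
    """One round: analyze each input's path online, then fuzz proportionally."""
    new_inputs = []
    for inp in test_suite:
        dist = analyze_path(inp)            # online static lookahead analysis
        for _ in range(energy(dist)):
            child = mutate(inp)
            if run(child):                  # keep inputs that add new coverage
                new_inputs.append(child)
    return new_inputs

# Toy usage with stand-in analysis, mutation, and execution functions.
suite = [b"seed"]
suite += fuzz_round(
    suite,
    analyze_path=lambda inp: random.choice([None, 1, 3, 6]),
    mutate=lambda inp: bytes(b ^ random.randrange(256) for b in inp),
    run=lambda inp: random.random() < 0.01,
)
```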

    A Powerful Paradigm for Cardiovascular Risk Stratification Using Multiclass, Multi-Label, and Ensemble-Based Machine Learning Paradigms: A Narrative Review

    Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared to the more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratories. Methods: A total of 265 CVD-based studies were selected using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model. Due to its popularity and recent development, the study analyzed the above three paradigms using machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk-of-bias (RoB) in the CVD framework. These ML techniques were then extended to mobile and cloud-based infrastructure. Findings: The most popular biomarkers used were office-based, laboratory-based, and image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is very important for CVD stratification to avoid RoB. It was observed that the most popular classification paradigm is multiclass, followed by ensemble and multi-label. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are likely to be the future. Conclusions: AI-based methods for CVD risk assessment are most promising and successful. The choice of GT is most vital in AI-based models to prevent RoB. The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.
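    A compact way to see how the three paradigms differ in practice is a scikit-learn sketch on synthetic data; the features and labels below are placeholders, not the clinical biomarkers or the specific models covered by the review.

```python
# Hedged sketch contrasting the three paradigms discussed in the review:
# multiclass, multi-label, and ensemble classification. The synthetic features
# and risk labels are placeholders, not clinical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                 # stand-in office/lab biomarkers

# Multiclass: one label with several risk classes (low / medium / high).
y_multiclass = rng.integers(0, 3, 500)
LogisticRegression(max_iter=1000).fit(X, y_multiclass)

# Multi-label: several binary outcomes predicted jointly (e.g., CVD, stroke).
y_multilabel = rng.integers(0, 2, size=(500, 2))
MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, y_multilabel)

# Ensemble: many weak learners combined into one classifier.
RandomForestClassifier(n_estimators=100).fit(X, y_multiclass)
```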

    The kidney failure risk equation: evaluation of novel input variables including eGFR estimated using the CKD-EPI 2021 equation in 59 cohorts

    SIGNIFICANCE STATEMENT: The kidney failure risk equation (KFRE) uses age, sex, GFR, and urine albumin-to-creatinine ratio (ACR) to predict 2- and 5-year risk of kidney failure in populations with eGFR <60 ml/min per 1.73 m². However, the CKD-EPI 2021 creatinine equation for eGFR is now recommended for use but has not been fully tested in the context of the KFRE. In 59 cohorts comprising 312,424 patients with CKD, the authors assessed the predictive performance and calibration associated with use of the CKD-EPI 2021 equation and whether additional variables and accounting for the competing risk of death improve the KFRE's performance. The KFRE generally performed well using the CKD-EPI 2021 eGFR in populations with eGFR <45 ml/min per 1.73 m² and was not improved by adding the 2-year prior eGFR slope and cardiovascular comorbidities. BACKGROUND: The kidney failure risk equation (KFRE) uses age, sex, GFR, and urine albumin-to-creatinine ratio (ACR) to predict kidney failure risk in people with GFR <60 ml/min per 1.73 m². METHODS: Using 59 cohorts with 312,424 patients with CKD, we tested several modifications to the KFRE for their potential to improve its performance: using the CKD-EPI 2021 creatinine equation for eGFR, substituting 1-year average ACR for single-measure ACR and 1-year average eGFR in participants with high eGFR variability, and adding 2-year prior eGFR slope and cardiovascular comorbidities. We also assessed calibration of the KFRE in subgroups of eGFR and age before and after accounting for the competing risk of death. RESULTS: The KFRE remained accurate and well calibrated overall using the CKD-EPI 2021 eGFR equation. The other modifications did not improve KFRE performance. In subgroups with eGFR 45-59 ml/min per 1.73 m² and in older adults using the 5-year time horizon, the KFRE demonstrated systematic underprediction and overprediction, respectively. We developed and tested a new model with a spline term in eGFR and incorporating the competing risk of mortality, resulting in more accurate calibration in those specific subgroups but not overall. CONCLUSIONS: The original KFRE is generally accurate for eGFR <45 ml/min per 1.73 m² when using the CKD-EPI 2021 equation. Incorporating competing risk methodology and splines for eGFR may improve calibration in low-risk settings with longer time horizons. Including historical averages, eGFR slopes, or a competing risk design did not meaningfully alter KFRE performance in most circumstances.
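    For readers unfamiliar with the KFRE's structure, the sketch below shows the general shape of such a risk equation (baseline survival raised to an exponentiated linear predictor of age, sex, eGFR, and log ACR); the baseline survival and coefficients are illustrative placeholders, not the published KFRE values.

```python
# Hedged sketch of the general form of a KFRE-style risk equation:
# risk = 1 - S0 ** exp(linear predictor of age, sex, eGFR, and log ACR).
# S0 and the coefficients below are ILLUSTRATIVE placeholders, not the
# published KFRE values; use the published equation for any real calculation.
import math

def kfre_style_risk(age, male, egfr, acr_mg_g,
                    s0=0.97, b_age=-0.02, b_male=0.1, b_egfr=-0.04, b_lnacr=0.45):
    lp = (b_age * (age / 10)
          + b_male * (1 if male else 0)
          + b_egfr * (egfr / 5)
          + b_lnacr * math.log(acr_mg_g))
    return 1 - s0 ** math.exp(lp)

# Toy usage: a 65-year-old man with eGFR 30 ml/min per 1.73 m² and ACR 300 mg/g.
print(round(kfre_style_risk(65, True, 30, 300), 3))
```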

    Cardiovascular/Stroke Risk Stratification in Diabetic Foot Infection Patients Using Deep Learning-Based Artificial Intelligence: An Investigative Study

    A diabetic foot infection (DFI) is among the most serious, incurable, and costly-to-treat conditions. The presence of a DFI makes CVD/stroke risk stratification highly nonlinear, posing difficulties for machine learning (ML) systems. In addition, there is a limited number of well-explained ML paradigms due to comorbidity, sample size limits, and weak scientific and clinical validation methodologies. Deep neural networks (DNNs) are powerful learning machines that generalize well in nonlinear situations. The objective of this article is to propose a novel investigation of deep learning (DL) solutions for predicting CVD/stroke risk in DFI patients. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) search strategy was used to select 207 studies. We hypothesize that a DFI is responsible for increased morbidity and mortality by worsening atherosclerotic disease and aggravating coronary artery disease (CAD). Since surrogate biomarkers for CAD, such as carotid artery disease, can be used to monitor CVD, we can use DL-based models, namely Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNNs), for CVD/stroke risk prediction in DFI patients, combining covariates such as office- and laboratory-based biomarkers and carotid ultrasound image phenotype (CUSIP) lesions along with DFI severity. We confirmed the viability of CVD/stroke risk stratification in DFI patients. Strong designs were found in the research on DL architectures for CVD/stroke risk stratification. Finally, we analyzed the AI bias and proposed strategies for the early diagnosis of CVD/stroke in DFI patients. Since DFI patients have aggressive atherosclerotic disease leading to prominent CVD/stroke risk, we conclude that the DL paradigm is very effective for predicting the risk of CVD/stroke in DFI patients.
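    A minimal sketch of the kind of LSTM risk model described here is shown below; the input layout (a short sequence of visits with office/laboratory, CUSIP, and DFI-severity covariates) and all layer sizes are assumptions for illustration, not an architecture evaluated in the reviewed studies.

```python
# Hedged sketch of an LSTM-based CVD/stroke risk model of the kind proposed
# for DFI patients. Feature counts, sequence length, and layer sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class CVDRiskLSTM(nn.Module):
    def __init__(self, n_features=12, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # binary CVD/stroke risk score

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))

# Toy usage: 8 patients, 5 visits, 12 covariates per visit.
model = CVDRiskLSTM()
risk = model(torch.randn(8, 5, 12))
print(risk.shape)                          # torch.Size([8, 1])
```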

    Impact of COVID-19 on cardiovascular testing in the United States versus the rest of the world

    Objectives: This study sought to quantify and compare the decline in volumes of cardiovascular procedures between U.S. and non-U.S. institutions during the early phase of the coronavirus disease-2019 (COVID-19) pandemic. Background: The COVID-19 pandemic has disrupted the care of many non-COVID-19 illnesses. Reductions in diagnostic cardiovascular testing around the world have led to concerns over the implications of reduced testing for cardiovascular disease (CVD) morbidity and mortality. Methods: Data were submitted to INCAPS-COVID (International Atomic Energy Agency Non-Invasive Cardiology Protocols Study of COVID-19), a multinational registry comprising 909 institutions in 108 countries (including 155 facilities in 40 U.S. states), assessing the impact of the COVID-19 pandemic on volumes of diagnostic cardiovascular procedures. Data were obtained for April 2020 and compared with baseline procedure volumes from March 2019. We compared laboratory characteristics, practices, and procedure volumes between U.S. and non-U.S. facilities and between U.S. geographic regions and identified factors associated with volume reduction in the United States. Results: Reductions in the volumes of procedures in the United States were similar to those in non-U.S. facilities (68% vs. 63%, respectively; p = 0.237), although U.S. facilities reported greater reductions in invasive coronary angiography (69% vs. 53%, respectively; p < 0.001). Significantly more U.S. than non-U.S. facilities reported increased use of telehealth and patient screening measures such as temperature checks, symptom screenings, and COVID-19 testing. Reductions in volumes of procedures differed between U.S. regions, with larger declines observed in the Northeast (76%) and Midwest (74%) than in the South (62%) and West (44%). In a multivariable analysis, COVID-19 prevalence, staff redeployments, outpatient centers, and urban centers were associated with greater volume reductions in U.S. facilities. Conclusions: We observed marked reductions in U.S. cardiovascular testing in the early phase of the pandemic and significant variability between U.S. regions. The association between volume reductions and COVID-19 prevalence in the United States highlights the need for proactive efforts to maintain access to cardiovascular testing in areas most affected by COVID-19 outbreaks.