
    Queueing theory model of pentose phosphate pathway

    Due to its role in maintaining the proper functioning of the cell, the pentose phosphate pathway (PPP) is one of the most important metabolic pathways. It regulates the concentration of simple sugars and provides precursors for the synthesis of amino acids and nucleotides. In addition, it plays a critical role in maintaining an adequate level of NADPH, which the cell needs to fight oxidative stress. These reasons prompted the authors to develop a computational model, based on queueing theory, capable of simulating changes in the concentrations of PPP metabolites. The model has been validated with empirical data from tumor cells. The obtained results demonstrate the stability and accuracy of the model. By applying queueing theory, the model can be further expanded to include successive metabolic pathways. Its use may accelerate research on new drugs, reduce drug costs, and reduce the reliance on the laboratory animals on which new methods of this kind are tested.
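    The core idea, computing microscopic reaction probabilities from rates and substrate counts, can be illustrated with a minimal stochastic simulation. This is a generic Gillespie-style sketch, not the authors' model: the two-step chain, rates, and molecule counts are invented for illustration.

```python
import random

def simulate(concentrations, reactions, steps, seed=0):
    """Fire one reaction per step, chosen with probability proportional
    to its propensity (rate * substrate molecule count)."""
    rng = random.Random(seed)
    state = dict(concentrations)
    for _ in range(steps):
        props = [rate * state[sub] for sub, prod, rate in reactions]
        total = sum(props)
        if total == 0:
            break
        r = rng.uniform(0, total)
        acc = 0.0
        for (sub, prod, rate), p in zip(reactions, props):
            acc += p
            if r <= acc:
                state[sub] -= 1   # one substrate molecule consumed
                state[prod] += 1  # one product molecule created
                break
    return state

# Toy two-step chain standing in for the oxidative branch of the PPP
state = simulate({"G6P": 1000, "6PG": 0, "Ru5P": 0},
                 [("G6P", "6PG", 1.0), ("6PG", "Ru5P", 1.0)],
                 steps=500)
```

    Because every step moves exactly one molecule from substrate to product, total mass is conserved, which is one of the stability properties such a model can be checked against.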

    A Machine-Learning-Based Approach to Prediction of Biogeographic Ancestry within Europe

    Data obtained with the use of massive parallel sequencing (MPS) can be valuable in population genetics studies. In particular, such data harbor the potential for distinguishing samples from different populations, especially adjacent populations of common origin. Machine learning (ML) techniques seem especially well suited for analyzing large datasets obtained using MPS. The Slavic populations constitute about a third of the population of Europe and inhabit a large area of the continent, while being relatively closely related in population genetics terms. In this proof-of-concept study, various ML techniques were used to classify DNA samples from Slavic and non-Slavic individuals. The primary objective was to empirically evaluate the feasibility of discerning the genetic provenance of individuals of Slavic descent who exhibit genetic similarity, with the overarching goal of categorizing DNA specimens derived from representatives of diverse Slavic populations. Raw sequencing data were pre-processed to obtain a 1200-character binary vector. Three classifiers were used: Random Forest, Support Vector Machine (SVM), and XGBoost. The most promising results were obtained using an SVM with a linear kernel, with 99.9% accuracy and F1-scores of 0.9846–1.000 for all classes.

    Standards of nasal provocation tests

    The nasal allergen provocation test (NAPT) plays an important role in the diagnosis of allergy. It is based on the natural reaction of the nasal mucosa after application of the tested substances to its surface. Nasal challenges are divided into specific (allergenic) and non-specific. Typical clinical symptoms depend on the immunological reaction: sneezing, itching, rhinorrhea, and nasal blockage. Eosinophilic mucosal infiltration is observed in the late-phase reaction. Typically, the result of a nasal challenge should be assessed by both the physician and the patient. The immunological reaction, especially the level of particular allergenic mediators, can also be investigated. Some objective methods for the final assessment of the results are recommended. The paper also includes information about non-specific nasal provocation with aspirin and about occupational diseases.

    Application of Convolutional Neural Networks to Image Processing and Interpretation

    This article describes the application of Convolutional Neural Networks to image processing and explains how they work. It presents network layers, types of activation functions, an example of the AlexNet network architecture, the use of the loss function and the cross-entropy method to calculate the loss during tests, the L2 and Dropout methods used for weight regularization, and optimization of the loss function using Stochastic Gradient Descent.
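    The cross-entropy loss with an L2 weight penalty, as described in the article, can be written out directly. A minimal NumPy sketch; the logits and labels are toy values, not taken from the article:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(logits, labels, weights=None, l2=0.0):
    """Mean cross-entropy over a batch, optionally with an L2 penalty
    on the model weights (weight decay)."""
    probs = softmax(logits)
    n = logits.shape[0]
    nll = -np.log(probs[np.arange(n), labels] + 1e-12)
    loss = nll.mean()
    if weights is not None:
        loss += l2 * sum((w ** 2).sum() for w in weights)
    return loss

logits = np.array([[2.0, 0.5, -1.0], [0.1, 3.0, 0.2]])
loss = cross_entropy_loss(logits, np.array([0, 1]))
```

    During training, an SGD step would subtract the learning rate times the gradient of this loss; the L2 term is what pulls the weights toward zero.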

    Deep Learning Techniques in the Classification of ECG Signals Using R-Peak Detection Based on the PTB-XL Dataset

    Deep Neural Networks (DNNs) are state-of-the-art machine learning algorithms whose application to electrocardiographic signals is gaining importance. So far, only a limited number of studies have applied or optimized DNNs on ECG databases. To explore and achieve effective ECG recognition, this paper presents a convolutional neural network that encodes a single QRS complex together with additional entropy-based features. The study aims to determine which combination of signal information yields the best classification result. The analyzed information included the raw ECG signal, entropy-based features computed from the raw ECG signal, extracted QRS complexes, and entropy-based features computed from the extracted QRS complexes. The tests were based on the classification of 2, 5, and 20 classes of heart diseases. The research was carried out on the data contained in the PTB-XL database. At the same time, an innovative method of extracting QRS complexes is presented, based on aggregating the results of established algorithms for multi-lead signals using the k-means method. The obtained results show that adding entropy-based features and extracted QRS complexes to the raw signal is beneficial; raw signals with entropy-based features but without extracted QRS complexes performed much worse.
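    One common way to compute an entropy-based feature from a signal segment is Shannon entropy over an amplitude histogram; the paper's exact estimator may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Shannon entropy (bits) of the signal's amplitude histogram.
    A flat segment concentrates in one bin and scores 0; a noisy
    segment spreads over many bins and scores higher."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = np.zeros(256)          # constant baseline segment
noisy = rng.normal(size=256)  # noisy segment
```

    Features like this are cheap to compute per QRS complex and can be concatenated to the raw samples fed into the network.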

    ECG Signal Classification Using Deep Learning Techniques Based on the PTB-XL Dataset

    The analysis and processing of ECG signals is a key approach in the diagnosis of cardiovascular diseases. The main task in this area is classification, which is increasingly supported by machine-learning algorithms. In this work, a deep neural network was developed for the automatic classification of primary ECG signals. The research was carried out on the data contained in the PTB-XL database. Three neural network architectures were proposed: the first based on a convolutional network, the second on SincNet, and the third on a convolutional network with additional entropy-based features. The dataset was divided into training, validation, and test sets in proportions of 70%, 15%, and 15%, respectively. The studies were conducted for 2, 5, and 20 classes of disease entities. The convolutional network with entropy-based features obtained the best classification result. The convolutional network without entropy-based features performed slightly worse but had the highest computational efficiency, due to its significantly lower number of neurons.
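    The 70%/15%/15% split can be reproduced with a simple index permutation; a sketch (the dataset size is arbitrary, and the paper may have stratified differently):

```python
import numpy as np

def split_indices(n, val=0.15, test=0.15, seed=0):
    """Shuffle 0..n-1 and carve off test and validation slices;
    the remainder (70% here) is the training set."""
    idx = np.random.default_rng(seed).permutation(n)
    n_test, n_val = int(n * test), int(n * val)
    return idx[n_test + n_val:], idx[n_test:n_test + n_val], idx[:n_test]

train_idx, val_idx, test_idx = split_indices(1000)
```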

    Study of the Few-Shot Learning for ECG Classification Based on the PTB-XL Dataset

    The electrocardiogram (ECG) is a fundamental tool of cardiology. The ECG consists of P, QRS, and T waves. Information derived from the intervals and amplitudes of these waves is associated with various heart diseases. The first step in isolating the features of an ECG is the accurate detection of the R-peaks in the QRS complex. The dataset was based on the PTB-XL database, and the signals from all twelve leads were analyzed. This research focuses on determining the applicability of Few-Shot Learning (FSL) for proximity-based classification of ECG signals. The study was conducted by training deep convolutional neural networks to recognize 2, 5, and 20 different heart disease classes. The results of the FSL network were compared with the evaluation score of a neural network performing softmax-based classification. The proposed network interprets a set of QRS complexes extracted from ECG signals. The FSL network achieved higher accuracy in classifying healthy/sick patients (89.2–93.2%) than the softmax-based classification network (89.2–90.5%). It also achieved better results in classifying five different disease classes, with an accuracy of 77.9–80.2% as opposed to 75.1–77.1%. In addition, a method of R-peak labeling and QRS complex extraction was implemented; this procedure converts a 12-lead signal into a set of R waves using detection algorithms and the k-means algorithm.
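    R-peak detection itself can be illustrated with a naive threshold-plus-local-maximum detector on a synthetic trace; the study aggregates established algorithms, for which this toy version is only a stand-in:

```python
import numpy as np

def detect_r_peaks(sig, threshold, min_gap):
    """Mark local maxima above `threshold` that are at least
    `min_gap` samples apart (a crude refractory period)."""
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] > threshold and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

# Synthetic "ECG": unit spikes every 100 samples on a flat baseline
sig = np.zeros(500)
sig[[50, 150, 250, 350, 450]] = 1.0
peaks = detect_r_peaks(sig, threshold=0.5, min_gap=40)
```

    In the multi-lead setting, each lead yields its own candidate peak positions, and clustering (e.g. k-means) can then merge candidates that refer to the same beat.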

    IoT Application of Transfer Learning in Hybrid Artificial Intelligence Systems for Acute Lymphoblastic Leukemia Classification

    Acute lymphoblastic leukemia is the most common cancer in children, and its diagnosis relies mainly on microscopic blood tests of the bone marrow. There is therefore a need for the correct classification of white blood cells. The approach developed in this article is based on an optimized, small, IoT-friendly neural network architecture, applying transfer learning within a hybrid artificial intelligence system. The hybrid system consists of a MobileNet v2 encoder pre-trained on the ImageNet dataset and machine learning algorithms serving as the classification head: the XGBoost, Random Forest, and Decision Tree algorithms. In this work, the average accuracy exceeded 90%, reaching 97.4%. This work shows that hybrid artificial intelligence systems can achieve high classification accuracy while placing a low computational load on the processing units. The methods used in this study, supported by the promising results, can be an effective tool for diagnosing other blood diseases, facilitating the work of a network of medical institutions in carrying out the correct treatment schedule.
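    The encoder-plus-head pattern can be sketched as follows. Loading a pre-trained MobileNet v2 is out of scope here, so a fixed random projection stands in for the frozen encoder, and the images and labels are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def encode(images, proj):
    """Stand-in for a frozen encoder: flatten each image and apply a
    fixed projection to get a compact feature vector."""
    flat = images.reshape(len(images), -1)
    return flat @ proj

# Toy "blood smear" images from two classes with different mean intensity
n, side = 120, 8
images = rng.normal(size=(n, side, side))
labels = rng.integers(0, 2, size=n)
images += labels[:, None, None] * 2.0  # inject class signal

proj = rng.normal(size=(side * side, 16))
features = encode(images, proj)
head = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
acc = head.score(features, labels)
```

    The design point is that only the small head is trained on the target task; the expensive feature extractor is reused as-is, which is what makes the system IoT-friendly.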

    The impact of the number of high temporal resolution water meters on the determinism of water consumption in a district metered area

    Developments in data mining techniques have significantly influenced the progress of Intelligent Water Systems (IWSs). Learning about the hydraulic conditions enables the development of increasingly reliable predictive models of water consumption. The non-stationary, non-linear, and inherently stochastic nature of water consumption data at the level of a single water meter means that its deterministic characteristics remain impossible to observe, and their burden of randomness creates interpretive difficulties. A deterministic model of water consumption was developed based on data from high-temporal-resolution water meters. Seven machine learning algorithms were used and compared to build the predictive models. In addition, an attempt was made to estimate how many water meters' data are needed for the model to bear the hallmarks of determinism. The most accurate model was obtained using Support Vector Regression (8.9%), and the determinism of the model was achieved using time series from eleven water meters of multi-family buildings.
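    A Support Vector Regression model of a daily consumption cycle can be sketched on synthetic data; the demand curve, cyclic hour-of-day features, and forecast horizon below are invented stand-ins for the district-metered-area series:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hours = np.arange(0, 24 * 14)  # two weeks of hourly readings
# Toy aggregate demand: daily cycle plus noise
demand = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)

# Encode hour-of-day cyclically so midnight wraps smoothly
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])

# Train on all but the last day, then forecast that day
model = SVR(kernel="rbf", C=10.0).fit(X[:-24], demand[:-24])
pred = model.predict(X[-24:])
mape = float(np.mean(np.abs(pred - demand[-24:]) / demand[-24:]) * 100)
```

    Aggregating more meters smooths the noise term relative to the cyclic signal, which is one intuition for why determinism emerges only above a certain number of meters.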

    Integrating glycolysis, citric acid cycle, pentose phosphate pathway, and fatty acid beta-oxidation into a single computational model

    The metabolic network of a living cell is highly intricate and involves complex interactions between various pathways. In this study, we propose a computational model that integrates glycolysis, the pentose phosphate pathway (PPP), fatty acid beta-oxidation, and the tricarboxylic acid (TCA) cycle using queueing theory. The model uses literature data on metabolite concentrations and enzyme kinetic constants to calculate the probabilities of individual reactions occurring on a microscopic scale, which can be viewed as the reaction rates on a macroscopic scale. The model has some limitations, including not accounting for all the reactions in which the metabolites are involved; a genetic algorithm (GA) was therefore used to estimate the impact of these external processes. Despite these limitations, the model achieved high accuracy and stability, providing real-time observation of changes in metabolite concentrations. Models of this type can help in better understanding the mechanisms of biochemical reactions in cells, which can ultimately contribute to the prevention and treatment of aging, cancer, metabolic diseases, and neurodegenerative disorders.
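    The genetic-algorithm step, fitting a correction for unmodelled reactions, can be sketched with a minimal GA over a single "external flux" parameter; the operators and target value are illustrative assumptions, not the paper's implementation:

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=40, seed=0):
    """Minimal GA minimizing `fitness`: keep the best half (elitism),
    fill the rest with blended pairs plus gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fitness)[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, (hi - lo) * 0.02)
            children.append(min(hi, max(lo, child)))  # clip to bounds
        population = parents + children
    return min(population, key=fitness)

# Suppose the observed steady state implies an external flux of 0.7;
# the GA recovers it by minimizing the squared model-vs-data error.
target = 0.7
best = genetic_search(lambda x: (x - target) ** 2, bounds=(0.0, 2.0))
```

    In the full model, the fitness would instead compare simulated metabolite concentrations against the literature values, with one such parameter per unaccounted-for process.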