
    Biochemical parameter estimation vs. benchmark functions: A comparative study of optimization performance and representation design

    Computational Intelligence methods, which include Evolutionary Computation and Swarm Intelligence, can efficiently and effectively identify optimal solutions to complex optimization problems by exploiting the cooperative and competitive interplay among their individuals. The exploration and exploitation capabilities of these meta-heuristics are typically assessed on well-known suites of benchmark functions specifically designed for numerical global optimization. However, their performance can change drastically on real-world optimization problems. In this paper, we investigate this issue by considering the Parameter Estimation (PE) of biochemical systems, a common computational problem in Systems Biology. To evaluate the effectiveness of various meta-heuristics in solving the PE problem, we compare their performance on a set of benchmark functions and on a set of synthetic biochemical models characterized by search spaces with an increasing number of dimensions. Our results show that some state-of-the-art optimization methods, which largely outperform the other meta-heuristics on benchmark functions, perform considerably worse when applied to the PE problem. We also show that a limiting factor of these methods concerns the representation of the solutions: indeed, by means of a simple semantic transformation, these algorithms can be turned into competitive alternatives. We corroborate this finding by performing the PE of a model of metabolic pathways in red blood cells. Overall, we argue that classic benchmark functions cannot fully represent all the features that make real-world optimization problems hard to solve; this is the case, in particular, for the PE of biochemical systems. We also show that optimization problems must be carefully analyzed to select an appropriate representation, in order to actually obtain the performance promised by benchmark results.
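
    As an illustration of the representation issue discussed above, the sketch below shows a logarithmic reparametrization of kinetic constants in Python. This is only one plausible instance of a "simple semantic transformation" (the abstract does not name the transformation actually used), and the bounds, objective, and helper names are hypothetical.

        import numpy as np

        def decode(genome, log_lo=-6.0, log_hi=3.0):
            # Map a genome in [0, 1]^d to kinetic constants spanning several
            # orders of magnitude by searching in log10 space (hypothetical bounds).
            return 10.0 ** (log_lo + genome * (log_hi - log_lo))

        def fitness(genome, simulate, target):
            # Hypothetical objective: distance between simulated and target dynamics;
            # any meta-heuristic can evolve `genome` while the decoding handles scale.
            return np.linalg.norm(simulate(decode(genome)) - target)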

    MedGA: A novel evolutionary method for image enhancement in medical imaging systems

    Medical imaging systems often require the application of image enhancement techniques to help physicians in anomaly/abnormality detection and diagnosis, as well as to improve the quality of images that undergo automated image processing. In this work we introduce MedGA, a novel image enhancement method based on Genetic Algorithms that is able to improve the appearance and the visual quality of images characterized by a bimodal gray level intensity histogram, by strengthening their two underlying sub-distributions. MedGA can be exploited as a pre-processing step for the enhancement of images with a nearly bimodal histogram distribution, to improve the results achieved by downstream image processing techniques. As a case study, we use MedGA as a clinical expert system for contrast-enhanced Magnetic Resonance image analysis, considering Magnetic Resonance guided Focused Ultrasound Surgery for uterine fibroids. The performance of MedGA is quantitatively evaluated by means of various image enhancement metrics and compared against conventional state-of-the-art image enhancement techniques, namely histogram equalization, bi-histogram equalization, encoding and decoding Gamma transformations, and sigmoid transformations. We show that MedGA considerably outperforms the other approaches in terms of signal and perceived image quality, while preserving the input mean brightness. MedGA may have a significant impact in real healthcare environments, representing an intelligent solution for Clinical Decision Support Systems in radiology practice for image enhancement, to visually assist physicians during their interactive decision-making tasks, as well as for the improvement of downstream automated pipelines that compute clinically useful measurements.
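
    To make the evolutionary idea concrete, the sketch below evolves a single threshold on a bimodal histogram with a minimal genetic-algorithm loop. The fitness used here (between-class variance, an Otsu-like criterion) and all hyper-parameters are stand-ins: the abstract does not specify MedGA's actual fitness function or its enhancement transformation.

        import numpy as np

        def between_class_variance(hist, t):
            # Stand-in fitness: separation of the two sub-distributions split at t.
            p = hist / hist.sum()
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 == 0 or w1 == 0:
                return 0.0
            mu0 = (np.arange(t) * p[:t]).sum() / w0
            mu1 = (np.arange(t, hist.size) * p[t:]).sum() / w1
            return w0 * w1 * (mu0 - mu1) ** 2

        def evolve_threshold(hist, pop_size=30, generations=50, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.integers(1, hist.size, size=pop_size)
            for _ in range(generations):
                fit = np.array([between_class_variance(hist, t) for t in pop])
                parents = pop[np.argsort(fit)[-pop_size // 2:]]         # truncation selection
                children = parents + rng.integers(-5, 6, parents.size)  # integer mutation
                pop = np.clip(np.concatenate([parents, children]), 1, hist.size - 1)
            fit = np.array([between_class_variance(hist, t) for t in pop])
            return int(pop[np.argmax(fit)])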

    MSR32 COVID-19 Beds’ Occupancy and Hospital Complaints: A Predictive Model

    Objectives: The COVID-19 pandemic limited the number of patients who could be promptly and adequately cared for. The proposed research aims at predicting the number of patients requiring any type of hospitalization, considering not only patients affected by COVID-19 but also those with other severe viral diseases, untreated chronic and frail patients, and oncological patients, in order to estimate potential hospital lawsuits and complaints. Methods: An unsupervised artificial neural network approach, the Self-Organizing Map (SOM), was designed to identify clusters that reveal changes in hospital behavior and to forecast hospital bed occupancy from pre- and post-COVID-19 time series, supporting the early prediction of litigation and potential lawsuits. In this way, hospital managers and public institutions can perform an impact analysis to decide whether to invest resources to increase, or to reallocate, hospital beds and staffing capacity. Data came from the UK National Health Service (NHS) statistical and digital portals, covering a 4-year time horizon consisting of 2 pre- and 2 post-COVID-19 years. Results: The clusters revealed two principal behaviors in the allocation of resources. When the number of non-COVID hospitalized patients increased, the number of complaints decreased (-55%). A higher number of complaints (+17%) was registered against a considerable reduction in the number of occupied beds (-26%). The management of hospital beds is therefore a crucial factor that can influence the complaints trend. Conclusions: The model could significantly support the management of hospital capacity, helping decision-makers take rational decisions under conditions of uncertainty. In addition, the model is highly replicable for estimating other resources that become extremely scarce during emergencies or pandemic crises, such as hospital beds, healthcare professionals, and equipment, and can be adapted to different local and national settings.
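
    A minimal sketch of the clustering step, assuming the MiniSom library and hypothetical monthly feature vectors (bed occupancy and complaints) derived from the NHS time series; the map size, features, and training schedule below are illustrative, not the paper's actual configuration.

        import numpy as np
        from minisom import MiniSom

        # Hypothetical input: one row per month, columns = [bed occupancy rate, complaints].
        features = np.random.default_rng(0).random((48, 2))   # placeholder for 4 years of monthly data
        features = (features - features.mean(axis=0)) / features.std(axis=0)

        som = MiniSom(4, 4, input_len=2, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train_random(features, num_iteration=1000)

        # Each month is assigned to its best-matching unit; units group months with similar behavior.
        clusters = [som.winner(row) for row in features]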

    Computational Intelligence for Life Sciences

    Computational Intelligence (CI) is a computer science discipline encompassing the theory, design, development and application of biologically and linguistically derived computational paradigms. Traditionally, the main elements of CI are Evolutionary Computation, Swarm Intelligence, Fuzzy Logic, and Neural Networks. CI aims at proposing new algorithms able to solve complex computational problems by taking inspiration from natural phenomena. In an intriguing turn of events, these nature-inspired methods have been widely adopted to investigate a plethora of problems related to nature itself. In this paper we present a variety of CI methods applied to three problems in life sciences, highlighting their effectiveness: we describe how protein folding can be addressed by exploiting Genetic Programming, the inference of haplotypes can be tackled using Genetic Algorithms, and the estimation of biochemical kinetic parameters can be performed by means of Swarm Intelligence. We show that CI methods can generate very high-quality solutions, providing a sound methodology to solve complex optimization problems in life sciences.

    USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets

    Prostate cancer is the most common malignant tumor in men, but prostate Magnetic Resonance Imaging (MRI) analysis remains challenging. Besides whole prostate gland segmentation, the capability to differentiate between the blurry boundaries of the Central Gland (CG) and the Peripheral Zone (PZ) can support differential diagnosis, since tumor frequency and severity differ in these regions. To tackle the prostate zonal segmentation task, we propose a novel Convolutional Neural Network (CNN), called USE-Net, which incorporates Squeeze-and-Excitation (SE) blocks into U-Net. Specifically, the SE blocks are added after every Encoder (Enc USE-Net) or Encoder-Decoder block (Enc-Dec USE-Net). This study evaluates the generalization ability of CNN-based architectures on three T2-weighted MRI datasets, each consisting of a different number of patients and heterogeneous image characteristics, collected by different institutions. The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations. USE-Net is compared against three state-of-the-art CNN-based architectures (i.e., U-Net, pix2pix, and Mixed-Scale Dense Network), along with a semi-automatic continuous max-flow model. The results show that training on the union of the datasets generally outperforms training on each dataset separately, allowing for both intra- and cross-dataset generalization. Enc USE-Net shows good overall generalization under any training condition, while Enc-Dec USE-Net remarkably outperforms the other methods when trained on all datasets. These findings reveal that the SE blocks' adaptive feature recalibration provides excellent cross-dataset generalization when testing is performed on samples of the datasets used during training.
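
    For reference, the sketch below shows a standard Squeeze-and-Excitation block in PyTorch, of the kind USE-Net inserts after its encoder (and encoder-decoder) blocks; the reduction ratio and the exact placement are assumptions, since the abstract does not specify them.

        import torch
        import torch.nn as nn

        class SqueezeExcitation(nn.Module):
            """Generic SE block: squeeze via global average pooling, excitation via two
            fully connected layers, and channel-wise rescaling with a sigmoid gate."""
            def __init__(self, channels, reduction=16):   # reduction ratio is an assumption
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),
                )

            def forward(self, x):
                b, c, _, _ = x.shape
                w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
                return x * w   # recalibrate each feature map by its learned channel weight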

    Piecewise polynomial approximation of probability density functions with application to uncertainty quantification for stochastic PDEs

    The probability density function (PDF) associated with a given set of samples is approximated by a piecewise-linear polynomial constructed with respect to a binning of the sample space. The kernel functions form a compactly supported basis for the space of such polynomials, namely finite element hat functions, and are centered at the bin nodes rather than at the samples, as is the case in the standard kernel density estimation approach. This feature naturally provides an approximation that is scalable with respect to the sample size. On the other hand, unlike other strategies that use a finite element approach, the proposed approximation does not require the solution of a linear system. In addition, a simple rule that relates the bin size to the sample size eliminates the need for bandwidth selection procedures. The proposed density estimator has unitary integral, does not require a constraint to enforce positivity, and is consistent. The proposed approach is validated through numerical examples in which samples are drawn from known PDFs. The approach is also used to determine approximations of (unknown) PDFs associated with outputs of interest that depend on the solution of a stochastic partial differential equation.
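
    A minimal one-dimensional sketch of this construction, assuming uniform bins over the sample range: each sample's unit weight is split linearly between its two bounding bin nodes, and the resulting node coefficients define a piecewise-linear density. Variable names and the bin-count rule are illustrative.

        import numpy as np

        def hat_density(samples, n_bins=32):
            """Piecewise-linear density estimate built on hat functions at bin nodes.
            Since the hat functions form a partition of unity, the estimate is
            non-negative and integrates to one by construction."""
            lo, hi = samples.min(), samples.max()
            nodes = np.linspace(lo, hi, n_bins + 1)
            h = nodes[1] - nodes[0]

            # Split each sample's weight between its two bounding nodes.
            idx = np.clip(((samples - lo) / h).astype(int), 0, n_bins - 1)
            frac = (samples - nodes[idx]) / h
            mass = np.zeros(n_bins + 1)
            np.add.at(mass, idx, 1.0 - frac)
            np.add.at(mass, idx + 1, frac)
            mass /= samples.size

            # Node coefficients: node mass divided by the integral of its hat function.
            weights = np.full(n_bins + 1, h)
            weights[0] = weights[-1] = h / 2
            coeffs = mass / weights
            return lambda x: np.interp(x, nodes, coeffs)

    Drawing samples from a known density and comparing hat_density(samples) against it reproduces the kind of validation experiment described above.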

    A CUDA-powered method for the feature extraction and unsupervised analysis of medical images

    Image texture extraction and analysis are fundamental steps in computer vision. In particular, considering the biomedical field, quantitative imaging methods are increasingly gaining importance because they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact on clinical practice. In this work, we present a novel method, called CHASM (Cuda, HAralick & SoM), which is accelerated on the graphics processing unit (GPU) for quantitative imaging analyses based on Haralick features and on the self-organizing map (SOM). The Haralick feature extraction step relies upon the gray-level co-occurrence matrix, which is computationally burdensome on medical images characterized by a high bit depth. The downstream analyses exploit the SOM with the goal of identifying the underlying clusters of pixels in an unsupervised manner. CHASM is conceived to leverage the parallel computation capabilities of modern GPUs. Analyzing ovarian cancer computed tomography images, CHASM achieved up to ∼19.5× and ∼37× speed-up factors for the Haralick feature extraction and for the SOM execution, respectively, compared to the corresponding sequential C++ implementations. Such computational results point out the potential of GPUs in clinical research.
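
    As a sequential point of reference for the pipeline described above (not the CUDA implementation itself), the sketch below computes a gray-level co-occurrence matrix and Haralick-style texture features for one patch with scikit-image; the patch size, distances, angles, and selected properties are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def haralick_features(patch, levels=256):
            # GLCM-based texture descriptors for one image patch; this CPU version
            # illustrates the per-window work that CHASM offloads to the GPU.
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            return np.array([graycoprops(glcm, prop).mean()
                             for prop in ("contrast", "homogeneity", "energy", "correlation")])

        # Example: features of a random 8-bit patch (placeholder for a CT sub-window).
        patch = np.random.default_rng(0).integers(0, 256, size=(32, 32), dtype=np.uint8)
        print(haralick_features(patch))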

    Emerging ensembles of kinetic parameters to identify experimentally observed phenotypes

    Background: Determining the value of kinetic constants for a metabolic system under exact physiological conditions is an extremely hard task. However, this kind of information is of pivotal relevance to effectively simulate a biological phenomenon as complex as metabolism. Results: To overcome this issue, we propose to investigate the emerging properties of ensembles of sets of kinetic constants leading to the biological readout observed in different experimental conditions. To this aim, we exploit information retrievable from constraint-based analyses (i.e., metabolic flux distributions at steady state) with the goal of generating feasible values for kinetic constants through the mass action law. The sets retrieved in this step are then used to parametrize a mechanistic model, which is simulated to reconstruct the dynamics of the system (until the metabolic steady state is reached) for each experimental condition. Every parametrization that is in accordance with the expected metabolic phenotype is collected in an ensemble, whose features are analyzed to determine the emerging properties of a phenotype. In this work we apply the proposed approach to identify ensembles of kinetic parameters for five metabolic phenotypes of E. coli, by analyzing five different experimental conditions associated with the ECC2comp model recently published by Hädicke and collaborators. Conclusions: Our results suggest that the parameter values of just a few reactions are responsible for the emergence of a metabolic phenotype. Notably, in contrast with constraint-based approaches such as Flux Balance Analysis, the methodology used in this paper does not require assuming that metabolism is optimizing towards a specific goal.
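
    The parameter-generation step can be sketched as follows: under mass-action kinetics, a candidate kinetic constant for a reaction follows from its steady-state flux and the substrate concentrations. The reaction, units, and values below are hypothetical, and the sampling and consistency checks used in the paper are not reproduced here.

        import numpy as np

        def mass_action_constant(flux, substrate_concs, stoichiometry):
            # Mass action law: v = k * prod([S_i]^n_i)  =>  k = v / prod([S_i]^n_i)
            return flux / np.prod(np.power(substrate_concs, stoichiometry))

        # Hypothetical reaction A + 2B -> C with steady-state flux 0.8 mM/s,
        # [A] = 1.5 mM and [B] = 0.4 mM.
        k = mass_action_constant(0.8, np.array([1.5, 0.4]), np.array([1, 2]))
        print(k)   # candidate kinetic constant consistent with the observed flux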

    Exploring the Higgs Portal with 10/fb at the LHC

    We consider the impact of new exotic colored and/or charged matter interacting through the Higgs portal on Standard Model Higgs boson searches at the LHC. Such Higgs portal couplings can induce shifts in the effective Higgs-gluon-gluon and Higgs-photon-photon couplings, thus modifying the Higgs production and decay patterns. We consider two possible interpretations of the current LHC Higgs searches based on ~5/fb of data at each detector: 1) a Higgs boson in the mass range (124-126) GeV and 2) a 'hidden' heavy Higgs boson which is underproduced due to the suppression of its gluon fusion production cross section. We first perform a model-independent analysis of the allowed sizes of such shifts in light of the current LHC data. As a class of possible candidates for new physics giving rise to such shifts, we investigate the effects of new scalar multiplets charged under the Standard Model gauge symmetries. We determine the scalar parameter space that is allowed by current LHC Higgs searches, and compare with complementary LHC searches that are sensitive to the direct production of colored scalar states.
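
    The mechanism summarized above can be written schematically; the LaTeX sketch below assumes a single complex colored scalar S with a portal coupling, omits normalization and color factors, and uses our own notation rather than the paper's.

        % Portal coupling and the induced vev-dependence of the scalar mass
        \mathcal{L} \supset -\kappa\, |H|^2 |S|^2
        \quad\Longrightarrow\quad
        m_S^2(v) = m_{S,0}^2 + \tfrac{1}{2}\,\kappa\, v^2 .
        % Low-energy theorem: the scalar loop shifts the effective hgg (and, for charged
        % states, h\gamma\gamma) coupling relative to the Standard Model contribution by an amount
        \delta c_g \;\propto\; \frac{\partial \log m_S^2(v)}{\partial \log v}
                   \;=\; \frac{\kappa\, v^2}{m_S^2(v)} ,
        % so the sign of the portal coupling determines whether gluon fusion is enhanced or suppressed.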