Recipes for calibration and validation of agent-based models in cancer biomedicine
Computational models and simulations are not just appealing because of their
intrinsic characteristics across spatiotemporal scales, scalability, and
predictive power, but also because the set of problems in cancer biomedicine
that can be addressed computationally exceeds the set of those amenable to
analytical solutions. Agent-based models and simulations are especially
interesting candidates among computational modelling strategies in cancer
research due to their capabilities to replicate realistic local and global
interaction dynamics at a convenient and relevant scale. Yet, the absence of
methods to validate the consistency of the results across scales can hinder
adoption by turning fine-tuned models into black boxes. This review compiles
relevant literature to explore strategies to leverage high-fidelity simulations
of multi-scale, or multi-level, cancer models, with a focus on validation
approached as simulation calibration. We argue that simulation calibration goes
beyond parameter optimization by embedding informative priors to generate
plausible parameter configurations across multiple dimensions.
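The idea of calibration as embedding informative priors, rather than point optimization, can be sketched with approximate Bayesian computation (ABC) rejection sampling. The exponential-growth simulator, the Gaussian prior, and all numerical values below are illustrative assumptions, not taken from the review:

```python
import random

def simulate_tumor_size(growth_rate, days=10, size0=1.0):
    """Toy stand-in for an agent-based simulation: exponential growth."""
    return size0 * (1.0 + growth_rate) ** days

def abc_calibrate(observed, prior_sample, n_draws=5000, tol=5.0, seed=0):
    """ABC rejection sampling: keep prior draws whose simulated output
    lies within `tol` of the observed data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        rate = prior_sample(rng)
        if abs(simulate_tumor_size(rate) - observed) < tol:
            accepted.append(rate)
    return accepted

# Informative prior: growth rate believed to lie around 0.2 per day.
posterior = abc_calibrate(
    observed=6.19,  # hypothetical tumor size after 10 days (arbitrary units)
    prior_sample=lambda rng: rng.gauss(0.2, 0.05),
)
```

The accepted draws approximate a posterior over plausible parameter configurations, rather than a single best-fit point.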
Structural Prediction of Protein–Protein Interactions by Docking: Application to Biomedical Problems
A huge amount of genetic information is available thanks to recent advances in sequencing technologies and greater computational capabilities, but the interpretation of such genetic data at the phenotypic level remains elusive. One of the reasons is that proteins do not act alone, but interact specifically with other proteins and biomolecules, forming intricate interaction networks that are essential for the majority of cell processes and pathological conditions. Thus, characterizing such interaction networks is an important step in understanding how information flows from gene to phenotype. Indeed, structural characterization of protein–protein interactions at atomic resolution has many applications in biomedicine, from diagnosis and vaccine design to drug discovery. However, despite advances in experimental structure determination, the number of interactions for which structural data are available is still very small. In this context, a complementary approach is computational modeling of protein interactions by docking, which is usually composed of two major phases: (i) sampling of the possible binding modes between the interacting molecules and (ii) scoring for the identification of the correct orientations. In addition, prediction of interface and hot-spot residues is very useful in order to guide and interpret mutagenesis experiments, as well as to understand functional and mechanistic aspects of the interaction. Computational docking is already being applied to specific biomedical problems within the context of personalized medicine, for instance, helping to interpret pathological mutations involved in protein–protein interactions, or providing modeled structural data for drug discovery targeting protein–protein interactions.
Funding: Spanish Ministry of Economy grant number BIO2016-79960-R; D.B.B. is supported by a predoctoral fellowship from CONACyT; M.R. is supported by an FPI fellowship from the Severo Ochoa program. We are grateful to the Joint BSC-CRG-IRB Programme in Computational Biology.
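The two-phase docking scheme described above (sampling of binding modes, then scoring) can be illustrated with a toy rigid-body search over a coarse grid of orientations. The quadratic scoring function and the grid values are assumptions for illustration, not a real energy model:

```python
import itertools
import math

def score_orientation(angle, offset):
    """Hypothetical scoring function favouring one binding geometry.
    A real scorer would evaluate shape complementarity and energetics."""
    return -((angle - math.pi / 3) ** 2 + (offset - 2.0) ** 2)

def dock(angles, offsets, top_k=3):
    """Phase (i): enumerate candidate binding modes;
    phase (ii): score and rank them."""
    candidates = itertools.product(angles, offsets)
    ranked = sorted(candidates, key=lambda c: score_orientation(*c), reverse=True)
    return ranked[:top_k]

angles = [i * math.pi / 6 for i in range(12)]  # coarse rotational sampling
offsets = [1.0, 1.5, 2.0, 2.5, 3.0]            # coarse translational sampling
best = dock(angles, offsets)
```

Real docking engines sample six rigid-body degrees of freedom (plus flexibility) and use physics- or knowledge-based scoring, but the sample-then-rank structure is the same.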
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the community of Computational Psychophysiology. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is used to extract task-related features and to mine inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is appended to integrate contextual information from the frame-cube sequence. Experiments are carried out in a trial-level emotion recognition task on the DEAP benchmarking dataset. Experimental results demonstrate that the proposed framework outperforms the classical methods with regard to both emotional dimensions, Valence and Arousal.
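As a minimal illustration of the inter-channel correlation information that such fusion models exploit, the sketch below computes pairwise Pearson correlations between synthetic signal channels as a hand-crafted feature vector; the channels are invented for illustration, and a deep model would learn such structure rather than receive it precomputed:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signal channels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_features(channels):
    """Flatten the upper triangle of the inter-channel correlation
    matrix into a feature vector."""
    return [pearson(channels[i], channels[j])
            for i in range(len(channels))
            for j in range(i + 1, len(channels))]

# Three synthetic channels: ch2 is a scaled copy of ch1, ch3 opposes it.
t = [k / 10 for k in range(100)]
ch1 = [math.sin(x) for x in t]
ch2 = [2.0 * math.sin(x) for x in t]
ch3 = [-math.sin(x) for x in t]
feats = correlation_features([ch1, ch2, ch3])
```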
Nanoinformatics: developing new computing applications for nanomedicine
Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also consider the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended "nanotype" to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, much as happened with the Human Genome and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability.
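Of the five challenges, class imbalance admits the most compact illustration. Below is a minimal sketch of random oversampling, which duplicates minority-class samples until classes are balanced; the two-class toy dataset is invented, and real pipelines would typically use more refined schemes such as SMOTE or class-weighted losses:

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class
    matches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        resampled = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(resampled)
        out_labels.extend([y] * target)
    return out_samples, out_labels

# Toy imbalanced dataset: 3 cases vs 5 controls.
X = [[0.1], [0.2], [0.3], [0.9], [1.1], [1.0], [0.95], [1.05]]
y = ["case"] * 3 + ["control"] * 5
Xb, yb = oversample_minority(X, y)
```

Balancing is done after the train/test split in practice, so duplicated samples never leak into the evaluation set.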
Combined population dynamics and entropy modelling supports patient stratification in chronic myeloid leukemia
Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer
progression, biomarker identification and the design of individualized therapies. Using chronic
myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined
population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification
at unprecedented resolution. Linking CD34+ similarity as a disease progression
marker to patient-derived gene expression entropy separated established CML
progression stages and uncovered additional heterogeneity within disease
stages. Importantly, our patient-data-informed model enables quantitative
approximation of individual patients’ disease history within chronic phase (CP)
and significantly separates “early” from “late” CP. Our findings provide a
novel rationale for personalized and genome-informed disease progression risk
assessment that is independent of and complementary to conventional measures of
CML disease burden and prognosis.
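The gene expression entropy used above as a stratification marker is plain Shannon entropy of the expression distribution; the two toy count profiles below are assumptions for illustration, not patient data:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a gene-expression count profile."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical profiles: expression concentrated on one gene versus
# expression spread evenly across genes.
focused = [90, 5, 3, 2]
dispersed = [25, 25, 25, 25]

e_focused = shannon_entropy(focused)
e_dispersed = shannon_entropy(dispersed)
```

Higher entropy reflects a more dispersed, less regulated expression state, which is the kind of signal that can be linked to progression stage.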
Mathematical biomedicine and modeling avascular tumor growth
In this chapter we review existing continuum models of avascular tumor growth, explaining how they are interrelated and the biophysical insight that they provide. The models range in complexity and include one-dimensional studies of radially symmetric growth and two-dimensional models of tumor invasion in which the tumor is assumed to comprise a single population of cells. We also present more detailed, multiphase models that allow for tumor heterogeneity. The chapter concludes with a summary of the different continuum approaches and a discussion of the theoretical challenges that lie ahead.
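In the simplest radially symmetric setting, such continuum models reduce to an ordinary differential equation for the tumor radius. The sketch below integrates a logistic form by forward Euler as one common illustrative choice, with the saturation size K standing in for the diffusion-limited size of an avascular tumor; all parameter values are made up:

```python
def logistic_growth(r0, K, rho, dt, steps):
    """Forward-Euler integration of dR/dt = rho * R * (1 - R/K):
    growth slows and saturates as the tumor radius approaches the
    nutrient-limited size K."""
    r = r0
    history = [r]
    for _ in range(steps):
        r += dt * rho * r * (1 - r / K)
        history.append(r)
    return history

# Integrate from a small seed radius to near-saturation.
radii = logistic_growth(r0=0.1, K=2.0, rho=1.0, dt=0.01, steps=2000)
```

The full models reviewed in the chapter couple such growth laws to nutrient diffusion and, in the multiphase case, to mass and momentum balances for several cell populations.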
Computational uncertainty in hybrid atomistic-continuum frameworks
This paper was presented at the 3rd Micro and Nano Flows Conference (MNF2011), held at the Makedonia Palace Hotel, Thessaloniki, Greece. The conference was organised by Brunel University and supported by the Italian Union of Thermofluiddynamics, Aristotle University of Thessaloniki, University of Thessaly, IPEM, the Process Intensification Network, the Institution of Mechanical Engineers, the Heat Transfer Society, HEXAG (the Heat Exchange Action Group), and the Energy Institute.
Over the past decade, micro- and nanofluidics have emerged as vital tools in the ongoing drive towards the development of nano-scale analysis and manufacturing systems. Accurate numerical modelling of the phenomena involved at these scales is essential in order to speed up the industrial design process for nanotechnology. However, a parameter often ignored in hybrid simulations is the uncertainty introduced by the numerical modelling of phenomena taking place at micro- and nanoscales. The main interest of the present study is the propagation of the inherent atomistic fluctuations to the continuum solver in the case of multiscale modelling and hybrid solvers.
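The propagation of atomistic fluctuations into the continuum solver can be sketched by treating each coupled quantity as a sample mean carrying a standard error that decays as 1/sqrt(N). The Gaussian "MD measurement" below is a stand-in assumption, not an actual molecular dynamics code:

```python
import math
import random

def sample_atomistic_velocity(n_samples, true_mean=1.0, noise=0.3, seed=0):
    """Stand-in for an MD measurement: thermal fluctuations around the
    true hydrodynamic velocity at the coupling interface."""
    rng = random.Random(seed)
    return [rng.gauss(true_mean, noise) for _ in range(n_samples)]

def continuum_coupling_value(samples):
    """Value handed to the continuum solver, with its statistical
    uncertainty (standard error of the mean)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# More atomistic samples => smaller uncertainty fed to the continuum side.
m_small, se_small = continuum_coupling_value(sample_atomistic_velocity(50))
m_large, se_large = continuum_coupling_value(sample_atomistic_velocity(5000))
```

Tracking this standard error through the hybrid scheme is one way to quantify the uncertainty level the abstract says is often ignored.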
An update on statistical boosting in biomedicine
Statistical boosting algorithms have triggered a lot of research during the
last decade. They combine a powerful machine-learning approach with classical
statistical modelling, offering various practical advantages like automated
variable selection and implicit regularization of effect estimates. They are
extremely flexible, as the underlying base-learners (regression functions
defining the type of effect for the explanatory variables) can be combined with
any kind of loss function (target function to be optimized, defining the type
of regression setting). In this review article, we highlight the most recent
methodological developments on statistical boosting regarding variable
selection, functional regression and advanced time-to-event modelling.
Additionally, we provide a short overview of relevant applications of
statistical boosting in biomedicine.
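The automated variable selection that boosting provides can be seen in a minimal sketch of componentwise L2 boosting: at each iteration, every univariate least-squares base-learner is fit to the current residuals and a small step (step length nu) is taken along the best one, so unselected variables keep a zero coefficient. The toy dataset is an assumption for illustration:

```python
def l2_boost(X, y, n_iter=200, nu=0.1):
    """Componentwise L2 boosting with simple linear base-learners.
    Variables never selected keep a zero coefficient, giving implicit
    variable selection and shrinkage of effect estimates."""
    p = len(X[0])
    coefs = [0.0] * p
    resid = list(y)
    for _ in range(n_iter):
        best_j, best_beta, best_loss = None, 0.0, float("inf")
        for j in range(p):
            xj = [row[j] for row in X]
            beta = sum(v * r for v, r in zip(xj, resid)) / sum(v * v for v in xj)
            loss = sum((r - beta * v) ** 2 for r, v in zip(resid, xj))
            if loss < best_loss:
                best_j, best_beta, best_loss = j, beta, loss
        coefs[best_j] += nu * best_beta          # small step along best learner
        resid = [r - nu * best_beta * row[best_j] for r, row in zip(resid, X)]
    return coefs

# Toy data: the outcome depends only on the first of two predictors.
X = [[i / 10, (i / 10) ** 2] for i in range(-10, 11)]
y = [2.0 * row[0] for row in X]
coefs = l2_boost(X, y)
```

With enough iterations the selected coefficient approaches its least-squares value, while the irrelevant predictor is never chosen; early stopping of `n_iter` is what provides the regularization.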