2,826 research outputs found

    FPT algorithms to recognize well covered graphs

    Given a graph G, let vc(G) and vc^+(G) be the sizes of a minimum and a maximum minimal vertex cover of G, respectively. We say that G is well covered if vc(G) = vc^+(G) (that is, all minimal vertex covers have the same size). Determining whether a graph is well covered is a coNP-complete problem. In this paper, we obtain O^*(2^{vc})-time and O^*(1.4656^{vc^+})-time algorithms to decide well-coveredness, improving results of Boria et al. (2015). Moreover, using crown decomposition, we show that these problems admit kernels with a linear number of vertices. Alves et al. (2018) proved that recognizing well-covered graphs is coW[2]-hard when the independence number α(G) = n - vc(G) is the parameter. Contrasting with this coW[2]-hardness, we present an FPT algorithm to decide well-coveredness when α(G) and the degeneracy of the input graph G are aggregate parameters. Finally, we use the primeval decomposition technique to obtain a linear-time algorithm for extended P_4-laden graphs and (q, q-4)-graphs, which is FPT parameterized by q, improving results of Klein et al. (2013). Comment: 15 pages, 2 figures
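    The definition above admits a direct, if exponential, check: a graph is well covered exactly when all of its maximal independent sets (the complements of its minimal vertex covers) have the same size. The brute-force sketch below is illustrative only and is not the paper's FPT algorithm; the graph encoding (vertex count plus edge list) is our own choice.

```python
from itertools import combinations

def is_well_covered(n, edges):
    """Brute-force check (exponential; for illustration only): a graph is
    well covered iff every maximal independent set -- equivalently, the
    complement of every minimal vertex cover -- has the same size."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    def maximal(s):
        # s is maximal iff every outside vertex has a neighbor inside s.
        return all(adj[w] & s for w in range(n) if w not in s)

    sizes = {len(s)
             for r in range(n + 1)
             for s in map(set, combinations(range(n), r))
             if independent(s) and maximal(s)}
    return len(sizes) == 1

# C4 (the 4-cycle) is well covered; P3 (the path on 3 vertices) is not,
# since {0, 2} and {1} are both maximal independent sets of P3.
print(is_well_covered(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_well_covered(3, [(0, 1), (1, 2)]))                  # False
```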

    Perivascular adipose tissue as a relevant fat depot for cardiovascular risk in obesity

    Obesity is associated with increased risk of premature death, morbidity, and mortality from several cardiovascular diseases (CVDs), including stroke, coronary heart disease (CHD), myocardial infarction, and congestive heart failure. However, this is not a straightforward relationship. Although several studies have substantiated that obesity confers an independent and additive risk of all-cause and cardiovascular death, there is significant variability in these associations, with some lean individuals developing diseases and others remaining healthy despite severe obesity, the so-called metabolically healthy obese. Part of this variability has been attributed to the heterogeneity in both the distribution of body fat and the intrinsic properties of adipose tissue depots, including developmental origin, adipogenic and proliferative capacity, glucose and lipid metabolism, hormonal control, thermogenic ability, and vascularization. In obesity, these depot-specific differences translate into specific fat distribution patterns, which are closely associated with differential cardiometabolic risks. Among these depots, the adventitial fat layer, also known as perivascular adipose tissue (PVAT), is of major importance. Similar to visceral adipose tissue, PVAT has a pathophysiological role in CVDs. PVAT influences vascular homeostasis by releasing numerous vasoactive factors, cytokines, and adipokines, which can readily target the underlying smooth muscle cell layers, regulating vascular tone, distribution of blood flow, angiogenesis, inflammatory processes, and redox status. In this review, we summarize the current knowledge and discuss the role of PVAT within the scope of adipose tissue as a major contributing factor to obesity-associated cardiovascular risk. Finally, we point out relevant clinical studies documenting the relationship between PVAT dysfunction and CVD, with a focus on the potential mechanisms by which PVAT contributes to obesity-related CVDs.

    Predictability of COVID-19 hospitalizations, intensive care unit admissions, and respiratory assistance in Portugal: Longitudinal Cohort study

    Funding Information: The authors thank the Portuguese Directorate-General of Health (DGS) for providing the data. Data are available upon reasonable request. This work was supported by Fundação para a Ciência e a Tecnologia (FCT), through IDMEC, under LAETA project (UIDB/50022/2020), the IPOscore (DSAIPA/DS/0042/2018) and Data2Help (DSAIPA/AI/0044/2018) projects, the contract CEECIND/01399/2017 to RSC, FCT/MCTES funds for INESC-ID (UIDB/50021/2020), and the Associate Laboratory for Green Chemistry - LAQV (UIDB/50006/2020 and UIDP/50006/2020). Background: In the face of the current COVID-19 pandemic, the timely prediction of upcoming medical needs for infected individuals enables better and quicker care provision, when necessary, and management decisions within health care systems. Objective: This work aims to predict the medical needs (hospitalizations, intensive care unit admissions, and respiratory assistance) and survivability of individuals testing positive for SARS-CoV-2 infection in Portugal. Methods: A retrospective cohort of 38,545 individuals infected during 2020 was used. Predictions of medical needs were performed using state-of-the-art machine learning approaches at various stages of a patient's cycle, namely, at testing (prehospitalization), at posthospitalization, and during postintensive care. A thorough optimization of state-of-the-art predictors was undertaken to assess the ability to anticipate medical needs and infection outcomes using demographic and comorbidity variables, as well as dates associated with symptom onset, testing, and hospitalization. Results: For the target cohort, 75% of hospitalization needs could be identified at the time of testing for SARS-CoV-2 infection. Over 60% of respiratory assistance needs could be identified at the time of hospitalization. Both predictions had >50% precision.
    Conclusions: The conducted study pinpoints the relevance of the proposed predictive models as good candidates to support medical decisions in the Portuguese population, including both monitoring and in-hospital care decisions. A clinical decision support system is further provided to this end.
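    As a concrete illustration of the kind of pre-hospitalization predictor described above, the sketch below trains an off-the-shelf classifier on synthetic demographic and comorbidity features and reports precision and recall. Everything here (the feature set, the synthetic risk model, and the random-forest choice) is an assumption for illustration; it is not the study's tuned pipeline or cohort data.

```python
# Hypothetical sketch, not the study's models or cohort: predict a binary
# "hospitalization" outcome at testing time from demographic and
# comorbidity features, then report precision as the abstract does.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 95, n)
n_comorbidities = rng.poisson(1.0, n)
days_since_onset = rng.integers(0, 14, n)  # symptom onset to testing

# Synthetic ground truth: risk grows with age and comorbidity count.
logit = 0.05 * (age - 60) + 0.8 * n_comorbidities - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([age, n_comorbidities, days_since_onset])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prec = precision_score(y_te, pred)
rec = recall_score(y_te, pred)
print(f"precision={prec:.2f} recall={rec:.2f}")
```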

    TriSig: Assessing the statistical significance of triclusters

    Tensor data analysis allows researchers to uncover novel patterns and relationships that cannot be obtained from matrix data alone. The information inferred from the patterns provides valuable insights into disease progression, bioproduction processes, weather fluctuations, and group dynamics. However, spurious and redundant patterns hamper this process. This work proposes a statistical framework to assess the probability that patterns in tensor data deviate from null expectations, extending well-established principles for assessing the statistical significance of patterns in matrix data. A comprehensive discussion of binomial testing for false positive discoveries is provided in the light of variable dependencies, temporal dependencies and misalignments, and p-value corrections under the Benjamini-Hochberg procedure. Results gathered from the application of state-of-the-art triclustering algorithms over distinct real-world case studies in biochemical and biotechnological domains confer validity to the proposed statistical framework while revealing vulnerabilities of some triclustering searches. The proposed assessment can be incorporated into existing triclustering algorithms to mitigate false positive/spurious discoveries and further prune the search space, reducing their computational complexity. Availability: The code is freely available at https://github.com/JupitersMight/TriSig under the MIT license.
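    To make the binomial testing concrete, the sketch below scores a single tricluster under a simple null model in which each cell independently matches the pattern's symbol with a fixed probability. The statistic and its parameters are a hedged illustration of the idea, not TriSig's exact test; a multiple-hypothesis correction (e.g. Benjamini-Hochberg across all candidate triclusters) would be applied afterwards.

```python
from math import comb

def tricluster_pvalue(n_rows, k_rows, p_cell, n_cols, n_ctxs):
    """Illustrative binomial significance test for a tricluster: under a
    null where each cell independently matches the pattern's symbol with
    probability p_cell, a row supports the full column x context pattern
    with probability p = p_cell ** (n_cols * n_ctxs).  The p-value is the
    binomial tail P(X >= k_rows) over n_rows independent rows."""
    p = p_cell ** (n_cols * n_ctxs)
    return sum(comb(n_rows, x) * p**x * (1 - p) ** (n_rows - x)
               for x in range(k_rows, n_rows + 1))

# A 10-row x 3-column x 2-context tricluster found among 100 rows, with a
# 5-symbol alphabet (p_cell = 0.2), is far from null expectations:
print(tricluster_pvalue(100, 10, 0.2, 3, 2))
```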

    Scaling pattern mining through non-overlapping variable partitioning

    Biclustering algorithms play a central role in the biotechnological and biomedical domains. The knowledge extracted supports the identification of putative regulatory modules, essential to understanding diseases, aiding therapy research, and advancing biological knowledge. However, given the NP-hard nature of the biclustering task, algorithms with optimality guarantees tend to scale poorly in the presence of high-dimensional data. To this end, we propose a pipeline for clustering-based vertical partitioning that takes into consideration both parallelization and cross-partition pattern merging needs. Given a specific type of pattern coherence, these clusters are built based on the likelihood that variables form those patterns. Subsequently, the patterns extracted per cluster are merged into a final set of closed patterns. This approach is evaluated using five published datasets. Results show that, on some of the tested data, execution times yield statistically significant improvements when variables are clustered based on their likelihood of forming specific types of patterns, as opposed to partitions based on dissimilarity or randomness. This work offers a first step toward understanding the efficiency impact of vertical partitioning criteria along the different stages of pattern mining and biclustering algorithms. Availability: All the code is freely available at https://github.com/JupitersMight/pattern_merge under the MIT license.
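    A minimal sketch of the vertical (column) partitioning step, under assumptions of our own: columns of a discretized matrix are grouped by how often they hold equal symbols, a simple stand-in for the paper's likelihood-of-pattern criterion. Per-partition mining and cross-partition merging of closed patterns would follow.

```python
# Hedged illustration, not the paper's pipeline: partition the variables
# (columns) of a discretized matrix into non-overlapping groups so that
# columns likely to co-occur in constant patterns land together.
import numpy as np

def partition_columns(X, n_parts):
    n_cols = X.shape[1]
    # Similarity = fraction of rows where two columns hold equal symbols.
    sim = np.array([[np.mean(X[:, i] == X[:, j]) for j in range(n_cols)]
                    for i in range(n_cols)])
    # k-center style seeding: start from column 0, then repeatedly pick the
    # column least similar to any existing seed, so partitions start apart.
    seeds = [0]
    while len(seeds) < n_parts:
        nxt = min((c for c in range(n_cols) if c not in seeds),
                  key=lambda c: max(sim[c, s] for s in seeds))
        seeds.append(nxt)
    parts = [[s] for s in seeds]
    for c in range(n_cols):
        if c in seeds:
            continue
        # Assign each column to the partition it is most similar to.
        best = max(range(n_parts),
                   key=lambda k: np.mean([sim[c, m] for m in parts[k]]))
        parts[best].append(c)
    return [sorted(p) for p in parts]

rng = np.random.default_rng(1)
# One block of three identical columns plus three noise columns.
base = rng.integers(0, 3, (50, 1))
X = np.hstack([np.tile(base, 3), rng.integers(0, 3, (50, 3))])
print(partition_columns(X, 2))
```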

    KiMoSys 2.0: an upgraded database for submitting, storing and accessing experimental data for kinetic modeling

    Funding: UIDB/50006/2020, CEECIND/01399/2017, UIDB/50022/2020. KiMoSys (https://kimosys.org), launched in 2014, is a public repository of published experimental data containing concentration data of metabolites, protein abundances, and flux data. It offers a web-based interface and upload facility to share data, making it accessible in structured formats, while also integrating the kinetic models associated with the data. In addition, it supplies tools to simplify the construction of ODE (ordinary differential equation)-based models of metabolic networks. In this release, we present an update of KiMoSys with new data and several new features, including (i) an improved web interface, (ii) a new multi-filter mechanism, (iii) the introduction of data visualization tools, (iv) the addition of downloadable data in machine-readable formats, (v) an improved data submission tool, (vi) the integration of a kinetic model simulation environment, and (vii) the introduction of a unique persistent identifier system. We believe that this new version will improve its role as a valuable resource for the systems biology community. Database URL: www.kimosys.org.

    Evaluating the statistical significance of triclusters

    This work was supported by Fundação para a Ciência e a Tecnologia (FCT) under the PhD grant to LA (2021.07759.BD), INESC-ID plurianual funding, and the contract CEECIND/01399/2017/CP1462/CT0015 to RSC. The authors also wish to acknowledge the European Union's Horizon BioLaMer project under grant agreement 101099487. Publisher Copyright: © 2023. Tensor data analysis allows researchers to uncover novel patterns and relationships that cannot be obtained from tabular data alone. The information inferred from multi-way patterns can offer valuable insights into disease progression, bioproduction processes, behavioral responses, weather fluctuations, or social dynamics. However, spurious patterns often hamper this process. This work proposes a statistical framework to assess the probability that patterns in tensor data deviate from null expectations, extending well-established principles for assessing the statistical significance of patterns in tabular data. A principled discussion of binomial testing to mitigate false positive discoveries is provided in the light of variable dependencies, temporal associations and misalignments, and multi-hypothesis correction. Results gathered from the application of triclustering algorithms over distinct real-world case studies in biotechnological domains confer validity to the proposed statistical framework while revealing vulnerabilities of reference triclustering searches. The proposed assessment can be incorporated into existing triclustering algorithms to minimize spurious occurrences, rank patterns, and further prune the search space, reducing their computational complexity.

    DI2: prior-free and multi-item discretization of biological data and its applications

    Funding Information: This work was supported by Fundação para a Ciência e a Tecnologia (FCT), through IDMEC, under LAETA project (UIDB/50022/2020), IPOscore (DSAIPA/DS/0042/2018), and ILU (DSAIPA/DS/0111/2018). This work was further supported by the Associate Laboratory for Green Chemistry (LAQV), financed by national funds from FCT/MCTES (UIDB/50006/2020 and UIDP/50006/2020), INESC-ID plurianual (UIDB/50021/2020), and the contract CEECIND/01399/2017 to RSC. The funding entities did not partake in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. Background: A considerable number of data mining approaches for biomedical data analysis, including state-of-the-art associative models, require a form of data discretization. Although diverse discretization approaches have been proposed, they generally work under a strict set of statistical assumptions which are arguably insufficient to handle the diversity and heterogeneity of clinical and molecular variables within a given dataset. In addition, although an increasing number of symbolic approaches in bioinformatics are able to assign multiple items to values occurring near discretization boundaries for superior robustness, there are no reference principles on how to perform multi-item discretizations. Results: In this study, an unsupervised discretization method, DI2, for variables with arbitrarily skewed distributions is proposed. Statistical tests applied to assess differences in performance confirm that DI2 generally outperforms well-established discretization methods with statistical significance. Within classification tasks, DI2 displays either competitive or superior levels of predictive accuracy, particularly for classifiers able to accommodate border values.
    Conclusions: This work proposes a new unsupervised method for data discretization, DI2, that takes into account the underlying data regularities, the presence of outlier values disrupting expected regularities, and the relevance of border values. DI2 is available at https://github.com/JupitersMight/DI2.
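    The border-value behavior the conclusions refer to can be sketched as follows. The boundary-plus-tolerance interface below is an assumption for illustration, not DI2's actual procedure (which derives its discretization from the data's distribution): a value falling within a tolerance of a bin boundary is assigned both adjacent items.

```python
# Hedged sketch of multi-item discretization (illustrative, not DI2's
# exact procedure): values near a bin boundary receive both adjacent items.
import numpy as np

def multi_item_discretize(x, boundaries, tol=0.1):
    """Return, per value, the sorted list of bin indices it maps to; a
    value within `tol` of a boundary receives both neighboring bins."""
    items = []
    for v in x:
        bin_idx = int(np.searchsorted(boundaries, v))
        assigned = {bin_idx}
        for b_idx, b in enumerate(boundaries):
            if abs(v - b) <= tol:
                assigned.update({b_idx, b_idx + 1})
        items.append(sorted(assigned))
    return items

x = np.array([0.2, 0.95, 1.5, 2.04])
print(multi_item_discretize(x, boundaries=[1.0, 2.0]))
# [[0], [0, 1], [1], [1, 2]] -- 0.95 and 2.04 sit near boundaries
# and therefore get two items each.
```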

    Mathematical modeling of recombinant Escherichia coli aerobic batch fermentations

    In this work, three competing unstructured mathematical models for biomass growth by recombinant E. coli strains, with different acetate inhibition kinetics terms, were evaluated for batch processes at constant temperature and pH. The models considered the dynamics of biomass growth, acetate accumulation, substrate consumption, Green Fluorescent Protein (GFP) production, and three metabolic pathways of E. coli. Parameter estimation and model validation were carried out using the Systems Biology Toolbox for MATLAB (The MathWorks) with different initial glucose concentrations (5 g/kg to 25 g/kg) in a 5 dm3 bioreactor. Model discrimination was based on two model selection criteria: Akaike's information criterion and the normalized quadratic difference between the simulated and experimental data. The first model, following the Jerusalimsky approach, is an approximation to non-competitive substrate inhibition. The Cockshott approach describes inhibition at high acetate levels, and the Levenspiel approach considers the critical inhibitory acetate concentration that limits growth. Within the studied experimental range, the Jerusalimsky model provided a good approximation between real and simulated values and should be favored. The model describes the experimental data satisfactorily well.
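    Jerusalimsky-type kinetics take the form mu = mu_max * S/(K_S + S) * K_I/(K_I + A): growth limited by substrate S and inhibited non-competitively by acetate A. The batch sketch below integrates a minimal model of this kind; every parameter value is illustrative rather than the paper's fitted estimates, and the GFP production term is omitted.

```python
# Hedged sketch of an unstructured batch model with Jerusalimsky-type
# acetate inhibition; all parameter values are assumptions, not the
# fitted values from the paper.
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX, KS, KI = 0.55, 0.05, 5.0   # 1/h, g/kg, g/kg (illustrative)
Y_XS, Y_AX = 0.5, 0.25             # biomass/substrate, acetate/biomass yields

def batch_ode(t, y):
    X, S, A = y  # biomass, glucose, acetate (g/kg)
    mu = MU_MAX * S / (KS + S) * KI / (KI + A)  # Jerusalimsky kinetics
    dX = mu * X
    dS = -dX / Y_XS if S > 0 else 0.0  # stop consumption at depletion
    dA = Y_AX * dX
    return [dX, dS, dA]

# Batch run: X0 = 0.1, S0 = 5 g/kg glucose (the low end of the studied
# range), no initial acetate, 12 h horizon.
sol = solve_ivp(batch_ode, (0, 12), [0.1, 5.0, 0.0], dense_output=True)
X_end, S_end, A_end = sol.y[:, -1]
print(f"t=12h: X={X_end:.2f} S={max(S_end, 0):.2f} A={A_end:.2f} g/kg")
```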