    Randomized Dynamic Mode Decomposition

    This paper presents a randomized algorithm for computing the near-optimal low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging techniques to compute low-rank matrix approximations at a fraction of the cost of deterministic algorithms, easing the computational challenges arising in the area of 'big data'. The idea is to derive a small matrix from the high-dimensional data, which is then used to efficiently compute the dynamic modes and eigenvalues. The algorithm is presented in a modular probabilistic framework, and the approximation quality can be controlled via oversampling and power iterations. The effectiveness of the resulting randomized DMD algorithm is demonstrated on several benchmark examples of increasing complexity, providing an accurate and efficient approach to extracting spatiotemporal coherent structures from big data in a framework that scales with the intrinsic rank of the data rather than the ambient measurement dimension. For this work we assume that the dynamics of the problem under consideration evolve on a low-dimensional subspace that is well characterized by a fast-decaying singular value spectrum.
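    The two-stage idea (sketch the data with a random projection, then run standard DMD on the resulting small matrix) can be illustrated as follows. Function and parameter names are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, power_iters=2, seed=0):
    """Sketch of randomized DMD for snapshot pairs (X, Y) with Y ~ A @ X.

    Illustrative implementation, not the paper's exact code: a random
    range finder (with oversampling and power iterations) compresses the
    data, and standard exact DMD is run on the small sketch.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    k = rank + oversample

    # Stage A: randomized range finder for the column space of X,
    # with power iterations to sharpen the captured subspace.
    Omega = rng.standard_normal((n, k))
    Q = np.linalg.qr(X @ Omega)[0]
    for _ in range(power_iters):
        Q = np.linalg.qr(X.T @ Q)[0]
        Q = np.linalg.qr(X @ Q)[0]

    # Stage B: project onto the small subspace and run standard DMD there.
    Xs, Ys = Q.T @ X, Q.T @ Y                   # k x n sketches
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank].T
    Atilde = Ur.T @ Ys @ Vr / sr                # rank x rank reduced operator
    eigvals, W = np.linalg.eig(Atilde)

    # Lift the dynamic modes back to the ambient measurement space.
    modes = Q @ (Ys @ Vr / sr) @ W
    return eigvals, modes
```

    Because all factorizations act on k x n or rank x rank matrices, the cost scales with the intrinsic rank rather than the ambient dimension m, which is the point of the sketching step.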

    Sparse approximation of multivariate functions from small datasets via weighted orthogonal matching pursuit

    We show the potential of greedy recovery strategies for the sparse approximation of multivariate functions from a small dataset of pointwise evaluations by considering an extension of orthogonal matching pursuit to the setting of weighted sparsity. The proposed recovery strategy is based on a formal derivation of the greedy index selection rule. Numerical experiments show that the proposed weighted orthogonal matching pursuit algorithm reaches accuracy levels similar to those of weighted ℓ¹-minimization programs while considerably improving the computational efficiency for small values of the sparsity level.
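    A minimal sketch of the weighted greedy idea, assuming the selection rule divides correlations by per-index weights and the stopping rule tracks weighted sparsity (assumptions for illustration; the paper derives the exact rule formally):

```python
import numpy as np

def weighted_omp(A, b, w, s, tol=1e-10):
    """Illustrative weighted orthogonal matching pursuit.

    Greedily selects columns of A to approximate b, dividing each
    correlation by the column's weight w[j], and stops once the weighted
    sparsity sum(w[j]**2 for j in support) would exceed the budget s.
    """
    m, n = A.shape
    r = b.copy()
    support = []
    coef = np.zeros(0)
    x = np.zeros(n)
    while (sum(w[j] ** 2 for j in support) <= s
           and np.linalg.norm(r) > tol and len(support) < m):
        # Weighted greedy index selection on the current residual.
        scores = np.abs(A.T @ r) / w
        scores[support] = -np.inf        # never re-pick an active index
        support.append(int(np.argmax(scores)))
        # Orthogonal projection: least squares on the active columns.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x[support] = coef
    return x, support
```

    With unit weights this reduces to standard OMP; non-uniform weights bias the greedy rule toward low-weight indices, mirroring how weighted ℓ¹ programs penalize high-weight coefficients.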

    Surgical Models of Liver Regeneration in Pigs: A Practical Review of the Literature for Researchers

    The remarkable regenerative capacity of the liver is well known, although the mechanisms involved are far from understood. Furthermore, limits concerning the residual functional mass of the liver remain critical in both hepatic resection and transplantation. The aim of the present study was to review surgical experiments on liver regeneration in pigs in order to promote experimental methodological standardization. The PubMed, Medline, Scopus, and Cochrane Library databases were searched. Studies evaluating liver regeneration through surgical experiments performed on pigs were included. A total of 139 titles were screened, and 41 articles were included in the study, covering 689 pigs in total. A total of 29 studies (71%) had a survival design, with an average study duration of 13 days. Overall, 36 studies (88%) considered partial hepatectomy, of which four used associating liver partition and portal vein ligation for staged hepatectomy (ALPPS). Remnant liver volume ranged from 10% to 60%. Only 2 studies considered a hepatotoxic pre-treatment, while 25 studies evaluated additional liver procedures, such as stem cell application, ischemia/reperfusion injury, portal vein modulation, liver scaffold application, and bio-artificial and pharmacological liver treatment. Only nine authors analysed how cytokines and growth factors changed in response to liver resection. The most widely used imaging system for evaluating liver volume was CT-scan volumetry, although only nine authors performed it. The pig represents one of the best animal models for the study of liver regeneration. However, it remains a largely unexplored field due to the lack of experiments reproducing the chronic pathological aspects of the liver and to the heterogeneity of existing studies.

    A statistical learning strategy for closed-loop control of fluid flows

    This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built by applying hash functions to the sensor measurements, from which a Markov process model approximating the complex system’s dynamics is derived. A control strategy is then learned using reinforcement learning, once rewards relevant to the control objective are identified. This method is designed for experimental configurations, requires no computations or prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz’63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well on both.
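    The pipeline (hash sensor measurements into a discrete embedding, then learn a control policy by reinforcement learning on that embedding) can be illustrated on a toy one-dimensional system. The environment, binning-based hash, and tabular Q-learning below are illustrative stand-ins, not the authors' experimental setup:

```python
import numpy as np

def hash_state(measurement, n_bins=16, lo=-2.0, hi=2.0):
    """Map a raw sensor measurement to a discrete embedding index (toy hash)."""
    return int(np.clip((measurement - lo) / (hi - lo) * n_bins, 0, n_bins - 1))

def q_learning(env_step, n_states=16, n_actions=2, episodes=400,
               horizon=50, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on the hashed state space (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        x = rng.uniform(-2, 2)              # raw measurement
        s = hash_state(x)
        for _ in range(horizon):
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            x, reward = env_step(x, a, rng)
            s2 = hash_state(x)
            # standard Q-learning temporal-difference update
            Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

def toy_step(x, a, rng):
    """Toy plant: push the state left (a=0) or right (a=1); reward favors |x| small."""
    x = float(np.clip(x + (0.2 if a == 1 else -0.2) + 0.01 * rng.standard_normal(), -2, 2))
    return x, -abs(x)
```

    Only measurements and rewards enter the learning loop, which reflects the model-free, experiment-friendly character of the approach described in the abstract.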

    Reproducible Cancer Biomarker Discovery in SELDI-TOF MS Using Different Pre-Processing Algorithms

    BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS) studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, the peak profile extracted from a dataset for biomarker identification depends on the chosen data pre-processing algorithm, and no widely accepted standard has emerged. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE) peaks from peak profiles produced by three widely used average-spectrum-dependent pre-processing algorithms, based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two factors that affect the consistency of DE peak identification across algorithms: first, some DE peaks selected from one peak profile were not detected as peaks in other profiles; second, the statistical power for identifying DE peaks in large peak profiles with many peaks may be low due to the large number of tests and the small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles can be improved by the stratified false discovery rate (FDR) control approach, thereby increasing the reproducibility of DE peak detection. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationships among different algorithms and help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should produce peaks sufficient for identifying useful and reproducible biomarkers.
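    The stratified FDR idea (controlling the false discovery rate within analyst-defined strata of peaks rather than over the pooled set of tests) can be sketched by applying the standard Benjamini-Hochberg step-up procedure per stratum. This is an illustrative reading, with stratum labels and the q level supplied by the analyst:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, n + 1) / n
    passed = p[order] <= thresh
    # Reject all hypotheses up to the largest index passing its threshold.
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(n, dtype=bool)
    mask[order[:k]] = True
    return mask

def stratified_fdr(pvals, strata, q=0.05):
    """Apply BH separately within each stratum (illustrative sketch)."""
    pvals = np.asarray(pvals, dtype=float)
    strata = np.asarray(strata)
    reject = np.zeros(pvals.size, dtype=bool)
    for s in np.unique(strata):
        idx = np.nonzero(strata == s)[0]
        reject[idx] = benjamini_hochberg(pvals[idx], q)
    return reject
```

    Stratifying shrinks the multiplicity burden within each group, so moderate p-values in a small stratum can survive correction even when they would fail against the full pooled set, which is the power gain the abstract describes.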