
    Data-driven model construction for anisotropic dynamics of active matter

    The dynamics of cellular pattern formation is crucial for understanding embryonic development and tissue morphogenesis. Recent studies have shown that human dermal fibroblasts cultured on liquid crystal elastomers can exhibit an increase in orientational alignment over time, accompanied by cell proliferation, under the weak guidance of a molecularly aligned substrate. However, how this order arises remains largely unknown. This knowledge gap may be attributed, in part, to a scarcity of mechanistic models that can capture the temporal progression of the complex nonequilibrium dynamics during the cellular alignment process. The orientational alignment occurs primarily when cells reach a high density near confluence. Accurate modeling therefore needs to account for both the cell-cell interaction term and the influence of the substrate, which acts as a one-body external potential term. To fill this gap, we develop a hybrid procedure that uses statistical learning approaches to extend state-of-the-art physics models for quantifying both effects. We develop a more efficient way to perform feature selection that avoids testing all feature combinations through simulation. The maximum likelihood estimator of the model is derived and implemented in computationally scalable algorithms for model calibration and simulation. By including features such as non-Gaussian, anisotropic fluctuations and alignment interactions restricted to neighboring cells moving in the same velocity direction, the model quantitatively reproduces the key system-level observables, namely the temporal progression of the velocity orientational order parameters and the variability of velocity vectors, whereas models missing any of these features fail to capture these time-dependent quantities. Comment: 20 pages, 14 figures
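
    The abstract refers to velocity orientational order parameters without fixing a formula; the sketch below shows two standard definitions (polar and nematic order) that one might use to track alignment over time. The function name, the synthetic velocity data, and the specific choice of order parameters are illustrative assumptions, not the authors' exact quantities.

        # Illustrative sketch (not the authors' exact definition): two standard ways to
        # quantify velocity orientational order from 2D cell velocity vectors.
        import numpy as np

        def orientational_order(velocities):
            """velocities: (N, 2) array of cell velocity vectors at one time point.

            Returns (polar_order, nematic_order):
              polar_order   = |<exp(i*theta)>|  -> 1 when all cells move in the same direction
              nematic_order = |<exp(2i*theta)>| -> 1 when cells align along a common axis,
                                                   ignoring head/tail direction
            """
            theta = np.arctan2(velocities[:, 1], velocities[:, 0])
            polar = np.abs(np.mean(np.exp(1j * theta)))
            nematic = np.abs(np.mean(np.exp(2j * theta)))
            return float(polar), float(nematic)

        # Example: velocities weakly aligned with a preferred substrate axis,
        # with non-Gaussian (log-normal) speed fluctuations.
        rng = np.random.default_rng(0)
        angles = rng.normal(loc=0.0, scale=0.6, size=500)
        speeds = rng.lognormal(mean=0.0, sigma=0.3, size=500)
        v = np.stack([speeds * np.cos(angles), speeds * np.sin(angles)], axis=1)
        print(orientational_order(v))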

    Factors influencing immunogenicity and safety of SARS-CoV-2 vaccine in liver transplantation recipients: a systematic review and meta-analysis

    Background: This review summarizes the factors influencing the efficacy and safety of the COVID-19 vaccine in liver transplantation recipients (LTR) through meta-analysis, hoping to provide strategies for vaccine use. Methods: Electronic databases were screened for studies on mRNA vaccines in LTR. The primary outcome was the pooled seroconversion rate, and the secondary outcomes were the incidence of adverse events and breakthrough infections. Subgroup analyses were made based on BMI, associated comorbidities, presence of baseline leukopenia, time since transplant, and drugs used. Results: In total, 31 articles were included. The pooled seroconversion rate after at least two doses of SARS-CoV-2 vaccination was 72% (95% CI [0.52-0.91]), with significant heterogeneity among studies (I2 = 99.9%). The seroconversion rate was about 72% (95% CI [0.66-0.75]) in studies reporting two doses of vaccine and slightly higher, around 75% (95% CI [0.29-1.22]), in studies reporting three doses. The pooled seroconversion rate in the lower-to-normal BMI group was 74% (95% CI [0.22-1.27], P=0.005) against 67% (95% CI [0.52-0.81], P=0.000) in the high BMI group. The pooled seroconversion rate in the positive leukopenia group was the lowest, at 59%, suggesting that leukopenia could influence the vaccine seroconversion rate in LTR. In the time-since-transplant analysis, with seven years as the cut-off point, the pooled seroconversion rate after at least two doses of COVID-19 vaccination was 53% (95% CI [0.18-0.83], P=0.003, I2 = 99.6%) in the <7 years group and 83% (95% CI [0.76-0.90], P=0.000, I2 = 95.7%) in the >7 years group. Only time since transplantation reached statistical significance as a predictor of poor serological response (OR=1.27, 95% CI [1.03-1.55], P=0.024). The breakthrough infection rate after vaccination was very low, at 2% (95% CI [0.01-0.03], I2 = 63.0%), and the overall incidence of adverse events, mainly pain at the injection site and fatigue, was 18% (95% CI [0.11-0.25], I2 = 98.6%, P=0.000). Conclusion: The seroconversion rate in LTR vaccinated with at least two doses of mRNA COVID-19 vaccine can be significantly affected by vaccine type, the immunosuppressant used, BMI, leukopenia, associated comorbidities, and time since transplantation. Nevertheless, booster doses are still recommended for LTR.
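
    As a rough illustration of the pooling behind figures like the 72% rate and the I2 statistics above, the sketch below implements a generic inverse-variance random-effects (DerSimonian-Laird) pooling of proportions. The study counts are invented for demonstration, and the exact meta-analytic model used in the review may differ.

        # Minimal sketch: inverse-variance random-effects pooling of proportions
        # (DerSimonian-Laird), the kind of calculation behind a pooled rate and I2.
        # The study counts below are invented for demonstration.
        import numpy as np

        def pool_proportions(events, totals):
            events, totals = np.asarray(events, float), np.asarray(totals, float)
            p = events / totals
            var = p * (1.0 - p) / totals              # per-study variance of a proportion
            w = 1.0 / var                             # fixed-effect (inverse-variance) weights
            p_fixed = np.sum(w * p) / np.sum(w)
            q = np.sum(w * (p - p_fixed) ** 2)        # Cochran's Q
            df = len(p) - 1
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - df) / c)             # between-study variance
            w_re = 1.0 / (var + tau2)                 # random-effects weights
            p_re = np.sum(w_re * p) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
            return p_re, (p_re - 1.96 * se, p_re + 1.96 * se), i2

        # Hypothetical studies given as (seroconverted, vaccinated) counts.
        print(pool_proportions([40, 75, 120], [60, 90, 150]))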

    ADTR: Anomaly Detection Transformer with Feature Reconstruction

    Anomaly detection using only prior knowledge from normal samples has attracted increasing attention because of the scarcity of anomalous samples. Existing CNN-based pixel reconstruction approaches suffer from two concerns. First, the reconstruction source and target are raw pixel values that contain indistinguishable semantic information. Second, CNNs tend to reconstruct both normal samples and anomalies well, making them hard to distinguish. In this paper, we propose the Anomaly Detection TRansformer (ADTR), which applies a transformer to reconstruct pre-trained features. The pre-trained features contain distinguishable semantic information. Moreover, adopting a transformer limits the model's ability to reconstruct anomalies well, so anomalies can be detected easily once the reconstruction fails. We also propose novel loss functions to make our approach compatible with both the normal-sample-only case and the anomaly-available case with image-level and pixel-level labeled anomalies. The performance can be further improved by adding simple synthetic or external irrelevant anomalies. Extensive experiments are conducted on anomaly detection datasets including MVTec-AD and CIFAR-10. Our method achieves superior performance compared with all baselines. Comment: Accepted by ICONIP 2022
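
    The abstract's core scoring idea, detecting anomalies where feature reconstruction fails, can be illustrated with the minimal sketch below. It is not the official ADTR code: the array shapes, the L2 error, and the max-pooling of the map into an image-level score are simplifying assumptions.

        # Minimal sketch (not the official ADTR code) of scoring by feature
        # reconstruction error: locations where the reconstruction fails get
        # high anomaly scores.
        import numpy as np

        def anomaly_scores(features, reconstruction):
            """features, reconstruction: (C, H, W) pre-trained feature map and its
            reconstruction. Returns a (H, W) pixel-level map and an image-level score."""
            err = np.linalg.norm(features - reconstruction, axis=0)  # L2 over channels
            return err, float(err.max())

        # Toy example with random "features"; a badly reconstructed region would
        # appear as a bright patch in the anomaly map.
        rng = np.random.default_rng(0)
        feat = rng.random((64, 28, 28))
        recon = feat + 0.01 * rng.standard_normal((64, 28, 28))
        pixel_map, image_score = anomaly_scores(feat, recon)
        print(pixel_map.shape, image_score)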

    Fast Full-frame Video Stabilization with Iterative Optimization

    Video stabilization refers to the problem of transforming a shaky video into a visually pleasing one. The question of how to strike a good trade-off between visual quality and computational speed has remained one of the open challenges in video stabilization. Inspired by the analogy between wobbly frames and jigsaw puzzles, we propose an iterative optimization-based learning approach using synthetic datasets for video stabilization, which consists of two interacting submodules: motion trajectory smoothing and full-frame outpainting. First, we develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field. The confidence map associated with the estimated optical flow is exploited to guide the search for shared regions through backpropagation. Second, we take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views. An important new insight brought about by our iterative optimization approach is that the target video can be interpreted as the fixed point of a nonlinear mapping for video stabilization. We formulate video stabilization as the problem of minimizing the amount of jerkiness in motion trajectories, which guarantees convergence with the help of fixed-point theory. Extensive experimental results are reported to demonstrate the superiority of the proposed approach in terms of computational speed and visual quality. The code will be available on GitHub. Comment: Accepted by ICCV 2023
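
    To make the fixed-point view of trajectory smoothing concrete, the toy sketch below smooths a 1D camera path by iterating a Jacobi-style update for an objective that penalizes both deviation from the observed path and frame-to-frame jerkiness. The objective, the smoothing weight lam, and the synthetic path are assumptions chosen for illustration, not the paper's actual formulation.

        # Toy sketch (not the paper's algorithm) of trajectory smoothing as a
        # fixed-point iteration: each camera-path sample is repeatedly pulled toward
        # a compromise between the observed shaky path and its smoothed neighbors.
        import numpy as np

        def smooth_trajectory(shaky, lam=20.0, iters=200):
            """Jacobi-style fixed-point iteration for
            min_s  sum_t (s_t - c_t)^2 + lam * sum_t (s_{t+1} - s_t)^2,
            where c is the observed (shaky) 1D camera path."""
            s = shaky.copy()
            for _ in range(iters):
                left = np.roll(s, 1)
                left[0] = s[0]                     # clamp the boundaries
                right = np.roll(s, -1)
                right[-1] = s[-1]
                s = (shaky + lam * (left + right)) / (1.0 + 2.0 * lam)
            return s

        t = np.linspace(0.0, 4.0 * np.pi, 200)
        rng = np.random.default_rng(0)
        shaky_path = np.sin(t) + 0.3 * rng.standard_normal(200)   # smooth motion + jitter
        smoothed = smooth_trajectory(shaky_path)
        print(np.std(np.diff(shaky_path)), np.std(np.diff(smoothed)))  # jerkiness before/after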

    Identification of diagnostic biomarkers and immune cell infiltration in coronary artery disease by machine learning, nomogram, and molecular docking

    Background: Coronary artery disease (CAD) remains a lethal disease worldwide. This study aims to identify clinically relevant diagnostic biomarkers for CAD and explore potential medications for CAD. Methods: GSE42148, GSE180081, and GSE12288 were downloaded as the training and validation cohorts, and candidate genes were identified by constructing a weighted gene co-expression network. Functional enrichment analysis was used to determine the functional roles of these genes. Machine learning algorithms determined the candidate biomarkers. Hub genes were then selected and validated by a nomogram and the receiver operating characteristic (ROC) curve. Using CIBERSORTx, the hub genes were further examined in relation to immune cell infiltration, and their correlations with molecules of immune-related families were analyzed. Drug screening and molecular docking were used to determine medications that target the four genes. Results: The weighted gene co-expression network analysis identified 191 and 230 key genes in two modules, respectively. Functional enrichment analysis of these 421 key genes revealed the enriched pathways. Candidate immune-related genes were then screened and identified by the random forest model and the eXtreme Gradient Boosting algorithm. Finally, four hub genes, namely CSF3R, EED, HSPA1B, and IL17RA, were obtained and used to establish the nomogram model. The ROC curve, the area under the curve, and the calibration curve were used to validate the accuracy and usefulness of the diagnostic model. Immune cell infiltration was examined, and CAD patients were then divided into high- and low-expression groups for further gene set enrichment analysis. By targeting the hub genes with molecular docking, we also identified potential drugs for anti-CAD treatment. Conclusions: CSF3R, EED, HSPA1B, and IL17RA are potential diagnostic biomarkers for CAD. CAD pathogenesis is greatly influenced by patterns of immune cell infiltration. The promising drugs offer new prospects for the development of CAD therapy.
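
    The screening step, ranking genes with a random forest and a gradient-boosting model and keeping the overlap, can be sketched as below. This is a generic illustration with synthetic data and placeholder gene names; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the top-k cut-off is an arbitrary choice, not the study's actual pipeline.

        # Hedged sketch of the screening idea: rank genes by importance under two tree
        # ensembles and keep the overlap of their top candidates. Synthetic data and
        # placeholder gene names; GradientBoostingClassifier stands in for XGBoost.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

        X, y = make_classification(n_samples=200, n_features=50, n_informative=6, random_state=0)
        genes = np.array([f"gene_{i}" for i in range(X.shape[1])])   # placeholder names

        def top_features(model, k=10):
            model.fit(X, y)
            ranked = np.argsort(model.feature_importances_)[::-1]
            return set(genes[ranked[:k]])

        rf_hits = top_features(RandomForestClassifier(n_estimators=300, random_state=0))
        gb_hits = top_features(GradientBoostingClassifier(random_state=0))
        candidate_biomarkers = sorted(rf_hits & gb_hits)   # genes selected by both models
        print(candidate_biomarkers)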

    Human Preferences as Dueling Bandits

    © 2022 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, http://dx.doi.org/10.1145/3477495.3531991
    The dramatic improvements in core information retrieval tasks engendered by neural rankers create a need for novel evaluation methods. If every ranker returns highly relevant items in the top ranks, it becomes difficult to recognize meaningful differences between them and to build reusable test collections. Several recent papers explore pairwise preference judgments as an alternative to traditional graded relevance assessments. Rather than viewing items one at a time, assessors view items side-by-side and indicate the one that provides the better response to a query, allowing fine-grained distinctions. If we employ preference judgments to identify the probably best items for each query, we can measure rankers by their ability to place these items as high as possible. We frame the problem of finding best items as a dueling bandits problem. While many papers explore dueling bandits for online ranker evaluation via interleaving, they have not been considered as a framework for offline evaluation via human preference judgments. We review the literature for possible solutions. For human preference judgments, any usable algorithm must tolerate ties, since two items may appear nearly equal to assessors, and it must minimize the number of judgments required for any specific pair, since each such comparison requires an independent assessor. Since the theoretical guarantees provided by most algorithms depend on assumptions that are not satisfied by human preference judgments, we simulate selected algorithms on representative test cases to provide insight into their practical utility. Based on these simulations, one algorithm stands out for its potential. Our simulations suggest modifications to further improve its performance. Using the modified algorithm, we collect over 10,000 preference judgments for pools derived from submissions to the TREC 2021 Deep Learning Track, confirming its suitability. We test the idea of best-item evaluation and suggest ideas for further theoretical and practical progress.
    We thank Mark Smucker, Gautam Kamath, and Ben Carterette for their feedback. This research was supported by the Natural Sciences and Engineering Research Council of Canada through its Discovery Grants program.
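
    The abstract does not name the chosen dueling bandits algorithm, so the sketch below only illustrates the general setting it describes: selecting a best item from noisy pairwise preference judgments that may end in ties, while capping the judgments spent on any single pair. The simulated assessor, the tie margin, and the champion-vs-challenger loop are assumptions for illustration.

        # Simplified sketch (not the algorithm chosen in the paper) of picking a best
        # item from noisy pairwise preference judgments that allow ties, while capping
        # the number of judgments spent on any single pair.
        import random

        def judge(a, b, true_quality, tie_margin=0.05):
            """Stand-in for one human assessor: 1.0 if a is preferred, 0.0 if b, 0.5 for a tie."""
            diff = true_quality[a] - true_quality[b] + random.gauss(0.0, 0.1)
            if abs(diff) < tie_margin:
                return 0.5
            return 1.0 if diff > 0 else 0.0

        def find_best(items, true_quality, judgments_per_pair=5):
            champion = items[0]
            for challenger in items[1:]:
                wins = sum(judge(champion, challenger, true_quality)
                           for _ in range(judgments_per_pair))
                if wins < judgments_per_pair / 2.0:    # challenger preferred more often
                    champion = challenger
            return champion

        random.seed(1)
        quality = {f"doc_{i}": q for i, q in enumerate([0.2, 0.5, 0.9, 0.4, 0.7])}
        print(find_best(list(quality), quality))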

    SoMeLVLM: A Large Vision Language Model for Social Media Processing

    The growth of social media, characterized by its multimodal nature, has led to the emergence of diverse phenomena and challenges, which calls for an effective approach to solve automated tasks in a unified way. Powerful Large Vision Language Models make it possible to handle a variety of tasks simultaneously, but even with carefully designed prompting methods, general-domain models often fall short of aligning with the unique speaking style and context of social media tasks. In this paper, we introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM), a cognitive framework equipped with five key capabilities: knowledge & comprehension, application, analysis, evaluation, and creation. SoMeLVLM is designed to understand and generate realistic social media behavior. We have developed a 654k multimodal social media instruction-tuning dataset to support our cognitive framework and fine-tune our model. Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks. Further analysis shows its significant advantages over baselines in terms of cognitive abilities.
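
    The abstract does not describe the schema of the 654k instruction-tuning dataset; the snippet below is only a hypothetical example of what one multimodal instruction-tuning record for a social media task might look like, with every field name and value invented for illustration.

        # The abstract does not specify the dataset schema; this record is a purely
        # hypothetical illustration of a multimodal instruction-tuning example.
        import json

        example_record = {
            "image": "post_01234.jpg",                # hypothetical image file name
            "capability": "analysis",                 # one of the five cognitive capabilities
            "instruction": "What stance does this post take toward the event shown in the image?",
            "response": "The post is sarcastic and critical of the event.",
        }
        print(json.dumps(example_record, indent=2))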