    Recurrent Events Modeling Based on a Reflected Brownian Motion with Application to Hypoglycemia

    Patients with type 2 diabetes need to closely monitor their blood sugar levels as part of routine diabetes self-management. Although many treatment agents aim to control blood sugar tightly, hypoglycemia often occurs as an adverse event. In practice, patients can observe hypoglycemic events more easily than hyperglycemic events owing to perceptible neurogenic symptoms. We propose to model each patient's observed hypoglycemic events as lower-boundary crossing events for a reflected Brownian motion with an upper reflection barrier, with the lower boundary set by clinical standards. To capture patient heterogeneity and within-patient dependence, covariates and a patient-level frailty are incorporated into the volatility and the upper reflection barrier. This framework quantifies the underlying glucose-level variability, patient heterogeneity, and risk factors' impact on glucose. We make inferences in a Bayesian framework using Markov chain Monte Carlo. Two model comparison criteria, the Deviance Information Criterion and the Logarithm of the Pseudo-Marginal Likelihood, are used for model selection. The methodology is validated in simulation studies. In analyzing a dataset of diabetic patients from the DURABLE trial, our model provides an adequate fit, generates data similar to the observed data, and offers insights that could be missed by other models.
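    As a rough illustration of the event mechanism described above, the sketch below simulates one patient's recurrent hypoglycemic events as lower-boundary crossings of a discretized Brownian motion reflected at an upper barrier, with volatility driven by covariates and a frailty. The link function, restart rule, and all numeric values are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def simulate_patient(z, beta, frailty, T=180.0, dt=0.01,
                     lower=70.0, upper=180.0, rng=None):
    """Recurrent hypoglycemic events for one patient, modeled as
    lower-boundary crossings of a Brownian motion reflected at an
    upper barrier. Link function, restart rule, and all numbers are
    illustrative assumptions, not the paper's fitted model."""
    rng = rng or np.random.default_rng(0)
    sigma = np.exp(beta @ z + frailty)   # covariate- and frailty-driven volatility
    x, t, events = (lower + upper) / 2, 0.0, []
    while t < T:
        x += sigma * np.sqrt(dt) * rng.standard_normal()
        if x > upper:                    # mirror reflection at the upper barrier
            x = 2 * upper - x
        if x <= lower:                   # crossing = observed hypoglycemic event
            events.append(t)
            x = (lower + upper) / 2      # restart the latent process after the event
        t += dt
    return events

# e.g. simulate_patient(z=np.array([1.0, 0.5]), beta=np.array([1.2, 0.8]), frailty=0.1)
```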

    A comprehensive AI model development framework for consistent Gleason grading

    Background: Artificial Intelligence (AI)-based solutions for Gleason grading hold promise for pathologists, but image quality inconsistency, continuous data integration needs, and limited generalizability hinder their adoption and scalability. Methods: We present a comprehensive digital pathology workflow for AI-assisted Gleason grading. It incorporates A!MagQC (image quality control), A!HistoClouds (cloud-based annotation), and Pathologist-AI Interaction (PAI) for continuous model improvement. Trained on Akoya-scanned images only, the model utilizes color augmentation and image appearance migration to address scanner variations. We evaluate it on Whole Slide Images (WSIs) from five additional scanners and conduct validations with pathologists to assess AI efficacy and PAI. Results: Our model achieves an average F1 score of 0.80 on annotations and a Quadratic Weighted Kappa of 0.71 on WSIs for Akoya-scanned images. Applying our generalization solution increases the average F1 score for Gleason pattern detection from 0.73 to 0.88 on images from other scanners. The model accelerates Gleason scoring by 43% while maintaining accuracy. Additionally, PAI improved annotation efficiency by a factor of 2.5 and led to further gains in model performance. Conclusions: This pipeline represents a notable advancement in AI-assisted Gleason grading in consistency, accuracy, and efficiency. Unlike previous methods limited by scanner specificity, our model performs well across diverse scanners, paving the way for seamless integration into clinical workflows.
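    The slide-level agreement metric reported above, Quadratic Weighted Kappa, can be computed with scikit-learn as sketched below; the labels shown are hypothetical Gleason grade groups, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical slide-level Gleason grade groups (1-5); illustrative
# values only, not data from the study.
y_true = [1, 2, 3, 3, 4, 5, 2, 1, 4, 3]
y_pred = [1, 2, 3, 4, 4, 5, 2, 2, 4, 3]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
f1 = f1_score(y_true, y_pred, average="macro")
print(f"Quadratic Weighted Kappa: {qwk:.2f}, macro F1: {f1:.2f}")
```

    Quadratic weighting penalizes disagreements by the square of their ordinal distance, which suits graded labels like Gleason grade groups.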

    Real-time edit propagation by efficient sampling

    It is popular to edit the appearance of images using strokes, owing to their ease of use and convenience in conveying the user's intention. However, propagating the user inputs to the rest of the image requires solving an enormous optimization problem, which is very time-consuming and prevents practical use. In this paper, a two-step edit propagation scheme is proposed: first solve edits on clusters of similar pixels, then interpolate individual pixel edits from the cluster edits. The key to our scheme is using efficient stroke sampling to compute the affinity between image pixels and strokes. Because of this, our clustering does not need to be stroke-adaptive, so the number of clusters is greatly reduced, resulting in a significant speedup. The proposed method has been tested on various images, and the results show that it is more than one order of magnitude faster than existing methods while still achieving precise results compared with the ground truth. Moreover, its efficiency is not sensitive to the number of strokes, making it suitable for performing dense edits in practice. (Supported by NSFC grants 60773026, 60873182, and 60833007.)
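    A minimal sketch of the two-step scheme described above, assuming pixel features such as color plus position, a Gaussian affinity kernel, and k-means clustering; the paper's exact sampling strategy, kernel, and interpolation step may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def propagate_edits(features, stroke_feats, stroke_edits,
                    n_clusters=64, n_samples=200, sigma=0.2, rng=None):
    """Two-step propagation: (1) solve edits on pixel clusters via
    affinities to sampled stroke pixels, (2) read each pixel's edit off
    its cluster. features: (N, D) pixel features (e.g., color + position);
    stroke_feats: (M, D); stroke_edits: (M, K) edit parameters."""
    rng = rng or np.random.default_rng(0)
    # sample the strokes so the affinity computation stays cheap
    idx = rng.choice(len(stroke_feats), min(n_samples, len(stroke_feats)),
                     replace=False)
    s_feats, s_edits = stroke_feats[idx], stroke_edits[idx]
    # cluster pixels in feature space; clustering is not stroke-adaptive
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(features)
    # Gaussian affinity between cluster centers and sampled stroke pixels
    d2 = ((km.cluster_centers_[:, None, :] - s_feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    cluster_edits = (w @ s_edits) / (w.sum(axis=1, keepdims=True) + 1e-12)
    # step 2: per-pixel edits from cluster edits (hard assignment here;
    # a soft interpolation over nearby clusters would be smoother)
    return cluster_edits[km.labels_]
```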

    Intent-aware image cloning

    Currently, gradient domain methods are popular for seamlessly cloning a source image patch into a target image. However, structure conflicts between the source patch and the target image may generate artifacts that hinder general use. In this paper, we tackle this challenge by incorporating the user's intent in outlining the source patch, where the drawn boundary generally has a different appearance from the objects of interest. We first show that artifacts arise in the over-included region, the region outside the objects of interest in the source patch. We then use the diversity from the boundary to approximately distinguish the objects from the over-included region, and design a new algorithm that lets the target image adaptively take effect in blending. Structure conflicts are thus efficiently suppressed, removing artifacts around the objects of interest in the composite result. Moreover, we develop an interpolation measure to composite the final image rather than solving a Poisson equation, and we speed up the interpolation by treating pixels in clusters and using hierarchical sampling techniques. Our method is simple to use for instant, high-quality image cloning: users only need to outline a region containing the objects of interest. Our experimental results demonstrate the effectiveness of our cloning method.
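    To illustrate compositing by interpolation rather than a Poisson solve, the sketch below spreads the source/target mismatch on the region boundary into the interior using inverse-distance weights; this kernel is an assumption, and the paper's interpolation measure, intent-aware weighting, cluster treatment, and hierarchical sampling are not reproduced.

```python
import numpy as np

def clone_by_interpolation(src, tgt, mask, p=2.0, eps=1e-6):
    """Composite src into tgt over a boolean mask without a Poisson
    solve: interpolate the boundary mismatch (tgt - src) into the
    interior with inverse-distance weights. Assumes float grayscale
    arrays of equal shape; O(pixels x boundary), no acceleration."""
    pad = np.pad(mask, 1)
    # boundary = masked pixels with at least one unmasked 4-neighbor
    inner = (pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:])
    by, bx = np.nonzero(mask & ~inner)
    diff = tgt[by, bx] - src[by, bx]        # mismatch along the boundary
    out = tgt.copy()
    for y, x in zip(*np.nonzero(mask)):
        d = np.hypot(by - y, bx - x) + eps
        w = 1.0 / d ** p                    # inverse-distance weights
        out[y, x] = src[y, x] + (w @ diff) / w.sum()
    return out
```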