
    EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images

    Medical image segmentation has immense clinical applicability but remains a challenge despite advancements in deep learning. The Segment Anything Model (SAM) shows potential in this field, yet the need for expert intervention and the domain gap between natural and medical images pose significant obstacles. This paper introduces EviPrompt, a novel training-free evidential prompt generation method, to overcome these issues. The proposed method builds on the inherent similarities within medical images and requires only a single reference image-annotation pair, making it a training-free solution that greatly reduces the need for extensive labeling and computational resources. First, to automatically generate prompts for SAM in medical images, we introduce an evidential method based on uncertainty estimation that requires no interaction with clinical experts. Then, we incorporate human priors into the prompts, which is vital for alleviating the domain gap between natural and medical images and for enhancing the applicability and usefulness of SAM in medical scenarios. EviPrompt represents an efficient and robust approach to medical image segmentation, with evaluations across a broad range of tasks and modalities confirming its efficacy.
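    The evidential machinery is specific to the paper, but the similarity-based prompting step the abstract describes can be illustrated compactly. The following is a minimal sketch, assuming precomputed feature maps from some shared image encoder; the function name, shapes, and prototype averaging are hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def generate_point_prompt(ref_feats, ref_mask, tgt_feats):
    """Pick a positive point prompt for SAM on a target image by
    feature similarity to a single reference image-annotation pair.

    ref_feats, tgt_feats: (H, W, C) feature maps from a shared encoder.
    ref_mask: (H, W) binary annotation on the reference image.
    """
    # Average features over the annotated region to form a prototype.
    proto = ref_feats[ref_mask.astype(bool)].mean(axis=0)
    proto /= np.linalg.norm(proto) + 1e-8

    # Cosine similarity between the prototype and every target location.
    tgt = tgt_feats / (np.linalg.norm(tgt_feats, axis=-1, keepdims=True) + 1e-8)
    sim = tgt @ proto                                   # (H, W)

    # The most similar pixel becomes the point prompt; the similarity
    # map itself could feed an uncertainty (evidential) estimate.
    row, col = np.unravel_index(sim.argmax(), sim.shape)
    return (row, col), sim
```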

    Semi‐supervised joint learning for longitudinal clinical events classification using neural network models

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163377/2/sta4305.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/163377/1/sta4305_am.pd

    High Dynamic Range Image Reconstruction via Deep Explicit Polynomial Curve Estimation

    Due to limited camera capacities, digital images usually have a narrower dynamic illumination range than real-world scene radiance. To resolve this problem, High Dynamic Range (HDR) reconstruction is proposed to recover the dynamic range and better represent real-world scenes. However, due to different physical imaging parameters, the tone-mapping functions between images and real radiance are highly diverse, which makes HDR reconstruction extremely challenging. Existing solutions cannot explicitly establish the correspondence between the tone-mapping function and the generated HDR image, yet this relationship is vital for guiding HDR reconstruction. To address this problem, we propose a method that explicitly estimates the tone-mapping function and its corresponding HDR image in one network. First, based on the characteristics of the tone-mapping function, we model the trend of the tone curve with a polynomial and use a learnable network to estimate its coefficients. The curve is automatically adjusted to the tone space of the Low Dynamic Range (LDR) input, from which the real HDR image is reconstructed. In addition, since no current dataset provides the correspondence between the tone-mapping function and the LDR image, we construct a new dataset with both synthetic and real images. Extensive experiments show that our method generalizes well under different tone-mapping functions and achieves SOTA performance.
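    The central idea, a network that regresses polynomial tone-curve coefficients and applies the curve to the LDR input, can be sketched briefly. This is a minimal, hypothetical PyTorch version: the backbone, polynomial degree, and per-image (rather than per-pixel or per-channel) coefficients are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PolyToneCurve(nn.Module):
    """Regress polynomial coefficients from an LDR image, then evaluate
    the polynomial pixel-wise to reconstruct an HDR estimate."""

    def __init__(self, degree: int = 4):
        super().__init__()
        self.degree = degree
        # Small CNN that outputs one coefficient vector per image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, degree + 1),
        )

    def forward(self, ldr: torch.Tensor) -> torch.Tensor:
        # ldr: (B, 3, H, W) in [0, 1]
        coeffs = self.backbone(ldr)                       # (B, degree + 1)
        # Evaluate sum_k c_k * x^k at every pixel.
        powers = torch.stack([ldr ** k for k in range(self.degree + 1)], dim=-1)
        return (powers * coeffs.view(-1, 1, 1, 1, self.degree + 1)).sum(dim=-1)
```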

    Efficiently Hardening SGX Enclaves against Memory Access Pattern Attacks via Dynamic Program Partitioning

    Intel SGX is known to be vulnerable to a class of practical attacks that exploit memory access pattern side channels, notably page-fault attacks and cache timing attacks. A promising hardening scheme is to wrap applications in hardware transactions, enabled by Intel TSX, which return control to the software upon unexpected cache misses and interruptions, so that existing side-channel attacks exploiting these micro-architectural events can be detected and mitigated. However, existing hardening schemes scale only to small-data computation, with a typical working set no larger than one or a few times (e.g., 8 times) the size of a CPU data cache. This work tackles the data scalability and performance efficiency of security hardening schemes for Intel SGX enclaves against memory access pattern side channels. The key insight is that the size of TSX transactions in the target computation is critical, both performance- and security-wise. Unlike existing designs, this work dynamically partitions target computations to enlarge transactions while avoiding aborts, leading to lower performance overhead and improved side-channel security. We materialize the dynamic partitioning scheme in a C++ library that monitors and models cache utilization at runtime. We further build a data analytics system on top of the library and implement various external oblivious algorithms. Performance evaluation shows that our work effectively increases transaction size and reduces execution time by up to two orders of magnitude compared with state-of-the-art solutions.
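    The paper's partitioning library is C++ and relies on Intel TSX hardware transactions, but the adaptive sizing policy the abstract describes (enlarge transactions while they commit, shrink on abort) can be sketched in a language-agnostic way. Below is a hypothetical Python simulation; `try_transaction` merely stands in for running a chunk inside a real TSX transaction, with an abort probability that grows with chunk size to mimic cache-capacity aborts.

```python
import random

def run_partitioned(items, process, min_size=64, max_size=65536):
    """Process `items` in chunks whose size adapts at runtime:
    enlarge the partition after a commit, halve it after an abort."""

    def try_transaction(chunk):
        # Simulated stand-in for a hardware transaction: bigger chunks
        # are more likely to exceed cache capacity and abort.
        if random.random() < len(chunk) / (4 * max_size):
            return False                       # transaction aborted
        for item in chunk:
            process(item)                      # work inside the transaction
        return True                            # transaction committed

    size, i = min_size, 0
    while i < len(items):
        chunk = items[i:i + size]
        if try_transaction(chunk):
            i += len(chunk)
            size = min(size * 2, max_size)     # commit: try a larger partition
        else:
            size = max(size // 2, min_size)    # abort: retry with a smaller one
```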