
    3D Automatic Segmentation Method for Retinal Optical Coherence Tomography Volume Data Using Boundary Surface Enhancement

    With the introduction of spectral-domain optical coherence tomography (SDOCT), much larger image datasets are routinely acquired than was possible with the previous generation of time-domain OCT. There is therefore a critical need for 3D segmentation methods to process these data. We present a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volumes are obtained by applying a 3D smoothing filter and a 3D differential filter. Their linear combination is then computed to generate a new volume with an enhanced boundary surface, in which pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-scans of the volume. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of erroneous points. Our method extracts retinal layer boundary surfaces sequentially over a progressively reduced search region of the volume. We performed automatic segmentation on eight human OCT volumes acquired with a commercial Spectralis OCT system, where each volume consisted of 97 OCT images with a resolution of 496 × 512; experimental results show that the method accurately segments seven layer boundary surfaces in normal as well as some abnormal eyes. Comment: 27 pages, 19 figures.
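
    A minimal sketch of the boundary-enhancement step described above, assuming the volume is a NumPy array with depth along axis 0; the specific filters (Gaussian smoothing, Sobel axial derivative), the weights alpha and beta, and the median-filter smoothing are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy import ndimage

def enhance_boundary(volume, alpha=0.5, beta=0.5, sigma=2.0):
    """Linearly combine a smoothed volume and an axial-derivative volume to
    enhance a layer boundary surface. alpha, beta, and sigma are illustrative
    values, not taken from the paper."""
    smoothed = ndimage.gaussian_filter(volume, sigma=sigma)   # 3D smoothing filter
    axial_gradient = ndimage.sobel(volume, axis=0)            # 3D differential filter along the A-scan (depth) axis
    return alpha * smoothed + beta * axial_gradient           # boundary-enhanced volume

def detect_boundary(enhanced, smooth_size=5):
    """Pick a preliminary boundary depth per A-scan as the strongest response
    along the depth axis, then smooth the surface with a median filter as a
    simple stand-in for the paper's smoothness constraints."""
    surface = np.argmax(enhanced, axis=0)                     # (rows, cols) array of depth indices
    return ndimage.median_filter(surface, size=smooth_size)

# Usage on a synthetic Spectralis-sized volume: 97 B-scans of 496 x 512 pixels,
# arranged here as (depth, width, B-scan).
volume = np.random.rand(496, 512, 97).astype(np.float32)
surface = detect_boundary(enhance_boundary(volume))
```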

    Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach

    Recent text-to-image generation models have demonstrated an impressive ability to generate text-aligned images with high fidelity. However, generating images of a novel concept provided in a user input image remains a challenging task. To address this problem, researchers have explored various methods for customizing pre-trained text-to-image generation models. Most existing customization methods rely on regularization techniques to prevent over-fitting. While regularization eases the challenge of customization and leads to successful content creation with respect to text guidance, it may restrict the model's capacity, resulting in the loss of detailed information and inferior performance. In this work, we propose a novel framework for customized text-to-image generation that does not use regularization. Specifically, the proposed framework consists of an encoder network and a novel sampling method that together tackle the over-fitting problem without regularization. With the proposed framework, we are able to customize a large-scale text-to-image generation model within half a minute on a single GPU, with only one image provided by the user. Experiments demonstrate that our framework outperforms existing methods and preserves more fine-grained details.

    Blockchain Network Analysis: A Comparative Study of Decentralized Banks

    Decentralized finance (DeFi) is known for its unique mechanism design, which applies smart contracts to facilitate peer-to-peer transactions. The decentralized bank is a typical DeFi application. Ideally, a decentralized bank should be decentralized in its transactions. However, many recent studies have found that decentralized banks have not achieved a significant degree of decentralization. This research conducts a comparative study of mainstream decentralized banks. We apply core-periphery network feature analysis to transaction data from four decentralized banks: Liquity, Aave, MakerDao, and Compound. We extract six features and compare the banks' levels of decentralization cross-sectionally. The analysis shows that: 1) MakerDao and Compound are more decentralized in their transactions than Aave and Liquity; 2) although decentralized banking transactions are supposed to be decentralized, the data show that all four banks have primary external transaction core addresses such as Huobi, Coinbase, and Binance. We also discuss four design features that might affect network decentralization. Our research contributes to the literature at the interface of decentralized finance, financial technology (Fintech), and social network analysis, and it inspires future protocol designs to live up to the promise of decentralized finance as a truly peer-to-peer transaction network.
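
    A minimal sketch of extracting core-periphery-style features from a token-transfer graph, assuming transfers are given as (sender, receiver, amount) tuples; the hypothetical addresses, the weighting, and the use of k-core membership as a core proxy are illustrative assumptions, not the six features reported in the study.

```python
import networkx as nx

def core_periphery_features(transfers):
    """transfers: iterable of (sender, receiver, amount) tuples."""
    g = nx.DiGraph()
    for sender, receiver, amount in transfers:
        # Accumulate transfer amounts as edge weights.
        w = g[sender][receiver]["weight"] + amount if g.has_edge(sender, receiver) else amount
        g.add_edge(sender, receiver, weight=w)
    und = g.to_undirected()
    core_numbers = nx.core_number(und)                 # k-core index per address
    max_k = max(core_numbers.values())
    core = {n for n, k in core_numbers.items() if k == max_k}
    return {
        "n_addresses": g.number_of_nodes(),
        "n_transfers": g.number_of_edges(),
        "core_size": len(core),
        "core_share": len(core) / g.number_of_nodes(),
        "density": nx.density(und),
    }

# Usage on a toy edge list (addresses are made up).
features = core_periphery_features([
    ("0xabc", "0xdef", 10.0),
    ("0xdef", "0xabc", 3.5),
    ("0x123", "0xdef", 1.2),
])
print(features)
```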

    Summer extreme consecutive dry days over Northeast China in the changing climate: Observed features and projected future changes based on CESM-LE

    Northeast China (NEC) is a major crop base in East Asia, and summer drought is one of the climate extremes that significantly influences NEC agricultural production. Understanding the response of NEC summer drought to global warming is therefore important. In this study, based on observations and large-ensemble simulations of the Community Earth System Model (CESM-LE), the variability of summer extreme consecutive dry days (CDDs) over NEC is investigated in the present and future climate. In the observations, NEC summer extreme CDDs showed an increasing trend over the past half century and experienced a significant interdecadal change around the mid-1990s, mainly due to a change in the anticyclone over Lake Baikal-Northeast Asia. The anticyclone-related anomalous downward motion and moisture divergence provided favorable conditions for increased summer CDDs over NEC. The CESM-LE multi-member ensemble (MME) simulation reproduces the change in NEC summer extreme CDDs and the related atmospheric circulation, indicating that the observed change could be largely attributed to anthropogenic forcing. In the future warmer climate, NEC summer extreme CDDs are projected to show interdecadal variability, increasing by approximately 6.7% in the early 21st century (2020–2030), decreasing by approximately 0.3% in the middle to late 21st century (2040–2080), and increasing again by approximately 2.1% in the late 21st century (2085–2100). In addition, the projected changes in the anticyclone over Lake Baikal-Northeast Asia resemble those of the NEC summer extreme CDDs, which lends further confidence to the projection given the physical connection between CDDs and the anticyclone.
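
    A minimal sketch of computing the consecutive-dry-days metric from a daily precipitation series; the 1 mm/day dryness threshold follows the common ETCCDI CDD convention and is an assumption here, as the abstract does not state the exact cutoff used.

```python
import numpy as np

def max_consecutive_dry_days(precip_mm, threshold=1.0):
    """Length of the longest run of days with precipitation below
    `threshold` (mm/day)."""
    dry = np.asarray(precip_mm) < threshold
    longest = current = 0
    for is_dry in dry:
        current = current + 1 if is_dry else 0
        longest = max(longest, current)
    return longest

# Usage on one synthetic summer (JJA, 92 days) of daily precipitation.
rng = np.random.default_rng(0)
summer_precip = rng.gamma(shape=0.4, scale=5.0, size=92)
print(max_consecutive_dry_days(summer_precip))
```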

    LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding

    Instruction tuning unlocks the superior capability of Large Language Models (LLMs) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with the recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction skills (e.g., reasoning, writing, and elaboration) with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/. Comment: Preprint. Work in progress.
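
    A minimal sketch of assembling a text-only GPT-4 prompt from OCR output and an image caption to collect instruction-following conversations about a text-rich image; the template wording, caption, and OCR tokens are hypothetical illustrations, not the exact prompt released with LLaVAR.

```python
def build_data_generation_prompt(caption: str, ocr_tokens: list[str]) -> str:
    """Compose a prompt that asks a text-only LLM to invent a multi-turn
    conversation about the text visible in an image it cannot see."""
    ocr_text = ", ".join(ocr_tokens)
    return (
        "You are given a caption and OCR-recognized words from an image.\n"
        f"Caption: {caption}\n"
        f"OCR words: {ocr_text}\n"
        "Generate a multi-turn conversation between a user asking about the "
        "text in the image and an assistant answering, as question-answer pairs."
    )

# Usage with made-up caption and OCR tokens.
prompt = build_data_generation_prompt(
    caption="A movie poster for a science-fiction film.",
    ocr_tokens=["DUNE", "PART", "TWO", "ONLY", "IN", "THEATERS"],
)
print(prompt)
```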

    Force: Making 4PC > 4 × PC in Privacy Preserving Machine Learning on GPU

    Tremendous efforts have been made to improve the efficiency of secure Multi-Party Computation (MPC), which allows n ≥ 2 parties to jointly evaluate a target function without leaking their own private inputs. Previous research has confirmed that 3-Party Computation (3PC) and outsourcing computations to GPUs can lead to huge performance improvements for MPC in computationally intensive tasks such as Privacy-Preserving Machine Learning (PPML). A natural question is whether a super-linear performance gain is possible for a linear increase in resources. In this paper, we give an affirmative answer. We propose Force, an extremely efficient 4PC system for PPML. To the best of our knowledge, each party in Force performs the fewest local computations and the lowest volume of data exchange among comparable systems. This is achieved by introducing a new sharing type, X-share, along with MPC protocols for privacy-preserving training and inference that are semi-honest secure with an honest majority. Our contribution does not stop at theory: we also propose engineering optimizations and verify the high performance of the protocols through implementation and experiments. Comparing our results with state-of-the-art systems such as Cheetah, Piranha, CryptGPU, and CrypTen, we show that Force is sound and extremely efficient, improving PPML performance by a factor of 2 to 1200 over the latest 2PC, 3PC, and 4PC systems.
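
    A minimal sketch of plain additive secret sharing over the ring Z_{2^64} among four parties, shown only to illustrate the kind of sharing primitive 4PC protocols build on; it is not the X-share scheme introduced by Force, whose construction is not described in the abstract.

```python
import secrets

MOD = 1 << 64  # shares live in the ring Z_{2^64}

def share(x, n_parties=4):
    """Split x into n additive shares that sum to x modulo 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

def add_shares(a_shares, b_shares):
    """Each party adds its own shares locally; addition needs no communication."""
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

# Usage: secret-shared addition of 42 and 100 across four parties.
x_shares, y_shares = share(42), share(100)
assert reconstruct(add_shares(x_shares, y_shares)) == 142
```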

    The oral microbiome of patients with ischemic stroke predicts their severity and prognosis

    Background and objectives: Stroke is a common group of cerebrovascular diseases that can lead to brain damage or death. Several studies have shown a close link between oral health and stroke. However, the oral microbiome profile of ischemic stroke (IS) and its potential clinical implications are unclear. This study aimed to describe the oral microbiota composition of IS, high-risk IS, and healthy individuals and to profile the relationship between the microbiota and IS prognosis. Methods: This observational study recruited three groups: IS, high-risk IS (HRIS), and healthy control (HC) individuals. Clinical data and saliva were collected from participants. The modified Rankin scale score after 90 days was used to assess stroke prognosis. DNA was extracted from saliva, and 16S ribosomal ribonucleic acid (rRNA) gene amplicon sequencing was performed. Sequence data were analyzed using QIIME2 and R packages to evaluate the association between the oral microbiome and stroke. Results: A total of 146 subjects were enrolled according to the inclusion criteria. Compared with HC, HRIS and IS demonstrated a progressively increasing trend in Chao1, observed species richness, and the Shannon and Simpson diversity indices. Permutational multivariate analysis of variance indicates substantial variation in saliva microbiota composition between HC and HRIS (F = 2.40, P < 0.001), HC and IS (F = 5.07, P < 0.001), and HRIS and IS (F = 2.79, P < 0.001). The relative abundance of g_Streptococcus, g_Prevotella, g_Veillonella, g_Fusobacterium, and g_Treponema was higher in HRIS and IS than in HC. Furthermore, we constructed a predictive model from the differential genera that effectively distinguishes patients with IS with poor 90-day prognoses from those with good prognoses (area under the curve = 79.7%; 95% CI, 64.41%–94.97%; p < 0.01). Discussion: In summary, the oral salivary microbiome of HRIS and IS subjects has higher diversity, and the differential bacteria have some predictive value for the severity and prognosis of IS. Oral microbiota may serve as potential biomarkers in patients with IS.
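
    A minimal sketch of the Shannon and Simpson alpha-diversity indices named above, computed from a single sample's taxon counts; the example counts are made up, and the study itself derived these metrics from 16S rRNA data processed with QIIME2.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over nonzero proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

# Usage on hypothetical read counts per genus in one saliva sample.
sample_counts = [120, 80, 40, 10, 5]
print(shannon(sample_counts), simpson(sample_counts))
```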