15 research outputs found

    Delivering the Strengthening Families Program to Native American Families During COVID-19: Lessons & Next Steps

    The COVID-19 pandemic of 2020 forced adaptation on all Americans. Programs that serve Native American children and families are particularly critical during this time because of the disproportionate risks and disparities faced by this population. The objective of this qualitative evaluation is to gather adult participant feedback on the strengths of, and needed changes to, a telehealth adaptation of the Strengthening Families Program (SFP). This evaluation builds on previous knowledge of SFP group leadership, which suggests that supportive helping relationships paired with dynamic flexibility facilitate effective family engagement. Participant feedback suggests that caregivers felt comfort, care, and genuine concern. In addition, all participants noticed a difference in their families' communication and relationships. Although tragic and challenging, the COVID-19 pandemic forced a spotlight on barriers (limited internet access, social services, and food resources) to sustaining participation and increasing resilience among Native American residents in this midwestern state. The individualized planning and checking in at every level, which started out as "how do we replicate this service," became about building resilience strategies for Native American families at this critical time in history.

    Enhancement of Perivascular Spaces Using Densely Connected Deep Convolutional Neural Network

    Perivascular spaces (PVS) in the human brain are related to various brain diseases. However, it is difficult to quantify them due to their thin and blurry appearance. In this paper, we introduce a deep-learning-based method that can enhance a magnetic resonance (MR) image to better visualize the PVS. To accurately predict the enhanced image, we propose a very deep 3D convolutional neural network that contains densely connected networks with skip connections. The proposed network can utilize rich contextual information, from low-level to high-level features, and effectively alleviate the vanishing-gradient problem caused by the deep layers. The proposed method is evaluated on 17 7T MR images using twofold cross-validation. The experiments show that our proposed network is much more effective at enhancing the PVS than previous PVS enhancement methods.
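The dense connectivity described above can be illustrated with a minimal sketch. This is not the authors' implementation: the random 1x1x1 "convolutions" stand in for learned 3x3x3 kernels, and all names and sizes are illustrative. The point is only the dense skip-connection pattern, where each layer consumes the concatenation of every earlier feature map.

```python
import numpy as np

def dense_block(x, num_layers=4, growth=8, seed=0):
    # Each "layer" is a stand-in for a 3D convolution: it maps the
    # concatenation of ALL previous feature maps to `growth` new channels.
    # Real dense networks use learned 3x3x3 kernels; random 1x1x1 linear
    # maps suffice to show the connectivity pattern.
    rng = np.random.default_rng(seed)
    features = [x]                                 # running list of feature maps
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)     # dense (skip) connections
        w = rng.standard_normal((growth, inp.shape[0]))
        out = np.maximum(0.0, np.einsum('oc,cdhw->odhw', w, inp))  # 1x1x1 conv + ReLU
        features.append(out)
    # The block's output again concatenates everything, so gradients can
    # flow directly to early layers (alleviating vanishing gradients).
    return np.concatenate(features, axis=0)

x = np.ones((4, 2, 2, 2))   # (channels, depth, height, width) toy 3D volume
y = dense_block(x)
print(y.shape)              # channels grow to 4 + 4 * 8 = 36
```

With 4 input channels, 4 layers, and a growth rate of 8, the output carries 36 channels; this linear channel growth is the signature of dense connectivity.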

    Broad humoral and cellular immunity elicited by one-dose mRNA vaccination 18 months after SARS-CoV-2 infection

    Practical guidance is needed regarding the vaccination of coronavirus disease 2019 (COVID-19) convalescent individuals in resource-limited countries, including the number of vaccine doses that should be given to unvaccinated patients who experienced COVID-19 early in the pandemic. We recruited COVID-19 convalescent individuals who received one or two doses of an mRNA vaccine within 6, or around 18, months after a diagnosis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Their samples were assessed for IgG-binding or neutralizing activity and cell-mediated immune responses against SARS-CoV-2 wild type and variants of concern. A total of 43 COVID-19 convalescent individuals were analyzed in the present study. The results showed that humoral and cellular immune responses against SARS-CoV-2 wild type and variants of concern, including the Omicron variant, were comparable between patients vaccinated within 6 months and those vaccinated around 18 months after diagnosis. A second dose of vaccine did not significantly increase immune responses. One dose of mRNA vaccine should be considered sufficient to elicit a broad immune response even around 18 months after a COVID-19 diagnosis. This work was supported in part by the Bio & Medical Technology Development Program of the National Research Foundation (NRF), funded by the Korean government (MSIT) (2021M3A9I2080496, to H.-R. Kim & W. B. Park); the Creative-Pioneering Researchers Program through Seoul National University (to C.-H. Lee); and the Seoul National University Hospital Research Fund (112021-5050 to P. G. Choe and 800-20220110 to C.-H. Lee).

    Enhancement of Vascular Image Quality Using a 3D Convolutional Neural Network

    No full text
    Contents: I. Introduction; II. Related Works (1. Spatial Domain Approach; 2. Transform Domain Approaches; 3. Learning-Based Approaches; 4. Medical Image Enhancement); III. Methods (1. Overview; 2. SRCNN; 3. VDSR; 4. Dense Network; 5. Densely Connected Dense Network); IV. Results (1. Data Set; 2. Evaluation Settings; 3. Quantitative Results; 4. Qualitative Results; 5. Discussion of Comparison Networks; 6. Discussion of Network Depth); V. Conclusion; References

    Conditional and Modality-Guided Medical Image Generation Using Deep-Learning-Based Generative Models

    No full text
    Keywords: Conditional Image Generation; Deep Convolutional Neural Networks; Generative Adversarial Networks; Diffusion Models; MRI; Alzheimer's disease. Contents: I. Introduction (Background and Motivation; Main Contributions; Thesis Outline); II. Paired Image-to-Image Translation for Perivascular Spaces Enhancement (Introduction; Methodology; Experiments and Results; Discussion); III. Multi-domain Image-to-Image Translation for Alzheimer's Disease Progression (Introduction; Methodology; Experimental Settings; Results; Discussion); IV. Paired Image-to-Image Translation Using a Guided Diffusion Model (Introduction; Methodology; Experiments; Discussion); V. Multi-domain Image-to-Image Translation Using a Guided Diffusion Model (Introduction; Methodology; Experiments; Discussion); VI. Conclusion and Future Directions; VII. Acknowledgement; References

    Conditional GAN with 3D discriminator for MRI generation of Alzheimer's disease progression

    No full text
    Many studies aim to predict the degree of deformation in affected brain regions as Alzheimer's disease (AD) progresses. However, those studies have often been limited because it is difficult to obtain sequential longitudinal MR data from affected patients. Recently, conditional generative adversarial networks (cGANs) have been used to estimate the changes between unpaired images by modeling their differences. However, generating high-quality 3D magnetic resonance (MR) brain images with cGANs requires a large amount of computation. Previous models have mostly been designed to operate on individual 2D slices or on down-sampled 3D volumes, but these approaches often cause spatial artifacts such as discontinuities between slices or unnatural changes in 3D space. To address these limitations, we propose a novel cGAN that can synthesize high-quality 3D MR images at different stages of AD by integrating an additional module that ensures smooth and realistic transitions in 3D space. Specifically, the proposed cGAN model consists of an attention-based 2D generator, a 2D discriminator, and a 3D discriminator, and is able to synthesize continuous 2D slices along the axial view, resulting in good-quality 3D MR volumes. Moreover, we propose an adaptive identity loss so that the relevant transformations take place without compromising the features that identify patients. In our experiments, the proposed method showed better image generation performance than previously proposed GAN methods, in terms of both image quality and the suitability of the generated images for the given condition. © 2022 Elsevier Ltd

    Conditional Generative Adversarial Network for Predicting 3D Medical Images Affected by Alzheimer’s Diseases

    No full text
    Predicting the evolution of Alzheimer’s disease (AD) is important for accurate diagnosis and the development of personalized treatments. However, learning a predictive model is challenging since it is difficult to obtain a large amount of data that captures changes over a long period of time. Conditional Generative Adversarial Networks (cGANs) can be an effective way to generate images that match specific conditions, but they are impractical for generating 3D images due to memory limitations. To address this issue, we propose a novel cGAN that is capable of synthesizing MR images at different stages of AD (i.e., normal, mild cognitive impairment, and AD). The proposed method consists of a 2D generator that synthesizes an image according to a condition, with the help of 2D and 3D discriminators that evaluate how realistic the synthetic image is. We optimize both the 2D GAN loss and the 3D GAN loss to determine whether multiple consecutive 2D images generated in a mini-batch appear real or fake in 3D space. The proposed method can generate smooth and natural 3D images under different conditions using a single network, without large memory requirements. Experimental results show that the proposed method can generate better-quality 3D MR images than 2D or 3D cGANs and can also boost classification performance when the synthesized images are used to train a classification model. © 2020, Springer Nature Switzerland AG
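The combined 2D/3D adversarial objective can be sketched in miniature. This is a toy numpy illustration, not the paper's model: the linear "discriminators" stand in for CNNs, and the non-saturating loss form is an assumption. It only shows how per-slice 2D scores and a joint 3D score over the stacked slices combine into one generator loss.

```python
import numpy as np

rng = np.random.default_rng(1)

def d2_logit(slice2d, w):
    # Toy 2D "discriminator": a linear logit per slice (stand-in for a CNN).
    return np.sum(slice2d * w)

def d3_logit(volume, w):
    # Toy 3D "discriminator": scores the stacked slices jointly, so it can
    # penalize discontinuities between consecutive slices.
    return np.sum(volume * w)

def generator_loss(fake_slices, w2d, w3d, lam=0.5):
    # Non-saturating GAN loss: -log sigmoid(D(fake)) = softplus(-D(fake)),
    # computed per 2D slice and once for the full 3D stack, then combined.
    # np.logaddexp(0, -d) is a numerically stable softplus(-d).
    loss_2d = np.mean([np.logaddexp(0.0, -d2_logit(s, w2d)) for s in fake_slices])
    loss_3d = np.logaddexp(0.0, -d3_logit(np.stack(fake_slices), w3d))
    return loss_2d + lam * loss_3d

slices = [rng.standard_normal((8, 8)) for _ in range(4)]  # consecutive axial slices
w2d = rng.standard_normal((8, 8))
w3d = rng.standard_normal((4, 8, 8))
loss = generator_loss(slices, w2d, w3d)
```

Minimizing this loss pushes each slice to fool the 2D discriminator while the 3D term, evaluated on the stacked volume, rewards slice sequences that also look consistent in 3D.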

    Low-Dose CT Denoising Using Pseudo-CT Image Pairs

    No full text
    Recently, self-supervised learning methods able to perform image denoising without ground-truth labels have been proposed. These methods create low-quality images by adding random or Gaussian noise to images and then train a model for denoising. Ideally, it would be beneficial if one could generate high-quality CT images with only a few training samples via self-supervision. However, the performance of CT denoising is generally limited due to the complexity of CT noise. To address this problem, we propose a novel self-supervised learning-based CT denoising method. In particular, we pre-train CT denoising and noise models that can predict CT noise from low-dose CT (LDCT) using available LDCT and normal-dose CT (NDCT) pairs. For a given test LDCT, we generate pseudo-LDCT and pseudo-NDCT pairs using the pre-trained denoising and noise models and then update the parameters of the denoising model with these pairs to remove noise in the test LDCT. To make the pseudo-LDCT realistic, we train multiple noise models from individual images and generate noise using an ensemble of the noise models. We evaluate our method on the 2016 AAPM Low-Dose CT Grand Challenge dataset. The proposed ensemble of noise models can generate realistic CT noise, and thus our method significantly improves denoising performance over existing models trained with supervised and self-supervised learning. © 2021, Springer Nature Switzerland AG
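The pseudo-pair generation step can be sketched as follows. This is a simplified numpy illustration, not the paper's method: the shrinkage "denoiser" and per-image noise statistics stand in for pre-trained networks. It only shows the loop of denoising a test LDCT into a pseudo-NDCT and re-injecting ensemble-sampled noise to form a pseudo-LDCT for fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

def denoise(img, strength=0.5):
    # Stand-in for a pre-trained denoising network: simple shrinkage toward
    # the image mean. A real denoiser would be a learned CNN.
    return strength * img + (1.0 - strength) * img.mean()

def make_noise_models(training_images):
    # One toy "noise model" per training image: here, just the residual noise
    # level (std) of that image after denoising. Real noise models are learned
    # networks that predict CT noise from an LDCT input.
    return [float(np.std(img - denoise(img))) for img in training_images]

def pseudo_pair(test_ldct, noise_models, rng):
    # Pseudo-NDCT: the current denoised estimate of the test image.
    pseudo_ndct = denoise(test_ldct)
    # Pseudo-LDCT: re-inject noise sampled from a randomly chosen ensemble
    # member, so the fine-tuning pairs cover realistic noise variation.
    sigma = noise_models[rng.integers(len(noise_models))]
    pseudo_ldct = pseudo_ndct + rng.normal(0.0, sigma, size=pseudo_ndct.shape)
    return pseudo_ldct, pseudo_ndct

train = [rng.normal(0.0, 1.0, (16, 16)) for _ in range(3)]
models = make_noise_models(train)
ldct, ndct = pseudo_pair(rng.normal(0.0, 1.0, (16, 16)), models, rng)
```

The resulting (pseudo-LDCT, pseudo-NDCT) pair would then be used to update the denoiser's parameters for the specific test image.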

    Blending Virtual Reality Laboratories with Cadaver Dissection during COVID-19 Pandemic

    No full text
    EduTech (education and technology) has drawn great attention for improving the efficiency of non-face-to-face learning and practice. This paper introduces a blended gross anatomy class that uses both virtual reality (VR) devices and traditional programs alongside practice-based cadaver dissection and in-class observation. The class allowed students to get hands-on experience with both practical work and VR operations, to identify the biochemical aspects of disease-induced internal organ damage, and to view three-dimensional (3D) aspects of human structures that cannot be examined during gross anatomy practice. Student surveys indicated an overall positive experience with VR education (satisfaction score over 4 out of 5 on a Likert-scale question). There remains room for improvement, which is discussed alongside the results of the essay-based survey question. Formative evaluation results showed that students trained in the blended anatomy class with VR set-ups received higher scores (85.28 out of 100, on average) than those in the cadaver-only anatomy class (79.06 out of 100, on average), suggesting that the hybrid method can improve academic efficiency and support understanding of the 3D structure of the body. At present, VR cannot totally replace actual cadaver dissection practice, but it will play a significant role in the future of medical education if both students and practitioners have more VR devices, more practice time, and a more intuitive, user-friendly VR program. We believe that this paper will greatly benefit the development of EduTech and offers a potential new curriculum item for future medical education.