1,053 research outputs found

    Deep Learning with Limited Labels for Medical Imaging

    Get PDF
    Recent advancements in deep learning-based AI technologies provide an automatic tool to revolutionise medical image computing. Training a deep learning model requires a large amount of labelled data, and acquiring labels for medical images is extremely challenging due to the high cost in both money and time, especially for the pixel-wise segmentation of volumetric medical scans. However, obtaining unlabelled medical scans is relatively easy compared to acquiring labels for those images. This work addresses the pervasive issue of limited labels in training deep learning models for medical imaging. It begins by exploring different strategies of entropy regularisation in the joint training of labelled and unlabelled data to reduce the time and cost associated with manual labelling for medical image segmentation; of particular interest are consistency regularisation and pseudo labelling. Specifically, this work proposes a well-calibrated semi-supervised segmentation framework that utilises consistency regularisation on different morphological feature perturbations, representing a significant step towards safer AI in medical imaging. Furthermore, it reformulates pseudo labelling in semi-supervised learning as an Expectation-Maximisation framework. Building upon this new formulation, the work explains the empirical successes of pseudo labelling and introduces a generalisation of the technique, accompanied by variational inference to learn its true posterior distribution. Applications of pseudo labelling to segmentation tasks are also presented. Lastly, this work explores unsupervised deep learning for parameter estimation of diffusion MRI signals, employing a hierarchical variational clustering framework and representation learning.
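    As a rough illustration of the pseudo-labelling idea discussed above (not the thesis's Expectation-Maximisation formulation), a confidence-thresholded pseudo-labelling step can be sketched as follows; the function name and the 0.9 threshold are illustrative assumptions:

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Assign hard pseudo-labels to unlabelled samples whose top predicted
    class probability reaches a confidence threshold; the rest are skipped.
    probs: (n_samples, n_classes) array of softmax outputs."""
    conf = probs.max(axis=1)       # per-sample confidence
    labels = probs.argmax(axis=1)  # most likely class per sample
    keep = conf >= threshold       # retain only confident samples
    return labels[keep], keep

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
labels, mask = pseudo_labels(probs)
# samples 0 and 2 pass the threshold; sample 1 stays unlabelled
```

    The retained pseudo-labelled samples would then be mixed with the labelled set for further training, which is the general pattern such semi-supervised methods follow.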

    How good is good enough? Strategies for dealing with unreliable segmentation annotations of medical data

    Get PDF
    Medical image segmentation is an essential topic in computer vision and medical image analysis because it enables the precise and accurate segmentation of organs and lesions for healthcare applications. Deep learning has come to dominate medical image segmentation thanks to increasingly powerful computational resources, successful neural network architecture engineering, and access to large amounts of medical imaging data with high-quality annotations. However, annotating medical imaging data is time-consuming and expensive, and sometimes the annotations are unreliable. This DPhil thesis presents a comprehensive study of deep learning techniques for medical image segmentation under various challenging situations involving unreliable medical imaging data: (1) conventional supervised learning to tackle comprehensive data annotation with full dense masks, (2) semi-supervised learning to tackle partial data annotation with full dense masks, (3) noise-robust learning to tackle comprehensive data annotation with noisy dense masks, and (4) weakly-supervised learning to tackle comprehensive data annotation with sketchy contours for network training. The proposed segmentation strategies improve deep learning techniques to effectively address a series of challenges in medical image analysis, including limited annotated data, noisy annotations, and sparse annotations. These advancements aim to bring deep learning techniques for medical image analysis into practical clinical scenarios. By overcoming these challenges, the strategies establish a more robust and reliable application of deep learning methods, which is valuable for improving diagnostic precision and patient-care outcomes in real-world clinical environments.

    ๋”ฅ๋Ÿฌ๋‹ ๋ฐฉ๋ฒ•๋ก ์„ ์ด์šฉํ•œ ๋†’์€ ์ ์šฉ์„ฑ์„ ๊ฐ€์ง„ ์ˆ˜๊ฒฝ์žฌ๋ฐฐ ํŒŒํ”„๋ฆฌ์นด ๋Œ€์ƒ ์ ˆ์ฐจ ๊ธฐ๋ฐ˜ ๋ชจ๋ธ ๊ฐœ๋ฐœ

    Get PDF
    Doctoral dissertation -- Graduate School of Seoul National University: College of Agriculture and Life Sciences, Department of Agriculture, Forestry and Bioresources, August 2022. Advisor: ์†์ •์ต. Many agricultural challenges are entangled in a complex interaction between crops and the environment. As a simplifying tool, crop modeling is the process of abstracting and interpreting agricultural phenomena, and the understanding gained from this interpretation can support academic and social decisions in agriculture. For decades, process-based crop models have addressed these challenges to enhance the productivity and quality of crop production; the remaining objectives now call for crop models that can handle multidirectional analyses of multidimensional information. As a possible milestone towards this goal, deep learning algorithms have been introduced for complicated tasks in agriculture. However, these algorithms have not replaced existing crop models because of research fragmentation and the low accessibility of crop models. This study established a development protocol for a process-based crop model built with deep learning methodology. The Literature Review introduces deep learning and crop modeling and explains why this protocol is necessary despite the numerous deep learning applications in agriculture. Base studies were conducted with several greenhouse datasets in Chapters 1 and 2: transfer learning and the U-Net structure were utilized to construct an infrastructure for the deep learning application, and HyperOpt, a Bayesian optimization method, was tested to calibrate crop models so that existing models could be compared with the developed model. Finally, in Chapter 3, the process-based crop model built entirely from deep neural networks, DeepCrop, was developed with an attention mechanism and multitask decoders for hydroponic sweet peppers (Capsicum annuum var. annuum). The methodology for ensuring data integrity showed adequate accuracy, so it was applied to the data in all chapters.
HyperOpt was able to calibrate food and feed crop models for sweet peppers, so the models compared in the final chapter were optimized using HyperOpt. DeepCrop was trained to simulate several growth factors from environment data. The trained DeepCrop was evaluated on unseen data and showed the highest modeling efficiency (EF = 0.76) and the lowest normalized root-mean-squared error (NRMSE = 0.18) among the compared models. Given its high adaptability, DeepCrop can be used for studies of various scales and purposes. Since all methods adequately solved the given tasks and underpinned the development of DeepCrop, the established protocol can serve as a high-throughput route to enhance the accessibility of crop models and help unify crop modeling studies.
Problems arising in agricultural systems are complexly entangled in the interaction between crops and the environment. Crop modeling, as a way of simplifying its subject, is the process of abstracting and interpreting phenomena occurring in agriculture, and the understanding gained through modeling can support academic and social decisions in agriculture. Over the past decades, process-based crop models have solved agricultural problems and improved crop productivity and quality; the tasks remaining in crop modeling now call for models that can analyse multidimensional information from multiple directions. As a guideline towards this goal, deep learning algorithms have been introduced for complex agricultural tasks, but they have not replaced existing crop models because of low data integrity and high research diversity. This study established a development protocol for building a process-based crop model using deep learning methodology. The Literature Review introduces deep learning and crop models and explains why this protocol is needed despite the many studies applying deep learning to agriculture. In Chapters 1 and 2, foundational studies for model development were conducted using data from several regions in Korea: transfer learning and the U-Net structure were used to build an infrastructure for applying deep learning models, and the WOFOST crop model was experimentally calibrated with HyperOpt, a Bayesian optimization method, in order to compare existing models with the deep-learning-based model. Finally, in Chapter 3, DeepCrop, a process-based crop model composed entirely of deep neural networks with an attention mechanism and multitask decoders, was developed for hydroponic sweet peppers (Capsicum annuum var. annuum). The techniques for data integrity showed adequate accuracy and were applied to the data in all chapters. HyperOpt was able to calibrate food and feed crop models for sweet peppers and was therefore used for the models compared in Chapter 3. DeepCrop was trained to predict several growth indices from environment data, and when evaluated on data unseen during training it showed the highest modeling efficiency (EF = 0.76) and the lowest normalized root-mean-squared error (NRMSE = 0.18) among the compared models. Thanks to its high adaptability, DeepCrop can be used for studies of various scopes and purposes. Since all methods adequately solved the given tasks and grounded the development of DeepCrop, the protocol established in this thesis presents a groundbreaking direction for improving the accessibility of crop models and is expected to contribute to the unification of crop modeling research.

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Get PDF
    Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming, and inaccurate when the interpreter is not well trained. Fully automatic segmentation of regions of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and our full implementations are published.

    DECONET: an Unfolding Network for Analysis-based Compressed Sensing with Generalization Error Bounds

    Full text link
    We present a new deep unfolding network for analysis-sparsity-based Compressed Sensing. The proposed network, coined Decoding Network (DECONET), jointly learns a decoder that reconstructs vectors from their incomplete, noisy measurements and a redundant sparsifying analysis operator, which is shared across the layers of DECONET. Moreover, we formulate the hypothesis class of DECONET and estimate its associated Rademacher complexity. Then, we use this estimate to deliver meaningful upper bounds for the generalization error of DECONET. Finally, the validity of our theoretical results is assessed and comparisons to state-of-the-art unfolding networks are made, on both synthetic and real-world datasets. Experimental results indicate that our proposed network outperforms the baselines consistently across all datasets, and its behaviour complies with our theoretical findings.
    Comment: Accepted in IEEE Transactions on Signal Processing
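    Deep unfolding networks of this kind unroll the iterations of a sparse-recovery algorithm into network layers with learnable operators. As a simplified sketch only (classic synthesis-sparsity ISTA, not DECONET's learned analysis operator), one unrolled layer performs a gradient step on the data-fit term followed by soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm; the per-layer nonlinearity
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(x, y, A, step, lam):
    """One ISTA iteration for sparse recovery from y ~ A @ x:
    a gradient step on ||Ax - y||^2, then soft-thresholding.
    Unfolding networks unroll K such steps and learn the operators."""
    grad = A.T @ (A @ x - y)
    return soft_threshold(x - step * grad, step * lam)
```

    In an unfolding network each of the K unrolled steps gets its own trainable parameters (here the operator and thresholds), and the whole chain is trained end-to-end as a decoder.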

    Distributed Video Coding: Iterative Improvements

    Get PDF

    Exploring probabilistic models for semi-supervised learning

    Get PDF
    Deep neural networks are increasingly harnessed for computer vision tasks, thanks to their robust performance. However, their training demands large-scale labeled datasets, which are labor-intensive to prepare. Semi-supervised learning (SSL) offers a solution by learning from a mix of labeled and unlabeled data. While most state-of-the-art SSL methods follow a deterministic approach, the exploration of their probabilistic counterparts remains limited. This research area is important because probabilistic models can provide uncertainty estimates critical for real-world applications. For instance, SSL-trained models may fall short of those trained with supervised learning due to potential pseudo-label errors in unlabeled data, and these models are more likely to make wrong predictions in practice. Especially in critical sectors like medical image analysis and autonomous driving, decision-makers must understand the model's limitations and when incorrect predictions may occur, insights often provided by uncertainty estimates. Furthermore, uncertainty can also serve as a criterion for filtering out unreliable pseudo-labels when unlabeled samples are used for training, potentially improving deep model performance. This thesis furthers the exploration of probabilistic models for SSL. Drawing on the widely used Bayesian approximation tool, Monte Carlo (MC) dropout, I propose a new probabilistic framework, the Generative Bayesian Deep Learning (GBDL) architecture, for semi-supervised medical image segmentation. This approach not only mitigates potential overfitting found in previous methods but also achieves superior results across four evaluation metrics. Unlike its empirically designed predecessors, GBDL is underpinned by a full Bayesian formulation, providing a theoretical probabilistic foundation. Acknowledging MC dropout's limitations, I introduce NP-Match, a novel probabilistic approach for large-scale semi-supervised image classification.
We evaluated NP-Match's generalization capabilities through extensive experiments in different challenging settings such as standard, imbalanced, and multi-label semi-supervised image classification. According to the experimental results, NP-Match not only competes favorably with previous state-of-the-art methods but also estimates uncertainty more rapidly than MC-dropout-based models, thus enhancing both training and testing efficiency. Lastly, I propose NP-SemiSeg, a new probabilistic model for semi-supervised semantic segmentation. This flexible model can be integrated with various existing segmentation frameworks to make predictions and estimate uncertainty. Experiments indicate that NP-SemiSeg surpasses MC dropout in accuracy, uncertainty quantification, and speed.
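For context on the MC-dropout baseline that NP-Match and NP-SemiSeg are compared against, here is a minimal numpy sketch of Monte Carlo dropout uncertainty estimation for a single softmax layer. The single-layer setup and all names here are illustrative assumptions, not the thesis's models:

```python
import numpy as np

def mc_dropout_predict(x, W, b, p=0.5, T=100, rng=None):
    """Monte Carlo dropout: average T stochastic forward passes of one
    softmax layer; the spread across passes gauges predictive uncertainty."""
    if rng is None:
        rng = np.random.default_rng(0)
    outs = []
    for _ in range(T):
        mask = rng.random(x.shape) > p      # drop each input at rate p
        h = (x * mask / (1 - p)) @ W + b    # inverted-dropout scaling
        e = np.exp(h - h.max())             # numerically stable softmax
        outs.append(e / e.sum())
    outs = np.stack(outs)
    mean = outs.mean(axis=0)                # predictive class distribution
    var = outs.var(axis=0)                  # per-class uncertainty estimate
    return mean, var
```

The key practical drawback the thesis notes is visible here: uncertainty requires T full forward passes at test time, which is what makes MC dropout slower than the neural-process-based alternatives.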

    The roles of online and offline replay in planning

    Get PDF
    Animals and humans replay neural patterns encoding trajectories through their environment, both whilst they solve decision-making tasks and during rest. Both on-task and off-task replay are believed to contribute to flexible decision making, though how their relative contributions differ remains unclear. We investigated this question by using magnetoencephalography (MEG) to study human subjects while they performed a decision-making task that was designed to reveal the decision algorithms employed. We characterised subjects in terms of how flexibly each adjusted their choices to changes in temporal, spatial and reward structure. The more flexible a subject, the more they replayed trajectories during task performance, and this replay was coupled with re-planning of the encoded trajectories. The less flexible a subject, the more they replayed previously preferred trajectories during rest periods between task epochs. The data suggest that online and offline replay both participate in planning but support distinct decision strategies.