32 research outputs found

    Adaptive models of Arabic text

    Get PDF

    ์–‘์„ฑ์ž ์ž๊ธฐ๊ณต๋ช…๋ถ„๊ด‘๋ฒ•์„ ์‚ฌ์šฉํ•œ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ๋‘๋‡Œ ๋Œ€์‚ฌ์ฒด ์ •๋Ÿ‰ํ™” ๊ธฐ๋ฒ•

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: College of Medicine, Department of Medical Science, February 2022. Advisor: Hyeonjin Kim.
    Nonlinear least-squares fitting (NLSF) is widely used in proton magnetic resonance spectroscopy (MRS) for the quantification of brain metabolites. However, it is known to be subject to variability in the quantitative results depending on the prior knowledge, and NLSF-based metabolite quantification is also sensitive to the quality of the spectra. Moreover, the Cramér-Rao lower bounds (CRLBs) used in combination with NLSF represent lower bounds of the fit errors rather than the actual errors; consequently, careful interpretation is required to avoid potential statistical bias. The purpose of this study was to develop more robust methods for metabolite quantification and uncertainty estimation in MRS by employing deep learning, which has demonstrated its potential in a variety of tasks, including medical imaging. To achieve this goal, first, a convolutional neural network (CNN) was developed that maps typical brain spectra degraded with noise, line broadening and an unknown baseline into noise-free, line-narrowed, baseline-removed spectra. Metabolites are then quantified from the CNN-predicted spectra by a simple linear regression, with more robustness against spectral degradation. Second, a CNN was developed that can isolate each individual metabolite signal from a typical brain spectrum. The CNN output is used not only for quantification but also for calculating a signal-to-background ratio (SBR) for each metabolite; the SBR, in combination with big training data, is then used to estimate measurement uncertainty heuristically. Finally, a Bayesian deep learning approach was employed for theory-oriented uncertainty estimation, in which Monte Carlo dropout is performed for the simultaneous estimation of metabolite content and the associated uncertainty. These proposed methods were all tested on in vivo data and compared with the conventional approach based on NLSF and CRLB. The methods developed in this study should be tested more thoroughly on a larger amount of in vivo data; nonetheless, the current results suggest that they may facilitate the clinical applicability of MRS.
    Table of contents:
    Chapter 1. Introduction: 1.1 Magnetic Resonance Spectroscopy (Nuclear Spin; Magnetization; MRS Signal; Chemical Shift; Indirect Spin-Spin Coupling; in vivo Metabolites; RF Pulses and Gradients; Water Suppression; Spatial Localization Methods in Single-Voxel MRS; Metabolite Quantification); 1.2 Deep Learning (Training for Regression Model; Training for Classification Model; Multilayer Perceptron; Model Evaluation and Selection; Training Stability and Initialization; Convolutional Neural Networks); 1.3 Purpose of the Research; 1.4 Preparation of MRS Spectra and Their Usage
    Chapter 2. Intact Metabolite Spectrum Mining by Deep Learning in Proton Magnetic Resonance Spectroscopy of the Brain: 2.1 Introduction; 2.2 Methods and Materials (Acquisition of in vivo Spectra; Acquisition of Metabolite Phantom Spectra; Simulation of Brain Spectra; Design and Optimization of CNN; Evaluation of the Reproducibility of the Optimized CNN; Metabolite Quantification from the Predicted Spectra; Evaluation of CNN in Metabolite Quantification; Statistical Analysis); 2.3 Results (SNR Distribution of the Simulated Spectra; Optimized CNN; Representative Simulated and CNN-predicted Spectra; Metabolite Quantification in Simulated Spectra; Representative in vivo and CNN-predicted Spectra; Metabolite Quantification in in vivo Spectra); 2.4 Discussions (Motivation of Study; Metabolite Quantification on Simulated and in vivo Brain Spectra; Metabolite Quantification Robustness against Low SNR; Study Limitation)
    Chapter 3. Deep Learning-Based Target Metabolite Isolation and Big Data-Driven Measurement Uncertainty Estimation in Proton Magnetic Resonance Spectroscopy of the Brain: 3.1 Introduction; 3.2 Methods and Materials (Acquisition and Analysis of in vivo Rat Brain Spectra; Simulation of Metabolite Basis Set; Acquisition of Metabolite Basis Set in Phantom; Simulation of Rat Brain Spectra Using Simulated Metabolite and Baseline Basis Sets; Simulation of Rat Brain Spectra Using Metabolite Phantom Spectra and in vivo Baseline; Design and Optimization of CNN; Metabolite Quantification from the CNN-predicted Spectra; Prediction of Quantitative Error; Evaluation of Proposed Method; Statistical Analysis); 3.3 Results (Performance of Proposed Method on Simulated Spectra Set I; Performance of Proposed Method, LCModel, and jMRUI on Simulated Spectra Set II; Proposed Method Applied to in vivo Spectra; Processing Time); 3.4 Discussions (Summary of the Study; Performance of Proposed Method on Simulated Spectra; Proposed Method Applied to in vivo Spectra; Robustness of CNNs against Different SNR; CRLB and Predicted Error; Study Limitation)
    Chapter 4. Bayesian Deep Learning-Based Proton Magnetic Resonance Spectroscopy of the Brain: Metabolite Quantification with Uncertainty Estimation Using Monte Carlo Dropout: 4.1 Introduction; 4.2 Methods and Materials (Theory; Preparation of Spectra; BCNN; Evaluation of Proposed Method; Statistical Analysis); 4.3 Results (Metabolite Content and Uncertainty Estimation on the Simulated Spectra; BCNN and LCModel on Modified in vivo Spectra); 4.4 Discussions (Motivation of Study; Metabolite Quantification on Simulated Brain Spectra; Uncertainty Estimation on Simulated Brain Spectra; Aleatoric, Epistemic and Total Uncertainty as a Function of SNR, Linewidth or Concentration of NAA; Robustness of BCNN against SNR and Linewidth Tested on Modified in vivo Spectra; Study Limitation)
    Chapter 5. Conclusion: 5.1 Research Summary; 5.2 Future Works
    Bibliography; Abstract in Korean
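    The Bayesian approach described in this abstract rests on Monte Carlo dropout: dropout is kept active at inference time, and repeated stochastic forward passes are treated as samples from an approximate posterior, whose mean and variance give the estimate and its uncertainty. The following is a minimal, hypothetical sketch of that general idea only; the network shape, layer sizes and the 512-point input are invented stand-ins, not the thesis's actual BCNN.

import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Toy regression network with dropout; sizes are illustrative."""
    def __init__(self, n_in=512, n_hidden=256, n_out=1, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Repeated stochastic forward passes with dropout left active."""
    model.train()  # keeps dropout stochastic at inference time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    # Sample mean is the content estimate; sample variance is the
    # (epistemic) uncertainty under the MC-dropout approximation.
    return draws.mean(dim=0), draws.var(dim=0)

model = DropoutRegressor()
spectrum = torch.randn(1, 512)  # placeholder for a 1D spectrum
content, uncertainty = mc_dropout_predict(model, spectrum)
print(content.item(), uncertainty.item())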

    Understanding and advancing PDE-based image compression

    Get PDF
    This thesis is dedicated to image compression with partial differential equations (PDEs). PDE-based codecs store only a small number of image points and propagate their information into the unknown image areas during the decompression step. For certain classes of images, PDE-based compression can already outperform the current quasi-standard, JPEG2000. However, the reasons for this success are not yet fully understood, and PDE-based compression is still in a proof-of-concept stage. With a probabilistic justification for anisotropic diffusion, we contribute to a deeper insight into design principles for PDE-based codecs. Moreover, by analysing the interaction between efficient storage methods and image reconstruction with diffusion, we can rank PDEs according to their practical value in compression. Based on these observations, we advance PDE-based compression towards practical viability: First, we present a new hybrid codec that combines PDE- and patch-based interpolation to deal with highly textured images. Furthermore, a new video player demonstrates the real-time capabilities of PDE-based image interpolation, and a new region-of-interest coding algorithm represents important image areas with high accuracy. Finally, we propose a new framework for diffusion-based image colourisation that we use to build an efficient codec for colour images. Experiments on real-world image databases show that our new method is qualitatively competitive with current state-of-the-art codecs.
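    As background for how such codecs reconstruct an image from a few stored points, the simplest PDE used for this purpose is homogeneous (linear) diffusion, which fills the unknown regions by solving the Laplace equation with the stored pixels held fixed. The sketch below illustrates only that generic reconstruction step, with an invented random mask and periodic boundaries via np.roll; it is not the thesis's codec, which relies on anisotropic diffusion and careful point selection.

import numpy as np

def diffusion_inpaint(image, mask, n_iter=2000, tau=0.2):
    """Propagate known pixels (mask == True) into unknown areas by
    iterating an explicit discretisation of the Laplace equation.
    np.roll gives periodic boundaries; tau < 0.25 keeps it stable."""
    u = np.where(mask, image, image[mask].mean())  # rough initialisation
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + tau * lap
        u[mask] = image[mask]  # re-impose the stored pixels
    return u

rng = np.random.default_rng(0)
img = rng.random((64, 64))
known = rng.random((64, 64)) < 0.05  # store only ~5% of the pixels
reconstruction = diffusion_inpaint(img, known)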

    Algorithms for integrated analysis of glycomics and glycoproteomics by LC-MS/MS

    Get PDF
    The glycoproteome is an intricate and diverse component of a cell, and it plays a key role in defining the interface between that cell and the rest of its world. Methods for studying the glycoproteome have been developed for released-glycan glycomics and site-localized bottom-up glycoproteomics using liquid chromatography-coupled mass spectrometry and tandem mass spectrometry (LC-MS/MS), which is itself a complex problem. Algorithms for interpreting these data are necessary to extract biologically meaningful information in a high-throughput, automated context. Several existing solutions have been proposed, but they may be found lacking for larger glycopeptides, complex samples, different experimental conditions, or different instrument vendors, or even because they simply ignore fundamentals of glycobiology. I present a series of open algorithms that approach the problem in an instrument-vendor-neutral, cross-platform fashion to address these challenges and integrate key concepts from the underlying biochemical context into the interpretation process. In this work, I created a suite of deisotoping and charge-state deconvolution algorithms for processing raw mass spectra at LC scale from a variety of instrument types. These tools performed better than previously published algorithms by enforcing the underlying chemical model more strictly, while maintaining a higher degree of signal fidelity. From this summarized, vendor-normalized data, I composed a set of algorithms for interpreting glycan profiling experiments that can be used to quantify glycan expression, and from this I constructed a graphical method to model the active biosynthetic pathways of the sample glycome and dig deeper into those signals than would be possible from the raw data alone. Lastly, I created a glycopeptide database search engine from these components that is capable of identifying the widest array of glycosylation types available, and I demonstrate a learning algorithm that can be used to tune the model to better understand the process of glycopeptide fragmentation under specific experimental conditions, outperforming a simpler model by between 10% and 15%. This approach can be further augmented with sample-wide or site-specific glycome models to increase depth of coverage for glycoforms consistent with prior beliefs.
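    One concrete ingredient of the deisotoping and charge-state deconvolution mentioned above is testing candidate charges against the observed spacing of an isotopic envelope: the peaks of a z-times charged species are separated by roughly the 13C-12C mass difference divided by z. The snippet below is a deliberately simplified, hypothetical illustration of that test only; the peak list, tolerance and scoring are invented, and the thesis's actual algorithms enforce a much richer chemical model.

ISOTOPE_SPACING = 1.00335  # 13C-12C mass difference in Da

def best_charge(peaks, max_charge=6, tol=0.02):
    """peaks: sorted m/z values of one isotopic envelope.
    Returns the charge whose expected spacing best matches the data."""
    best, best_err = None, float("inf")
    for z in range(1, max_charge + 1):
        expected = ISOTOPE_SPACING / z
        errs = [abs((b - a) - expected) for a, b in zip(peaks, peaks[1:])]
        err = sum(errs) / len(errs)
        if err < tol and err < best_err:
            best, best_err = z, err
    return best

envelope = [501.274, 501.776, 502.278]  # spacing ~0.502 -> doubly charged
print(best_charge(envelope))            # 2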

    WOFEX 2021 : 19th annual workshop, Ostrava, 1st September 2021 : proceedings of papers

    Get PDF
    The WOFEX 2021 workshop (the PhD workshop of the Faculty of Electrical Engineering and Computer Science) was held on 1st September 2021 at the VSB – Technical University of Ostrava. The workshop offers an opportunity for students to meet and share their research experiences, to discover commonalities in research and studentship, and to foster a collaborative environment for joint problem solving. PhD students are encouraged to attend in order to ensure a broad, unconfined discussion. In that view, this workshop is intended for students and researchers of this faculty, offering opportunities to meet new colleagues.

    Universal text preprocessing and postprocessing for PPM using Alphabet Adjustment

    No full text
    In this paper, we introduce several new universal preprocessing techniques to improve Prediction by Partial Matching (PPM) compression of UTF-8 encoded natural language text. These methods essentially 'adjust' the alphabet in some manner (for example, by expanding or reducing it) before the compression algorithm is applied to the amended text.
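    To make the idea of alphabet adjustment concrete, one simple form of alphabet expansion is to assign fresh symbols to the most frequent character bigrams before handing the text to the compressor, and to undo the mapping after decompression. The sketch below is a hypothetical illustration of that general transform, not one of the paper's actual techniques; it assumes the input contains no characters from the Unicode private-use area used for the new symbols.

from collections import Counter

def expand_alphabet(text, n_new=16):
    """Replace the n_new most frequent bigrams with fresh symbols.
    Assumes text contains no private-use-area characters already."""
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    table, code = {}, 0xE000  # Unicode private-use area
    for bg, _ in bigrams.most_common(n_new):
        table[bg] = chr(code)
        code += 1
    for bg, sym in table.items():
        text = text.replace(bg, sym)
    return text, {sym: bg for bg, sym in table.items()}

def restore_alphabet(text, reverse_table):
    """Invert the mapping after decompression."""
    for sym, bg in reverse_table.items():
        text = text.replace(sym, bg)
    return text

original = "the quick brown fox jumps over the lazy dog"
adjusted, table = expand_alphabet(original)
assert restore_alphabet(adjusted, table) == original  # lossless round trip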

    Advanced Location-Based Technologies and Services

    Get PDF
    Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address advancements.

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    Get PDF
    Accelerated information processing, communication, and storage are ever more important requirements of the big-data era. With the extensive rise in data availability, handy information acquisition, and growing data rates, a critical challenge emerges in efficient handling. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage would essentially be handled by an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including in data compression, this opens tremendous opportunities for contributions. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Such 3D local sampling efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16 bit-depths). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared with other state-of-the-art lossless compression standards.
    This work also investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16 bit-depths) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
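    The prediction-plus-residual idea described above can be made concrete with a toy example: a causal predictor guesses each sample from already decoded neighbours, and only the residuals, which are typically small and highly compressible, would be passed to an arithmetic coder. In the hypothetical sketch below, a trivial previous-sample predictor stands in for the thesis's LSTM predictor and the entropy-coding stage is omitted; only the lossless round trip of the residual transform is shown.

import numpy as np

def encode_residuals(signal):
    """Residuals of a previous-sample predictor (a runnable stand-in
    for a learned predictor); these would go to an arithmetic coder."""
    prediction = np.concatenate(([0], signal[:-1]))  # causal prediction
    return signal - prediction

def decode_residuals(residuals):
    """Exact inverse: the decoder repeats the same causal predictions."""
    return np.cumsum(residuals)

x = np.array([100, 101, 103, 103, 102], dtype=np.int64)  # toy voxel row
r = encode_residuals(x)                                  # [100, 1, 2, 0, -1]
assert np.array_equal(decode_residuals(r), x)            # lossless round trip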

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF