789 research outputs found

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. (Comment: 46 pages, 22 figures)

    Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model

    This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. The spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations are conducted on synthetic data to illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2-D and 3-D ultrasound images.
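
    As a rough illustration of the sampling scheme described above (not the paper's actual algorithm), the following minimal Python sketch alternates a Gibbs update of the labels with a Metropolis update of each component's scale. It assumes plain Rayleigh components as a stand-in for the heavy-tailed variant, omits the spatial-coherence prior, and uses a flat prior on the log-scale, so the acceptance ratio reduces to a likelihood ratio.

        import numpy as np

        rng = np.random.default_rng(0)

        def rayleigh_logpdf(x, sigma2):
            # log-density of a Rayleigh with scale parameter sigma2 = sigma**2
            return np.log(x) - np.log(sigma2) - x**2 / (2.0 * sigma2)

        def gibbs_mixture(x, n_comp=2, n_iter=500):
            """Toy Metropolis-within-Gibbs for a Rayleigh mixture (no spatial prior)."""
            labels = rng.integers(n_comp, size=x.size)
            sigma2 = np.linspace(0.5, 1.5, n_comp) * x.var()
            for _ in range(n_iter):
                # Gibbs step: sample each label from its conditional distribution.
                logp = np.stack([rayleigh_logpdf(x, s) for s in sigma2], axis=1)
                p = np.exp(logp - logp.max(axis=1, keepdims=True))
                p /= p.sum(axis=1, keepdims=True)
                labels = (p.cumsum(axis=1) > rng.random((x.size, 1))).argmax(axis=1)
                # Metropolis step: log-scale random walk on each component's scale;
                # with a flat prior on log(sigma2) the proposal is symmetric there.
                for k in range(n_comp):
                    xk = x[labels == k]
                    if xk.size == 0:
                        continue
                    prop = sigma2[k] * np.exp(0.1 * rng.normal())
                    log_ratio = (rayleigh_logpdf(xk, prop).sum()
                                 - rayleigh_logpdf(xk, sigma2[k]).sum())
                    if np.log(rng.random()) < log_ratio:
                        sigma2[k] = prop
            return labels, sigma2

        # e.g. separating a mixture of two Rayleigh populations:
        x = np.concatenate([rng.rayleigh(1.0, 500), rng.rayleigh(4.0, 500)])
        labels, sigma2 = gibbs_mixture(x)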

    Data-driven Channel Learning for Next-generation Communication Systems

    University of Minnesota Ph.D. dissertation. October 2019. Major: Electrical/Computer Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); x, 116 pages. The turn of the decade has marked the 'global society' as an information society, where the creation, distribution, integration, and manipulation of information have significant political, economic, technological, academic, and cultural implications. Its main drivers are digital information and communication technologies, which have resulted in a "data deluge", as the number of smart and Internet-capable devices increases rapidly. Unfortunately, establishing the information infrastructure to collect data becomes more challenging as communication networks for those devices become larger, denser, and more heterogeneous in order to meet the quality-of-service (QoS) requirements of users. Furthermore, scarcity of spectral resources, due to increased demand from mobile devices, urges the development of new methodologies for wireless communications, possibly facing unprecedented constraints in both hardware and software. At the same time, recent advances in machine learning tools enable statistical inference with efficiency and scalability on par with the volume and dimensionality of the data. These considerations justify the pressing need for machine learning tools that are amenable to new hardware and software constraints and can scale with the size of networks, to facilitate the advanced operation of next-generation communication systems. The present thesis is centered on analytical and algorithmic foundations enabling statistical inference of critical information under practical hardware/software constraints for designing and operating wireless communication networks. The vision is to establish a unified and comprehensive framework based on state-of-the-art data-driven learning and Bayesian inference tools to learn channel-state information in a way that is accurate yet efficient and non-demanding in terms of resources. The central goal is to theoretically, algorithmically, and experimentally demonstrate how valuable insights from data-driven learning can lead to solutions that markedly advance the state of the art in inference of channel-state information. To this end, the present thesis investigates two main research thrusts: i) channel-gain cartography leveraging low rank and sparsity; and ii) Bayesian approaches to channel-gain cartography for spatially heterogeneous environments. These research thrusts introduce novel algorithms that aim to tackle the issues of next-generation communication networks. The potential of the proposed algorithms is showcased by rigorous theoretical results and extensive numerical tests.
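
    For flavor, here is a minimal sketch of one ingredient of the first thrust: filling in a channel-gain map from scattered spatial measurements under a low-rank assumption. It is a generic singular-value-thresholding iteration, not the dissertation's algorithm; the function name, fixed threshold and sizes are illustrative.

        import numpy as np

        def complete_gain_map(g_obs, mask, lam=1.0, n_iter=200):
            """Low-rank completion of a partially observed channel-gain map.

            g_obs : 2-D array of measured gains (e.g. in dB), arbitrary where unobserved
            mask  : boolean array, True where a measurement exists
            lam   : singular-value threshold; larger values enforce lower rank
            """
            x = np.zeros_like(g_obs, dtype=float)
            for _ in range(n_iter):
                x[mask] = g_obs[mask]                      # keep observed entries
                u, s, vt = np.linalg.svd(x, full_matrices=False)
                x = (u * np.maximum(s - lam, 0.0)) @ vt    # shrink singular values
            return x

        # e.g. recover a rank-2 gain map from 30% of its entries:
        rng = np.random.default_rng(0)
        truth = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
        mask = rng.random(truth.shape) < 0.3
        estimate = complete_gain_map(truth, mask, lam=0.5)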

    Bayesian modelling of organ deformations in radiotherapy

    Modern radiotherapy tends to be highly conformal, meaning that a high and uniform dose is delivered to the target volume and as little dose as possible to the surrounding normal tissue. The total radiation dose is delivered across several smaller daily fractions, typically spanning several weeks. During and between these fractions, internal organs are constantly in motion due to factors such as breathing, changes in bladder filling, intestinal movement and external influences. Nevertheless, the positions of the target and relevant organs at risk (OARs) are determined based on a static 3D scan acquired before the start of treatment. A common safeguard used to account for such motion is the addition of margins around the target. These margins reduce the chance of missing parts of the target, yet increase the dose to the healthy tissue surrounding it. The margin size is based on statistics from previous patients. However, for the most part, the statistical methods used are very simple and typically based on an assumption of rigid patient motion. Similarly, motion of the OARs is commonly neglected. For estimation of dose to the OARs, it is common to assume that the organ shape at the static scan is representative of its shape during treatment. The work in this thesis concerns the use of techniques from Bayesian statistics for modelling inter-fraction organ motion and deformation. The goal is to accurately estimate the statistical distribution of shapes for one or more organs for a given patient. The distribution provides knowledge of how the patient's organs might move and deform during the radiotherapy course.
This information is useful for the evaluation of radiotherapy plans, prediction of adverse effects, so-called motion-robust radiotherapy planning, the generation of margins and more. The methods presented in this thesis have been evaluated for predicting deformations of the rectum of prostate cancer patients. For these patients, the rectum is a crucial OAR that is affected by both early and late side effects, including leakage, bleeding and pain. Compared to existing methods, the Bayesian approach developed and implemented in this thesis offers two advantages. First, combining population statistics and individual data leads to more accurate estimates of the patient-specific distribution. Second, the new methods estimate the distribution of the so-called systematic error in addition to variations from fraction to fraction. The systematic error is the difference between the estimated shape/position of an organ at the planning stage and its average shape/position during therapy, and was the subject of paper I. Here, we were able to reduce the systematic error of the rectum in 33 out of 37 prostate cancer patients using a straightforward method that combines the shape of the rectum at the planning CT with the population mean shape. We also evaluated the impact of this improvement on the estimation of dose to the rectum. We found no significant improvement in the estimation of two presumably relevant dose parameters (equivalent uniform dose and D5%). However, we did find a significant reduction in the bias of the estimated dose-volume histogram in the range from 52.5 Gy to 65 Gy. Paper II contains the central work of this project. It presents two organ deformation models based on Bayesian methods. The input data to these algorithms are organ shapes derived from 3D scans. The methods can take a varying number of such inputs from a given patient, and produce more accurate results the more inputs they are given. They provide an estimate of the mean shape of the organ, the uncertainty of this mean, and the distribution of the variation of shapes from fraction to fraction. The methods were evaluated on the task of estimating coverage probabilities, i.e. the probability that the organ will cover a certain point in the patient coordinate system, for the rectum of prostate cancer patients. For this evaluation, tens of thousands of organ shapes needed to be converted to so-called binary masks, which are 3D arrays of points in the patient coordinate system where the value of each point is 1 if the point is inside the organ and 0 if it is outside. This was enabled by the highly efficient point-in-polyhedron software presented in paper III, which was developed for this project. The models were given varying numbers of scans, from 1 to 10, as input, and were compared to two existing (non-Bayesian) models. The estimates of the coverage probability produced by the new models were significantly more similar to the ground truth than those produced by the existing models, at least up to three input scans. The main differences between the two new algorithms are their conceptual complexity and accuracy, and the choice of method in a given application will therefore come down to a trade-off between these qualities. An application of the models derived in paper II, concerning patients receiving re-irradiation for recurrent prostate cancer, is presented in paper IV.
We introduce a way of estimating the expectation and uncertainty of the accumulated dose to the rectum from the two treatment courses. The method is based on "representative shapes" of the rectum, that is, shapes that are probable and also particularly favourable or unfavourable in terms of dose. The advantage is that these shapes can be used as a visual aid for the oncologist or dose planner, and that the method can be implemented using existing features of treatment planning systems. Overall, this thesis provides novel solutions to the central challenge of mitigating organ motion in radiotherapy. The presented models are the first to simultaneously exploit population and patient-specific organ motion data while addressing both systematic and random errors. (Doctoral dissertation)
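
    The coverage probability used to evaluate the models above has a direct empirical form: average the binary masks of many sampled organ shapes. A minimal sketch follows (our illustration, not paper III's point-in-polyhedron software); the array layout and the 0.9 threshold are assumptions for the example.

        import numpy as np

        def coverage_probability(masks):
            """Voxel-wise probability that the organ covers each point.

            masks : array of shape (n_shapes, nx, ny, nz) with entries in {0, 1},
                    one binary mask per sampled or observed organ shape.
            """
            return np.asarray(masks, dtype=float).mean(axis=0)

        # e.g. with masks drawn from a fitted deformation model:
        # cov = coverage_probability(sampled_masks)
        # often_covered = cov > 0.9   # voxels covered in >90% of sampled shapes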

    Machine Learning Techniques, Detection and Prediction of Glaucoma – A Systematic Review

    Globally, glaucoma is the most common cause of both permanent blindness and visual impairment. However, the majority of patients are unaware they have the condition, and clinical practice continues to face difficulties in detecting glaucoma progression with current technology. An expert ophthalmologist examines the retinal portion of the eye to assess how the glaucoma is progressing, a manual process that is quite time-consuming. Deep learning and machine learning techniques can address this problem by diagnosing glaucoma automatically. This systematic review comprises a comprehensive analysis of automated glaucoma prediction and detection techniques. More than 100 articles on machine learning (ML) techniques are reviewed, with accompanying graphs and tables covering each study's summary, method, objective, performance, advantages and disadvantages. Among ML techniques, support vector machines (SVM), k-means and fuzzy c-means clustering algorithms are widely used in glaucoma detection and prediction. Through the systematic review, the most accurate techniques for detecting and predicting glaucoma can be identified and utilized in future work.
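
    To make the kind of classifier surveyed above concrete, here is a minimal sketch of an SVM pipeline. The feature matrix is synthetic and hypothetical (real studies would use labelled fundus or OCT measurements such as cup-to-disc ratio and retinal nerve fibre layer thickness); only the scikit-learn calls themselves are standard.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: hypothetical per-eye feature vectors; y: 1 = glaucomatous, 0 = healthy.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 8))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")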

    Metric Gaussian variational inference

    One main result of this dissertation is the development of Metric Gaussian Variational Inference (MGVI), a method for performing approximate inference in extremely high dimensions and for complex probabilistic models. The problem with high-dimensional and complex models is twofold. First, to capture the true posterior distribution accurately, a sufficiently rich approximation is required. Second, the number of parameters needed to express this richness scales dramatically with the number of model parameters. For example, explicitly expressing the correlation between all model parameters requires their squared number of correlation coefficients. In settings with millions of model parameters, this is infeasible. MGVI overcomes this limitation by replacing the explicit covariance with an implicit approximation, which does not have to be stored and is accessed via samples. This procedure scales linearly with the problem size and allows the full correlations to be taken into account in even extremely large problems, making the method applicable to significantly more complex setups. MGVI enabled a series of ambitious signal reconstructions by me and others, which are showcased here. These include a time- and frequency-resolved reconstruction of the shadow around the black hole M87* using data provided by the Event Horizon Telescope Collaboration, a three-dimensional tomographic reconstruction of interstellar dust within 300 pc of the Sun from Gaia starlight-absorption and parallax data, novel medical imaging methods for computed tomography, an all-sky Faraday rotation map combining distinct data sources, and simultaneous calibration and imaging with a radio interferometer. The second main result is an approach to using several independently trained deep neural networks to reason about complex tasks. Deep learning captures abstract concepts by extracting them from large amounts of training data, which alleviates the need for an explicit mathematical formulation. Here, a generative neural network is used as a prior distribution, and certain properties are imposed via classification and regression networks. The inference is then performed in terms of the latent variables of the generator, using MGVI and other methods. This allows novel questions to be answered flexibly, without re-training any neural network, and novel answers to be reached through Bayesian reasoning. This approach to Bayesian reasoning with neural networks can also be combined with conventional measurement data.
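
    The core trick, replacing the stored covariance with samples obtained through matrix-vector products, can be sketched in a few lines. The following is our schematic illustration using a toy Gauss-Newton metric M = J^T J + I, not the actual MGVI implementation; J, the sizes and the solver are illustrative.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(0)

        # Toy Gauss-Newton/Fisher metric M = J^T J + I for a linear forward model.
        n_data, dim = 50, 1000
        J = rng.normal(size=(n_data, dim)) / np.sqrt(n_data)

        def metric_matvec(v):
            # apply M without ever forming the dim x dim matrix
            return J.T @ (J @ v) + v

        M = LinearOperator((dim, dim), matvec=metric_matvec)

        # Sample s ~ N(0, M^{-1}): the additive structure of M lets us draw
        # w ~ N(0, M) directly, and conjugate gradients applies M^{-1} matrix-free,
        # so Cov(s) = M^{-1} M M^{-1} = M^{-1}.
        w = J.T @ rng.normal(size=n_data) + rng.normal(size=dim)
        s, info = cg(M, w)

        # Memory scales with dim, not dim**2: the covariance is never stored,
        # only accessed through such samples.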
