471 research outputs found

    MR-based pseudo-CT generation using water-fat decomposition and Gaussian mixture regression

    Get PDF
    Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2017. Computed tomography (CT) is considered the appropriate clinical practice for applications in which the attenuation of radiation by body tissues must be simulated, such as photon attenuation correction in positron emission tomography (PET) and dose calculation during radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) has been replacing CT in some applications, mainly because of its superior soft-tissue contrast and because it does not use ionising radiation. Techniques such as PET-MRI and MRI-only radiotherapy planning are therefore receiving growing attention. However, these techniques are limited by the fact that MR images provide no information about the attenuation and absorption of radiation by tissues. To overcome this problem, a CT image is usually acquired to perform the photon attenuation correction and to compute the dose to be delivered in radiotherapy. This practice, however, introduces errors during the registration of the MRI and CT images, which are propagated throughout the whole procedure. Moreover, the use of ionising radiation and the additional costs and acquisition time associated with obtaining multiple imaging modalities limit the clinical application of these practices. The natural next step is therefore the complete replacement of CT by MRI, which requires a method for obtaining a CT-equivalent image from MRI; the resulting image is called a pseudo-CT. Several methods have been developed to build pseudo-CTs, based either on the patient's anatomy or on regression between CT and MRI. In the first case, however, significant errors are frequent because image registration is difficult when the patient's geometry differs substantially from that of the atlas. In the second case, the absence of MR signal in cortical bone makes it indistinguishable from air. Sequences with very short echo times are normally used to distinguish cortical bone from air, but for larger regions such as the pelvis, hardware- and noise-related difficulties limit their application. In addition, these methods often use several MR acquisitions to obtain different contrasts, thereby increasing the acquisition time. This dissertation proposes a method for generating a pseudo-CT of the pelvic region that combines a water-fat decomposition algorithm with a Gaussian mixture regression model, using conventional MRI sequences; the different contrasts are obtained by post-processing the original images. A T1-weighted image was acquired with three echo times. An algorithm that decomposes the MR signal into its water and fat components was applied, yielding two images representing the water-only and fat-only signals, respectively. From these two images, a map of the fat fraction in each voxel was also computed.
In addition, the first and third echoes were used to compute the signal decay caused by T2*-related effects. The method for generating the pseudo-CT is based on a dual regression model between the MR-related variables and CT. The first model applies to soft tissues, while the second applies to bone tissues; these tissues were separated by manual delineation of the bone. The soft-tissue model consists of a polynomial regression between the fat fraction images and the CT values, with the polynomial order chosen by minimising the mean absolute error. For bone tissues, a Gaussian mixture regression model was applied using the fat, water, fat fraction and R2* images. These variables were selected because previous studies correlate them with bone mineral density, which in turn is related to CT intensities. The influence of including neighbourhood information in the regression model was studied by adding images of the standard deviation over the 27-voxel neighbourhood of each variable already included in the model. The number of components of the Gaussian mixture regression model was chosen by minimising the Akaike information criterion. The final pseudo-CT was obtained by combining the images produced by the two regression models, followed by a Gaussian filter with a standard deviation of 0.5 to mitigate bone segmentation errors. The method was validated on pelvic images of six patients using a leave-one-out cross-validation (LOOCV) procedure: the model was estimated from the variables of five patients (training images) and applied to the MR-related variables of the remaining patient (validation image) to generate the pseudo-CT. This was repeated for all six combinations of training and validation images, and the resulting pseudo-CTs were compared with the corresponding CT images. For the soft-tissue model, a second-order polynomial was found to give the best results; likewise, including neighbourhood information improved the estimation of pseudo-CT values in bone. The bone segmentation was considered adequate, with a mean Dice coefficient of 0.91±0.02 between the segmented tissues and the bone in CT. The mean absolute error between the pseudo-CT and the corresponding CT over all patients was 37.76±3.11 HU, and 96.61±10.49 HU in bone. A mean error of -2.68±6.32 HU was obtained, indicating a bias in the process. Mean peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of 23.92±1.62 dB and 0.91±0.01 were obtained, respectively. The largest errors were found in the rectum, since air was not considered in this method, at interfaces between different tissues, owing to registration errors, and in bone. The pseudo-CT generation method proposed in this dissertation therefore shows potential for accurate estimation of CT intensities.
The results represent a significant improvement over intensity-based methods reported in the literature, while being of the same order of magnitude as anatomy-based methods. Compared with the former, this method also has the advantage of using a single MRI sequence, which reduces acquisition time and the associated costs. Its main limitation is the manual segmentation of bone tissues, which hinders clinical implementation; the development of automatic bone segmentation techniques, such as shape models or atlas-based segmentation, is therefore needed. Combining such techniques with the method described in this dissertation could provide an alternative to CT images for dose calculation in radiotherapy and attenuation correction in PET-MRI.
Purpose: Methods for deriving computed tomography (CT) equivalent information from MRI are needed for attenuation correction in PET-MRI applications, as well as for dose planning in MRI-based radiation therapy workflows, due to the lack of correlation between the MR signal and the electron density of different tissues. This dissertation presents a method to generate a pseudo-CT from MR images acquired with a conventional MR pulse sequence. Methods: A T1-weighted Fast Field Echo sequence with 3 echo times was used. A 3-point water-fat decomposition algorithm was applied to the original MR images to obtain water-only and fat-only images as well as a quantitative fat fraction image. An R2* image was calculated using a mono-exponential fit between the first and third echoes of the original MR images. The method for generating the pseudo-CT includes a dual-model regression between the MR features and a matched CT image. The first model was applied to soft tissues, while the second model was applied to the bone anatomy, which was previously segmented. The soft-tissue regression model consists of a second-order polynomial regression between the fat fraction values in soft tissue and the HU values in the CT image, while the bone regression model consists of a Gaussian mixture regression including the water, fat, fat fraction and R2* values in bone tissues. Neighbourhood information was also included in the bone regression model by calculating an image of the standard deviation over the 27-voxel neighbourhood of each voxel in each MR-related feature. The final pseudo-CT was generated by combining the pseudo-CTs from both models, followed by the application of a Gaussian filter for additional smoothing. This method was validated using datasets covering the pelvic area of six patients and applying a leave-one-out cross-validation (LOOCV) procedure. During LOOCV, the model was estimated from the MR-related features and the CT data of 5 patients (training set) and applied to the MR features of the remaining patient (validation set) to generate a pseudo-CT image. This procedure was repeated for all six training and validation data combinations and the pseudo-CTs were compared to the corresponding CT image. Results: The average mean absolute error for the HU values in the body for all patients was 37.76±3.11 HU, while the average mean absolute error in the bone anatomy was 96.61±10.49 HU.
No large differences in method accuracy were noted for the different patients, except for the air in the rectum, which was classified as soft tissue. The largest errors were found in the rectum and at the interfaces between different tissue types. Conclusions: The pseudo-CT generation method proposed here has the potential to provide an accurate estimation of HU values. The results reported here are substantially better than those of other proposed voxel-based methods; however, they are in the same range as the results presented for anatomy-based methods. Further investigation into automatic MRI bone segmentation methods is necessary to allow the automatic application of this method in clinical practice. The combination of these automatic bone segmentation methods with the model reported here is expected to provide an alternative to CT images for dose planning in radiotherapy and attenuation correction in PET-MRI.
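    To make the dual-model pipeline above concrete, here is a minimal Python sketch (NumPy/SciPy/scikit-learn) of the steps named in the abstract: fat-fraction and two-point R2* maps, a second-order polynomial fit of HU on fat fraction for soft tissue, a Gaussian mixture regression (conditional expectation of HU given the bone features) for bone, and Gaussian smoothing of the combined result. All function names, array conventions and preprocessing choices are assumptions for illustration, not the dissertation's actual implementation; in the dissertation the number of mixture components is chosen by minimising the Akaike information criterion and the neighbourhood standard-deviation images are appended to the bone features.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def fat_fraction_map(water, fat, eps=1e-6):
        """Per-voxel fat fraction from the water/fat decomposition."""
        return fat / (water + fat + eps)

    def r2star_map(s_te1, s_te3, te1, te3):
        """Two-point mono-exponential R2* estimate from the first and third echo."""
        return np.log(np.abs(s_te1) / np.abs(s_te3)) / (te3 - te1)

    def fit_soft_tissue_model(fat_fraction, hu, order=2):
        """Second-order polynomial regression of HU on fat fraction (soft tissue)."""
        return np.polyfit(fat_fraction, hu, deg=order)

    def fit_bone_gmr(features, hu, n_components):
        """Joint GMM over [features, HU]; n_components would be chosen by AIC."""
        joint = np.column_stack([features, hu])
        return GaussianMixture(n_components=n_components,
                               covariance_type="full").fit(joint)

    def gmr_predict(gmm, x):
        """Conditional expectation E[HU | features] under the joint GMM."""
        d = x.shape[1]
        resp = np.zeros((x.shape[0], gmm.n_components))
        cond_mean = np.zeros((x.shape[0], gmm.n_components))
        for k in range(gmm.n_components):
            mu_x, mu_y = gmm.means_[k, :d], gmm.means_[k, d]
            cov = gmm.covariances_[k]
            sxx, sxy = cov[:d, :d], cov[:d, d]
            resp[:, k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, sxx)
            cond_mean[:, k] = mu_y + (x - mu_x) @ np.linalg.solve(sxx, sxy)
        resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-30)
        return (resp * cond_mean).sum(axis=1)

    def make_pseudo_ct(fat_fraction, bone_features, bone_mask, poly, gmm, sigma=0.5):
        """Combine both models and smooth with a Gaussian filter (sigma = 0.5)."""
        pct = np.polyval(poly, fat_fraction)          # soft-tissue prediction everywhere
        pct[bone_mask] = gmr_predict(gmm, bone_features[bone_mask])
        return gaussian_filter(pct, sigma=sigma)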

    PET/MR imaging of hypoxic atherosclerotic plaque using 64Cu-ATSM

    Get PDF
    ABSTRACT OF THE DISSERTATION: PET/MR Imaging of Hypoxic Atherosclerotic Plaque Using 64Cu-ATSM, by Xingyu Nie, Doctor of Philosophy in Biomedical Engineering, Washington University in St. Louis, 2017. Professor Pamela K. Woodard, Chair; Professor Suzanne Lapi, Co-Chair. It is important to accurately identify the factors involved in the progression of atherosclerosis because advanced atherosclerotic lesions are prone to rupture, leading to disability or death. Hypoxic areas are known to be present in human atherosclerotic lesions, and lesion progression is associated with the formation of lipid-loaded macrophages and increased local inflammation, which are potential major factors in the formation of vulnerable plaque. This dissertation work represents a comprehensive investigation of the non-invasive identification of hypoxic atherosclerotic plaque in animal models and human subjects using the PET hypoxia imaging agent 64Cu-ATSM. We first demonstrated the feasibility of 64Cu-ATSM for the identification of hypoxic atherosclerotic plaque and evaluated the relative effects of diet and genetics on hypoxia progression in atherosclerotic plaque in a genetically altered mouse model. We then fully validated the feasibility of using 64Cu-ATSM to image the extent of hypoxia in a rabbit model with atherosclerotic-like plaque using a simultaneous PET-MR system. We also proceeded with a pilot clinical trial to determine whether 64Cu-ATSM MR/PET scanning is capable of detecting hypoxic carotid atherosclerosis in human subjects. To improve the 64Cu-ATSM PET image quality, we investigated the Siemens HD (high-definition) PET software and four partial volume correction methods to correct for partial volume effects. In addition, we incorporated the attenuation effect of the carotid surface coil into the MR attenuation correction μ-map to correct for photon attenuation. In the long term, this imaging strategy has the potential to help identify patients at risk for cardiovascular events, guide therapy, and add to the understanding of plaque biology in human patients.
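    As a rough illustration of the μ-map modification mentioned above, the sketch below resamples a coil attenuation template onto the patient μ-map grid with a known rigid transform and adds it voxel-wise. The use of SciPy's affine_transform, the assumption that the coil template is CT-derived and already expressed as 511 keV linear attenuation coefficients, and the availability of the rigid transform are illustrative assumptions; the dissertation's actual procedure is not detailed in this abstract.

    import numpy as np
    from scipy.ndimage import affine_transform

    def add_coil_to_mu_map(patient_mu, coil_mu, rigid_matrix, rigid_offset):
        """Resample a coil attenuation template (cm^-1 at 511 keV) onto the patient
        mu-map grid using a known rigid transform, then add it voxel-wise, since
        the coil occupies space outside the patient."""
        coil_on_grid = affine_transform(coil_mu, rigid_matrix, offset=rigid_offset,
                                        output_shape=patient_mu.shape, order=1)
        return patient_mu + coil_on_grid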

    PET/MRI attenuation estimation in the lung: A review of past, present, and potential techniques

    Get PDF
    Positron emission tomography/magnetic resonance imaging (PET/MRI) potentially offers several advantages over positron emission tomography/computed tomography (PET/CT), for example, no CT radiation dose and soft tissue images from MR acquired at the same time as the PET. However, obtaining accurate linear attenuation correction (LAC) factors for the lung remains difficult in PET/MRI. LACs depend on electron density and in the lung, these vary significantly both within an individual and from person to person. Current commercial practice is to use a single-valued population-based lung LAC, and better estimation is needed to improve quantification. Given the under-appreciation of lung attenuation estimation as an issue, the inaccuracy of PET quantification due to the use of single-valued lung LACs, the unique challenges of lung estimation, and the emerging status of PET/MRI scanners in lung disease, a review is timely. This paper highlights past and present methods, categorizing them into segmentation, atlas/mapping, and emission-based schemes. Potential strategies for future developments are also presented
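    At their simplest, the segmentation-based schemes surveyed here assign a single population-based linear attenuation coefficient (LAC) to each tissue class of an MR-derived segmentation, which is exactly the practice the review questions for lung. A minimal sketch of that idea follows; the class labels and the approximate 511 keV LAC values are illustrative and not those of any particular vendor implementation.

    import numpy as np

    # Approximate linear attenuation coefficients (cm^-1) at 511 keV, shown for
    # illustration only; real systems use their own calibrated values.
    CLASS_LAC = {
        0: 0.0,     # air / background
        1: 0.022,   # lung: the single population value whose use the review questions
        2: 0.086,   # fat
        3: 0.096,   # soft tissue / water
    }

    def mu_map_from_segmentation(labels):
        """Map an integer tissue-label volume to a mu-map for attenuation correction."""
        mu = np.zeros(labels.shape, dtype=np.float32)
        for cls, lac in CLASS_LAC.items():
            mu[labels == cls] = lac
        return mu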

    Acquisition and Reconstruction Techniques for Fat Quantification Using Magnetic Resonance Imaging

    Get PDF
    Quantifying the tissue fat concentration is important for several diseases in various organs including liver, heart, skeletal muscle and kidney. Uniquely, MRI can separate the signal from water and fat in-vivo, rendering it the most suitable imaging modality for non-invasive fat quantification. Chemical-shift-encoded MRI is commonly used for quantitative fat measurement due to its unique ability to generate a separate image for water and fat. The tissue fat concentration can be consequently estimated from the two images. However, several confounding factors can hinder the water/fat separation process, leading to incorrect estimation of fat concentration. The inhomogeneities of the main magnetic field represent the main obstacle to water/fat separation. Most existing techniques rely mainly on imposing spatial smoothness constraints to address this problem; however, these often fail to resolve large and abrupt variations in the magnetic field. A novel convex relaxation approach to water/fat separation is proposed. The technique is compared to existing methods, demonstrating its robustness to resolve abrupt magnetic field inhomogeneities. Water/fat separation requires the acquisition of multiple images with different echo-times, which prolongs the acquisition time. Bipolar acquisitions can efficiently acquire the required data in shorter time. However, they induce phase errors that significantly distort the fat measurements. A new bipolar acquisition strategy that overcomes the phase errors and provides accurate fat measurements is proposed. The technique is compared to the current clinical sequence, demonstrating its efficiency in phantoms and in-vivo experiments. The proposed acquisition technique is also applied on animal models to achieve higher spatial resolution than the current sequence. In conclusion, this dissertation describes a complete framework for accurate and precise MRI fat quantification. Novel acquisitions and reconstruction techniques that address the current challenges for fat quantification are proposed
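    To make the chemical-shift encoding concrete, the sketch below shows the single-voxel signal model that underlies water/fat separation: given a field-map estimate, the complex echoes are demodulated and water and fat are recovered by linear least squares, from which a fat fraction follows. The single fat peak, the approximate 3 T chemical-shift value, and the assumption that the field map is already known (estimating it robustly is the hard part addressed by the convex relaxation work above) are simplifications for illustration.

    import numpy as np

    FAT_SHIFT_HZ = -434.0   # single fat peak at ~3.4 ppm and 3 T (assumed value)

    def separate_water_fat(signals, echo_times, psi_hz):
        """signals: (n_echoes,) complex samples of one voxel; echo_times in seconds;
        psi_hz: field-map offset of this voxel in Hz (assumed known here)."""
        te = np.asarray(echo_times, dtype=float)
        demod = np.asarray(signals) * np.exp(-2j * np.pi * psi_hz * te)  # remove field-map phase
        A = np.stack([np.ones(te.size, dtype=complex),
                      np.exp(2j * np.pi * FAT_SHIFT_HZ * te)], axis=1)
        (w, f), *_ = np.linalg.lstsq(A, demod, rcond=None)               # least-squares W and F
        fat_fraction = np.abs(f) / (np.abs(w) + np.abs(f) + 1e-12)
        return w, f, fat_fraction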

    Probabilistic partial volume modelling of biomedical tomographic image data

    Get PDF
    EThOS - Electronic Theses Online Service, United Kingdom.

    Sequential decision making in artificial musical intelligence

    Get PDF
    Over the past 60 years, artificial intelligence has grown from a largely academic field of research to a ubiquitous array of tools and approaches used in everyday technology. Despite its many recent successes and growing prevalence, certain meaningful facets of computational intelligence have not been as thoroughly explored. Such additional facets cover a wide array of complex mental tasks which humans carry out easily, yet are difficult for computers to mimic. A prime example of a domain in which human intelligence thrives, but machine understanding is still fairly limited, is music. Over the last decade, many researchers have applied computational tools to carry out tasks such as genre identification, music summarization, music database querying, and melodic segmentation. While these are all useful algorithmic solutions, we are still a long way from constructing complete music agents, able to mimic (at least partially) the complexity with which humans approach music. One key aspect which hasn't been sufficiently studied is that of sequential decision making in musical intelligence. This thesis strives to answer the following question: Can a sequential decision making perspective guide us in the creation of better music agents, and social agents in general? And if so, how? More specifically, this thesis focuses on two aspects of musical intelligence: music recommendation and human-agent (and more generally agent-agent) interaction in the context of music. The key contributions of this thesis are the design of better music playlist recommendation algorithms; the design of algorithms for tracking user preferences over time; new approaches for modeling people's behavior in situations that involve music; and the design of agents capable of meaningful interaction with humans and other agents in a setting where music plays a role (either directly or indirectly). Though motivated primarily by music-related tasks, and focusing largely on people's musical preferences, this thesis also establishes that insights from music-specific case studies can be applied in other concrete social domains, such as different types of content recommendation. Showing the generality of insights from musical data in other contexts serves as evidence for the utility of music domains as testbeds for the development of general artificial intelligence techniques. Ultimately, this thesis demonstrates the overall usefulness of taking a sequential decision making approach in settings previously unexplored from this perspective.
Computer Science

    Eco-Mobility-on-Demand Service with Ride-Sharing

    Full text link
    Connected Automated Vehicle (CAV) technologies are developing rapidly, and one of their more popular applications is to provide mobility-on-demand (MOD) services. However, with CAVs on the road, the fuel consumption of surface transportation may increase significantly. Travel demand could increase because the flexible service provides more accessible travel than the current public transit system, and trips from currently underserved populations and mode shifts from walking and public transit could also increase travel demand significantly. In this research, we explore opportunities for fuel saving with CAVs in an urban environment at different scales, including speed trajectory optimization at intersections, data-driven fuel consumption modelling and eco-routing algorithm development, and eco-MOD fleet assignment. First, we propose a speed trajectory optimization algorithm for signalized intersections. Although the optimal solution can be found through dynamic programming, the curse of dimensionality limits its computation speed and robustness. We therefore propose a sequential approximation approach that solves a sequence of mixed-integer optimization problems with quadratic objectives and linear constraints. The speed and acceleration constraints at intersections due to route choice are addressed using a barrier method. In this work, we limit the problem to a single intersection because of the route-choice application and only consider free-flow scenarios, but the algorithm can be extended to multiple intersections and to congested scenarios, where a leading vehicle is included as a constraint if an intersection driver model is available. Next, we develop a fuel consumption model for route optimization. The mesoscopic fuel consumption model is built through a data-driven approach that considers the tradeoff between model complexity and accuracy. To develop the model, a large quantity of naturalistic driving data is used; since the selected dataset does not contain fuel consumption data, a microscopic fuel consumption simulator, Autonomie, is used to augment the information. Gaussian Mixture Regression is selected to build the model because of its ability to capture nonlinearity. Instead of selecting the number of components by cross-validation, we use a Bayesian formulation that models the component indicator as a random variable with a Dirichlet prior. The model is used to estimate the fuel consumption cost for the routing algorithm; in this part, we assume the traffic network is static. Finally, the fuel consumption model and the eco-routing algorithm are integrated with the MOD fleet assignment. The MOD control framework models customers' travel time requirements as constraints, thus providing flexibility in cost-function design. At the current phase, we assume the traffic network is static and use offline-calculated travel times and fuel consumption to assign the fleet. To rebalance idling vehicles, we develop a traffic network partition algorithm that minimizes the expected travel time within each cluster. A Model Predictive Control (MPC) based algorithm is developed to match the idling fleet distribution with the demand distribution. A traffic simulator built on Simulation of Urban MObility (SUMO) and calibrated with data from the Safety Pilot Model Deployment (SPMD) database is used to evaluate the MOD system performance.
This dissertation shows that if the objective function of the fleet assignment is not designed properly, the fleet fuel consumption can increase relative to a baseline in which personal vehicles are used for travel, even when ride-sharing is allowed.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153446/1/xnhuang_1.pd
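    As a hedged sketch of the data-driven fuel model described in the abstract above, the snippet below fits a joint Gaussian mixture over trip-level features and fuel consumption with a Dirichlet prior on the mixing weights, so that the effective number of components is inferred rather than cross-validated. scikit-learn's BayesianGaussianMixture stands in for the dissertation's own Bayesian formulation, and the feature names and dimensions are assumptions. Conditioning the fitted joint mixture on the features, as in any Gaussian mixture regression, then yields an expected fuel cost per road link for the eco-routing step.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    def fit_fuel_model(features, fuel, max_components=20):
        """Fit a joint mixture over (trip features, fuel consumed per link).

        features: (N, d) mesoscopic descriptors, e.g. mean speed and stops per km
        (illustrative choices); fuel: (N,) fuel consumption samples."""
        joint = np.column_stack([features, fuel])
        # n_components is only an upper bound: the Dirichlet prior on the mixing
        # weights drives redundant components towards zero weight during fitting.
        model = BayesianGaussianMixture(
            n_components=max_components,
            weight_concentration_prior_type="dirichlet_distribution",
            covariance_type="full",
            max_iter=500,
            random_state=0,
        )
        return model.fit(joint)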