
    Reconstrução de imagens de tomografia por emissão de pósitrons com base em compressive sensing e informação a priori

    Dissertation (Master's in Biomedical Engineering) — Universidade de Brasília, Faculdade UnB Gama, Graduate Program in Biomedical Engineering, 2021. Positron emission tomography (PET) is a non-invasive medical imaging examination that uses small amounts of radioactive materials, called radiopharmaceuticals, to diagnose diseases through images. The technique combines anatomical and metabolic information, allowing diseases to be detected at early stages by measuring activity at the cellular level in the human body. It is widely used in the management of gastrointestinal, endocrine, and cardiac diseases, as well as various types of cancer. However, the cost of a PET examination and the data-acquisition time limit its use. Reconstruction algorithms alternative to the traditional ones have been developed to reduce the number of measurements required, or the time in which the PET scanner acquires them.
One way to pursue this goal is to use sub-sampled data reconstruction techniques based on Compressive Sensing (CS). CS makes it possible to reconstruct images from fewer measurements than the Nyquist criterion requires. Under these conditions, there must be a known transform domain in which the signal is sparse, the number of measurements must be sufficient (which depends on the degree of sparsity), and the sparsity domain must be incoherent with the measurement domain. Prefiltering is another approach that improves the trade-off between image quality and the number of measurements: combined with CS, it reduces the number of coefficients required for reconstruction. The principle is to generate prefiltered images in a first stage, with filters chosen to favor sparsity in each reconstructed version; a spectral-composition stage then assembles the target image from the filtered versions. In addition, support information is a further method that, used together with the previous ones, also shortens data acquisition by exploiting information from previous slices or time frames.
The use of prefiltering and of a priori information has improved reconstruction quality in some imaging modalities, especially Magnetic Resonance. However, no such algorithms were found in the literature for PET images, probably because of the additional difficulty of implementing a computational measurement process and its adjoint operator, which CS requires. Given these challenges, in this work the CS-with-prefiltering algorithms were implemented for signal reconstruction with a priori information, for the case of PET measurements. Once implemented, the algorithms were also run without a priori information, for comparison. The implementation required optimization programs (l1 and lp minimization), written in Octave. To evaluate the algorithms, images from a PET database, the Laboratory of Neuro Imaging (LONI) Image and Data Archive (IDA), were used. PET measurement values were computed from the database images, and each image was then reconstructed (from the measurements alone, as in a real setting) using the classical PET algorithm (filtered back projection), CS with prefiltering without a priori information, and finally CS with prefiltering and a priori information. The signal-to-error ratios of the images reconstructed by each method were compared, and a series of analyses assessed reconstruction quality as the number of radial lines increases.
The results suggest that images reconstructed by the proposed method achieve better quality in terms of signal-to-error ratio (p = 3.9874e-63) than filtered back projection. The statistical tests further suggest that the proposed method (CS with prefiltering and a priori information) yields better average image quality than the other two methods. Preliminary tests with a priori information on one-dimensional signals showed that it improves the SNR (dB) of the reconstructed signal. The impact of a priori information on PET images was also investigated: for sequences of images representing different time frames, information extracted from one frame is used to aid the reconstruction of the next. Finally, the effect on reconstruction quality, as measured by SNR (dB), was evaluated, with the best results obtained for larger images. Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
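The core CS idea in the abstract above, recovering a signal from fewer measurements than Nyquist requires by exploiting sparsity in a known domain, can be sketched in a few lines. The example below is an illustrative assumption throughout: a random Gaussian measurement matrix, an identity sparsity basis, and a generic ISTA solver for the l1-regularized problem, not the dissertation's prefiltered PET operators or its Octave l1/lp programs.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # incoherent random measurements
y = A @ x_true                             # m < n: sub-Nyquist measurement vector

x_hat = ista(A, y, lam=0.01, iters=2000)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

Because the signal is sparse and the Gaussian matrix is incoherent with the identity basis, the l1 solution recovers the signal despite having only 80 of 200 "Nyquist" samples.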

    Hybrid state estimators for the control of remotely operated underwater vehicles

    Submitted in partial fulfillment of the requirements for the degree of Ocean Engineer at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, August 1988. This paper explores the use of 'hybrid' state estimators to increase the accuracy and flexibility of the acoustic position measurement systems used in the control of underwater vehicles. Two approaches to extending the range of acoustic position measurement systems are explored. The first is the use of an inexpensive Strapdown Inertial Measurement System (SIMS) to augment the acoustic position information with inertial data; this approach builds on experience gained with an attitude and inertial measurement package fielded on the JASON JUNIOR Remotely Operated Vehicle (ROV). The second is the use of a mobile, platform-mounted acoustic net in conjunction with a platform tracking system; this investigation used the JASON ROV as the basis for the simulation work. As motivation, some of the theoretical and practical difficulties encountered when range is extended with an unaugmented system are explored, and simulation results demonstrate the effects of these difficulties on position estimation accuracy and on closed-loop control performance. Using measured sensor noise characteristics, a hybrid Kalman filter is developed for each approach to take the greatest advantage of the available information. The Kalman filter formulation differs between the two cases: in the second case, the geographic position of the ROV is the sum of the acoustic net's geographic position, measured at a different interval by an RF positioning system, and the position of the ROV relative to the net, measured acoustically. Closed-loop vehicle performance is evaluated for representative noise levels and update rates, with and without the augmentation discussed in the first approach. Finally, conclusions are drawn about the benefits and applications of the hybrid Kalman filter to the control of Remotely Operated Vehicles.
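The fusion idea in the abstract, frequent inertial data bridging infrequent acoustic fixes, can be sketched with a minimal 1-D Kalman filter. All noise levels, rates, and the constant-velocity model below are illustrative assumptions, not the thesis's measured JASON sensor characteristics or its actual filter formulation.

```python
import numpy as np

dt = 0.1                                   # inertial update interval (s)
F = np.array([[1, dt], [0, 1]])            # constant-velocity state transition
B = np.array([0.5 * dt**2, dt])            # input matrix for measured acceleration
H = np.array([[1.0, 0.0]])                 # acoustic fix observes position only
Q = np.diag([1e-4, 1e-3])                  # process noise (inertial drift)
R = np.array([[0.25]])                     # acoustic fix variance (0.5 m std)

rng = np.random.default_rng(1)
x = np.zeros(2)                            # estimate: [position, velocity]
P = np.eye(2)                              # estimate covariance
truth = np.zeros(2)

for k in range(600):
    accel = 0.2 * np.sin(0.05 * k)         # true vehicle acceleration
    truth = F @ truth + B * accel
    # predict every step with the (noisy) inertial acceleration measurement
    x = F @ x + B * (accel + 0.05 * rng.standard_normal())
    P = F @ P @ F.T + Q
    if k % 20 == 0:                        # acoustic fix at 1/20 the inertial rate
        z = truth[0] + 0.5 * rng.standard_normal()
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

print(abs(x[0] - truth[0]))                # position error stays bounded
```

Between fixes the inertial prediction keeps the estimate current; each acoustic update pulls accumulated drift back toward the true position, which is the benefit the hybrid estimator is after.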

    Real-time flutter identification

    The techniques and a FORTRAN 77 MOdal Parameter IDentification (MOPID) computer program developed for real-time identification of the frequencies and damping ratios of multiple flutter modes are documented. A physically meaningful model parameterization was combined with state-of-the-art recursive identification techniques and applied to the problem of real-time flutter mode monitoring. The performance of the algorithm, in terms of convergence speed and parameter estimation error, is demonstrated for several simulated data cases, and results from the analysis of actual flight data from two different vehicles are presented. The results indicate that the algorithm is capable of monitoring aircraft flutter characteristics in real time with a high degree of reliability.
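A minimal sketch of the recursive-identification idea behind such a monitor: recursive least squares (RLS) fitting an AR(2) model to a simulated single-mode response, from which a modal frequency and damping ratio follow. This is an assumed, simplified stand-in with synthetic parameters; MOPID itself is a FORTRAN 77 program handling multiple modes in real time.

```python
import numpy as np

dt = 0.01
f_true, zeta_true = 5.0, 0.03              # modal frequency (Hz), damping ratio
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)        # damped natural frequency
r = np.exp(-zeta_true * wn * dt)           # discrete pole radius
a1, a2 = 2 * r * np.cos(wd * dt), -r**2    # AR(2): y[k] = a1*y[k-1] + a2*y[k-2] + e[k]

rng = np.random.default_rng(2)
y = np.zeros(3000)
for k in range(2, len(y)):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + rng.standard_normal()

theta = np.zeros(2)                        # recursive estimate of [a1, a2]
P = 1e3 * np.eye(2)                        # inverse information matrix
for k in range(2, len(y)):
    phi = np.array([y[k-1], y[k-2]])       # regressor of past outputs
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta += gain * (y[k] - phi @ theta)   # innovation update
    P -= np.outer(gain, phi @ P)           # covariance downdate

a1_hat, a2_hat = theta
r_hat = np.sqrt(-a2_hat)                   # recover pole radius and angle
wd_hat = np.arccos(a1_hat / (2 * r_hat)) / dt
wn_hat = np.sqrt(wd_hat**2 + (np.log(r_hat) / dt)**2)
print(wn_hat / (2 * np.pi), -np.log(r_hat) / (dt * wn_hat))  # ≈ frequency, damping
```

Because each sample updates the estimate in constant time, the same recursion can track frequency and damping online as flight data arrives, which is the property real-time flutter monitoring needs.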

    Recording, compression and representation of dense light fields

    The concept of light fields allows image-based capture of scenes, providing, on a recorded dataset, many of the features available in computer graphics, such as simulation of different viewpoints or changes to core camera parameters, including depth of field. Because the recorded dimension increases from two for a regular image to four for a light field recording, previous work has mainly concentrated on small or undersampled light field recordings. This thesis is concerned with the recording of a dense light field dataset, including the estimation of suitable sampling parameters as well as the implementation of the required capture, storage, and processing methods. Toward this goal, the influence of the optical system on the, possibly band-unlimited, light field signal is examined, and the required sampling rates are derived from the band-limiting effects of the camera and optics. To increase storage capacity and bandwidth, a very fast image compression method is introduced, compressing an order of magnitude faster than previous methods and reducing the I/O bottleneck for light field processing. A fiducial marker system is provided for the calibration of the recorded dataset; it supplies a higher number of reference points than previous methods, improving camera pose estimation. In conclusion, this work demonstrates the feasibility of densely sampling a large light field and provides a dataset that may be used for evaluation, or as a reference, for light field processing tasks such as interpolation, rendering, and sampling.
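One feature the abstract mentions, changing depth of field on an already recorded dataset, can be illustrated with classic shift-and-add refocusing of a 4D light field. The scene, grid size, and disparity model below are toy assumptions, not the thesis's recording setup.

```python
import numpy as np

rng = np.random.default_rng(3)
tex = rng.random((64, 64))                 # texture of a flat scene plane
U = V = 5                                  # 5x5 grid of camera positions (u, v)
d = 2                                      # per-view disparity (pixels) of the plane

# record: each view (u, v) sees the texture shifted in proportion to (u, v)
lf = np.zeros((U, V, 64, 64))
for u in range(U):
    for v in range(V):
        lf[u, v] = np.roll(tex, shift=(u * d, v * d), axis=(0, 1))

def refocus(lf, disparity):
    """Undo the per-view shift for a chosen depth, then average the views."""
    out = np.zeros(lf.shape[2:])
    for u in range(lf.shape[0]):
        for v in range(lf.shape[1]):
            out += np.roll(lf[u, v], shift=(-u * disparity, -v * disparity),
                           axis=(0, 1))
    return out / (lf.shape[0] * lf.shape[1])

in_focus = refocus(lf, d)                  # shifts cancel: plane back in focus
out_of_focus = refocus(lf, 0)              # blurred average of shifted copies
print(np.max(np.abs(in_focus - tex)))      # ~0 for the in-focus depth
```

Denser (u, v) sampling gives smoother synthetic blur and finer viewpoint interpolation, which is why the thesis's dense recording matters for such tasks.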

    Conference Proceedings of the 3rd Biennial Symposium on Turbulence in Liquids

    The Third Biennial Symposium on Turbulence in Liquids showed further progress in the investigators' ability to measure turbulence parameters and in the general understanding of turbulence. The most impressive advances in measurement were the ability to probe deeper into the turbulent boundary layer, allowing profiles over the entire turbulence production region, and the rapid development of conditioned-sampling techniques for studying hypothesized turbulence mechanisms.

    Energy-Efficiency of Conveyor Belts in Raw Materials Industry

    This book focuses on research related to the energy efficiency of conveyor transportation. The solutions presented in the Special Issue help optimize, and thus reduce, the energy consumption costs of belt conveyors. This is achieved, among other means, by using better conveyor belt materials, which reduce rolling resistance and noise and improve the belt's ability to absorb the impact energy of material falling onto it. The use of mobile robots designed to detect defects in the conveyor's components makes conveyor operation safer, extends the conveyor's working life, and reduces unplanned stops due to damage.

    Research and Technology 1996: Innovation in Time and Space

    As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, the John F. Kennedy Space Center is placing increasing emphasis on its advanced technology development program. This program encompasses the efforts of the Engineering Development Directorate laboratories, most of the KSC operations contractors, academia, and selected commercial industries, all working in a team effort within their own areas of expertise. This edition of the Kennedy Space Center Research and Technology 1996 Annual Report covers the efforts of all these contributors to the KSC advanced technology development program, as well as our technology transfer activities.

    Data Service Outsourcing and Privacy Protection in Mobile Internet

    Mobile Internet data are characterized by large scale, varied patterns, and complex associations. On the one hand, efficient data-processing models are needed to support data services; on the other, substantial computing resources are needed to provide data security services. Because mobile terminals have limited resources, they cannot complete large-scale data computation and storage on their own; however, outsourcing these tasks to third parties may pose risks to user privacy. This monograph focuses on key technologies of data service outsourcing and privacy protection, including existing methods of data analysis and processing, fine-grained data access control through effective user privacy protection mechanisms, and data sharing in the mobile Internet.