
    Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI

    Get PDF
    Background: Prostate cancer is one of the most common forms of cancer in males, making early diagnosis important. Magnetic resonance imaging (MRI) is useful for visualizing and localizing tumor candidates, and endorectal coils (ERC) can improve the signal-to-noise ratio (SNR). However, these coils introduce intensity inhomogeneities, and the surface coil intensity correction built into MRI scanners, used to reduce these inhomogeneities, typically leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI that allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model to correct non-stationary noise variations. Such a method is particularly useful for improving the image quality of coil intensity corrected endorectal MRI performed at the MRI scanner level, when the original raw data are not available. Results: SNR and contrast-to-noise ratio (CNR) analysis in patient experiments demonstrates average improvements of 11.7 dB and 11.2 dB, respectively, over uncorrected endorectal MRI, and the approach performs strongly compared with existing methods. Conclusions: A new noise compensation method was developed to improve the quality of coil intensity corrected endorectal MRI performed at the MRI scanner level. We show that the proposed approach achieves promising noise compensation performance, which is particularly important when processing coil intensity corrected endorectal MRI data at the scanner level and when the original raw data are not available. Comment: 23 pages
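    The abstract describes the method only at a high level. As a rough illustration of how a spatially-adaptive noise model can drive a Monte Carlo (sampling-based) estimate, the sketch below, which is not the authors' algorithm, draws candidate intensities from a local search window and weights them by a Gaussian likelihood whose standard deviation varies per pixel according to an assumed ERC SNR profile; the function names, the dB-based sigma map, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

def spatially_adaptive_sigma(image, snr_profile_db):
    """Per-pixel noise std from an assumed SNR profile in dB: sigma = signal / 10^(SNR/20)."""
    return np.clip(image, 1e-6, None) / (10.0 ** (snr_profile_db / 20.0))

def mc_noise_compensation(noisy, snr_profile_db, radius=5, n_samples=64, seed=None):
    """Monte Carlo estimate of each pixel: randomly sample neighbors from a search
    window and weight them by a Gaussian likelihood with spatially varying sigma."""
    rng = np.random.default_rng(seed)
    sigma = spatially_adaptive_sigma(noisy, snr_profile_db)
    pad = np.pad(noisy, radius, mode='reflect')
    out = np.empty_like(noisy, dtype=float)
    H, W = noisy.shape
    for i in range(H):
        for j in range(W):
            # draw random offsets within the search window
            di = rng.integers(-radius, radius + 1, n_samples)
            dj = rng.integers(-radius, radius + 1, n_samples)
            samples = pad[i + radius + di, j + radius + dj]
            # likelihood of each sample given the observed pixel and the local sigma
            w = np.exp(-0.5 * ((samples - noisy[i, j]) / sigma[i, j]) ** 2)
            out[i, j] = np.sum(w * samples) / (np.sum(w) + 1e-12)
    return out
```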

    Preliminary clinical evaluation of the ASTRA4D algorithm

    Get PDF
    Objectives. To propose and evaluate a four-dimensional (4D) algorithm for joint motion elimination and spatiotemporal noise reduction in low-dose dynamic myocardial computed tomography perfusion (CTP). Methods. Thirty patients with suspected or confirmed coronary artery disease were prospectively included and underwent dynamic contrast-enhanced 320-row CTP. The presented deformable image registration method, ASTRA4D, identifies a low-dimensional linear model of contrast propagation (by principal component analysis, PCA) of the temporally pre-smoothed time-intensity curves (by local polynomial regression). Quantitative (standard deviation, signal-to-noise ratio (SNR), temporal variation, volumetric deformation) and qualitative (motion, contrast, contour sharpness; 1, poor; 5, excellent) measures of CTP quality were assessed for the original and motion-compensated volumes (without and with temporal filtering: PCA/ASTRA4D). Following visual myocardial perfusion deficit detection by two readers, diagnostic accuracy was evaluated in 15 patients using 1.5T magnetic resonance (MR) myocardial perfusion imaging as the reference standard. Results. Registration using ASTRA4D was successful in all 30 patients and, compared with the PCA benchmark, resulted in significantly (p<0.001) reduced noise over time (-83%, 178.5 vs 29.9) and spatially (-34%, 21.4 vs 14.1), as well as improved SNR (+47%, 3.6 vs 5.3) and subjective image quality (motion, contrast, contour sharpness: +1.0, +1.0, +0.5). ASTRA4D yielded significantly improved per-segment sensitivity of 91% (58/64) and similar specificity of 96% (429/446) compared with PCA (52%, 33/64; 98%, 435/446; p=0.011) and the original sequence (45%, 29/64; 98%, 438/446; p=0.003) in the visual detection of perfusion deficits. Conclusions. The proposed functional approach to temporal denoising and morphologic alignment improved quality metrics and sensitivity of 4D CTP in the detection of myocardial ischemia.
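    The abstract names two preprocessing ingredients, local polynomial regression of the per-voxel time-intensity curves followed by PCA to obtain a low-dimensional linear model of contrast propagation, without giving implementation details. The sketch below is a rough illustration of those two steps only (the deformable registration itself is out of scope); the use of a Savitzky-Golay filter for the local polynomial regression and the choice of three principal components are assumptions, not the published ASTRA4D settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def low_dim_contrast_model(volumes, window=5, polyorder=2, n_components=3):
    """volumes: array of shape (T, Z, Y, X) -- dynamic CT perfusion frames.
    Returns the temporally smoothed sequence and its low-rank PCA reconstruction,
    i.e. a low-dimensional linear model of contrast propagation.
    `window` must be odd and no larger than the number of frames T."""
    T = volumes.shape[0]
    curves = volumes.reshape(T, -1)                              # one time-intensity curve per voxel
    smoothed = savgol_filter(curves, window, polyorder, axis=0)  # local polynomial regression in time
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(smoothed.T)                       # voxels as samples, time points as features
    recon = pca.inverse_transform(scores).T                      # low-rank temporal model
    return smoothed.reshape(volumes.shape), recon.reshape(volumes.shape)
```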

    Edge-enhancing Filters with Negative Weights

    Full text link
    In [DOI:10.1109/ICMEW.2014.6890711], graph-based denoising is performed by projecting the noisy image onto a lower-dimensional Krylov subspace of the graph Laplacian, constructed using nonnegative weights determined by distances between the image data corresponding to image pixels. We extend the construction of the graph Laplacian to the case where some graph weights can be negative. Removing the positivity constraint provides a more accurate inference of the graph model behind the data, and thus can improve the quality of filters for graph-based signal processing, e.g., denoising, compared to the standard construction, without affecting the costs. Comment: 5 pages; 6 figures. Accepted to the IEEE GlobalSIP 2015 conference.
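    As a rough illustration of the kind of construction described here, the sketch below builds a sparse Laplacian on the pixel grid with weights that are allowed to go negative and denoises by projecting the image onto its lowest graph-frequency eigenvectors, computed with a Lanczos (Krylov-subspace) eigensolver. The difference-of-Gaussians weight kernel, the 4-neighbor grid graph, and the eigenvector projection are illustrative assumptions; the papers' actual filters differ in both the weight definition and the Krylov construction.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def graph_laplacian(img, sigma=0.1, beta=0.5):
    """Sparse graph Laplacian on the 4-neighbor pixel grid. The weight kernel
    is a difference of Gaussians, which goes negative for dissimilar neighbors
    (an illustrative choice, not the papers' kernel)."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        d2 = (img.ravel()[a] - img.ravel()[b]) ** 2
        wgt = np.exp(-d2 / sigma**2) - beta * np.exp(-d2 / (4 * sigma**2))
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]  # symmetric edges
    W = coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w)).tocsr()
    D = diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W

def lowpass_denoise(img, k=40, **kw):
    """Project the noisy image onto the span of the k lowest graph-frequency
    eigenvectors, obtained with a Lanczos (Krylov) eigensolver."""
    L = graph_laplacian(img, **kw)
    vals, vecs = eigsh(L, k=k, which='SA')   # smallest algebraic eigenvalues
    x = img.ravel()
    return (vecs @ (vecs.T @ x)).reshape(img.shape)
```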

    Color Image Processing based on Graph Theory

    Full text link
    Computer vision is one of the fastest growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of many research projects; it is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by factors that hinder optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image, but in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat the loss of information, but they typically do not account for the noise present in the image they process: applied to a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering may remove information that, in turn, cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, and potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. The model has been studied in depth as a function of the threshold, the key parameter that ensures the correct classification of the image pixels, and its characteristics have been explored to exploit it fully in each application. To approach high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise.
Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the output of a filter with high smoothing capability with that of a softer one for edge/detail regions. Furthermore, another application of our model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, addressing one of the classical challenges in the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view. Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
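    As a rough sketch of the soft-switching idea described above (not the thesis' graph-based classifier), the following code blends a strong smoother with a gentler edge-preserving filter according to a per-pixel flatness score; the local-variance test, the Gaussian/median filter pair, and the parameter names are assumptions for illustration. For color images the filter would be applied per channel or replaced by a vector-valued variant.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter, median_filter

def soft_switching_smoother(img, tau=0.02, strong_sigma=2.0, soft_size=3):
    """Blend a strong smoother (flat regions) with a gentle edge-preserving
    filter (detail regions), driven by a per-pixel flatness score.
    The flatness score is a simple local-variance test standing in for
    the thesis' graph-based pixel classification."""
    local_mean = uniform_filter(img, size=5)
    local_var = np.maximum(uniform_filter(img**2, size=5) - local_mean**2, 0.0)
    flatness = np.exp(-local_var / tau**2)              # ~1 in flat regions, ~0 near edges/details
    strong = gaussian_filter(img, sigma=strong_sigma)   # high smoothing capability
    gentle = median_filter(img, size=soft_size)         # softer, edge/detail preserving
    return flatness * strong + (1.0 - flatness) * gentle
```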

    Man-machine interactive imaging and data processing using high-speed digital mass storage

    Get PDF
    The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture and increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.

    Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy

    Get PDF
    Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown consistently over the last decades. However, this trend has raised concerns about the increase in X-ray exposure, as even low-dose procedures have turned out to be less safe than previously assumed, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, the use of very low-dose protocols would be extremely beneficial. Nonetheless, such protocols produce very noisy images, which need to be suitably denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring effects that undermine the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed noise statistics-based criteria to improve their denoising performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which takes advantage of a priori knowledge of quantum noise statistics to perform noise reduction while preserving edges; it has already outperformed many state-of-the-art methods in the denoising of images corrupted by quantum noise, while also being suitable for real-time hardware implementation. Several issues that currently limit the actual use of very low-dose protocols in clinical practice are addressed: the evaluation of the actual performance of denoising algorithms in very low-dose conditions, the optimization of tuning parameters to obtain the best denoising performance, the design of an index to properly measure the quality of X-ray images, and the assessment of an a priori noise characterization approach to account for time-varying noise statistics due to changes of X-ray tube settings. An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field Programmable Gate Array (FPGA). The novel algorithm provides more effective noise reduction, also for low-contrast moving objects, thus relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity, which keeps logic resource usage low even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their actual use in clinical practice.
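    The dissertation describes the NVCA principle, conditioning a local average on a priori quantum-noise statistics so that edges are preserved, without the exact rule being reproduced here. The sketch below is a simplified illustration of that principle, not the published NVCA algorithm: it assumes a Poisson-like model in which the noise standard deviation grows as the square root of the intensity, and it averages only those neighbors that lie within k standard deviations of the center pixel; gain, k, and the window radius are assumed parameters.

```python
import numpy as np

def nv_conditioned_average(img, gain=1.0, k=2.0, radius=2):
    """Edge-preserving average conditioned on an a priori quantum-noise model:
    a neighbor contributes only if it differs from the center pixel by less
    than k times the local noise std, sigma = sqrt(gain * intensity)
    (Poisson-like model). Rough sketch, not the exact NVCA rule."""
    pad = np.pad(img.astype(float), radius, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    count = np.zeros((H, W))
    center = pad[radius:radius + H, radius:radius + W]
    sigma = np.sqrt(gain * np.clip(center, 1e-6, None))
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            nb = pad[radius + di:radius + di + H, radius + dj:radius + dj + W]
            mask = np.abs(nb - center) <= k * sigma      # neighbor consistent with noise model
            out += np.where(mask, nb, 0.0)
            count += mask
    return out / np.maximum(count, 1)
```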