32 research outputs found

    Bayes-optimal inverse halftoning and statistical mechanics of the Q-Ising model

    On the basis of the statistical mechanics of the Q-Ising model, we formulate Bayesian inference for the problem of inverse halftoning, the inverse process of representing gray scales in images by means of black and white dots. Using Monte Carlo simulations, we investigate the statistical properties of the inverse process; in particular, we reveal the condition under which the Bayes-optimal solution minimizes the mean-square error. The numerical result is qualitatively confirmed by an analysis of the infinite-range model. As a demonstration of our approach, we apply the method to retrieve a grayscale image, such as the standard image `Lenna', from its halftoned version. We find that the Bayes-optimal solution gives a finely restored grayscale image that is very close to the original.
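The posterior-mean (MMSE, i.e. Bayes-optimal under squared error) restoration described in this abstract can be sketched with a toy Metropolis sampler. Everything below is illustrative: the quadratic Q-Ising smoothness prior, the threshold-dither degradation model, and the coupling values `J`, `h`, `T` are assumptions, not the paper's exact Hamiltonian.

```python
import numpy as np

rng = np.random.default_rng(0)

Q = 8            # number of gray levels in the Q-Ising model
J = 1.0          # smoothness (ferromagnetic) coupling of the prior
h = 2.0          # strength of the data (halftone) term
T = 0.8          # sampling temperature

def halftone(img, Q):
    """Crude dither halftoning: a white dot appears where the gray level
    exceeds a random threshold (assumed degradation model)."""
    return (img > rng.integers(0, Q, img.shape)).astype(int)

def local_energy(z, tau, i, j, val):
    """Energy contribution of site (i, j) if it takes gray level `val`."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < z.shape[0] and 0 <= nj < z.shape[1]:
            e += J * (val - z[ni, nj]) ** 2 / Q**2   # Q-Ising smoothness prior
    e += h * (tau[i, j] - val / (Q - 1)) ** 2        # consistency with the dots
    return e

def restore(tau, sweeps=200):
    """Estimate the gray-scale image as the posterior mean over
    Metropolis samples of the Q-Ising gray levels."""
    z = rng.integers(0, Q, tau.shape)
    acc = np.zeros(tau.shape, float)
    n = 0
    for s in range(sweeps):
        for i in range(tau.shape[0]):
            for j in range(tau.shape[1]):
                new = rng.integers(0, Q)
                dE = (local_energy(z, tau, i, j, new)
                      - local_energy(z, tau, i, j, z[i, j]))
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    z[i, j] = new
        if s >= sweeps // 2:          # discard burn-in, then accumulate
            acc += z
            n += 1
    return acc / n
```

Averaging samples rather than taking the most probable configuration is what makes the estimator optimal for mean-square error, which is the condition the abstract highlights.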

    Statistical Mechanics of Inverse Halftoning


    Transport and turbulence in quasi-uniform and versatile Bose-Einstein condensates


    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, owing to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving both the physician's visual evaluation and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
    Whereas previous denoising work compromises between the computational cost and the effectiveness of the method, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
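The low-rank building block mentioned above can be illustrated with a minimal singular-value-thresholding denoiser. The thesis learns the optimal thresholds with a deep model; the fixed cutoff `tau` and rank `k` below are hand-set stand-ins for that learned predictor.

```python
import numpy as np

def svd_denoise(img, k=None, tau=None):
    """Low-rank denoising: keep only the dominant singular values.

    `tau` applies hard singular-value thresholding; `k` applies plain
    rank-k truncation. Both are illustrative substitutes for the
    learned thresholds described in the abstract.
    """
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    if tau is not None:
        s = np.where(s >= tau, s, 0.0)   # hard thresholding
    elif k is not None:
        s[k:] = 0.0                      # rank-k truncation
    return (U * s) @ Vt                  # reassemble the low-rank image
```

The intuition is that the image content concentrates in a few large singular values while speckle noise spreads thinly across all of them, so zeroing the small ones removes mostly noise.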

    Echantillonnage compressé le long de trajectoires physiquement plausibles en IRM

    Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from magnetic parameters (T1, T2, and proton density). Data are acquired in k-space, corresponding to spatial Fourier frequencies. Because of physical constraints, displacement in k-space is subject to kinematic constraints: magnetic field gradients and their temporal derivatives are upper bounded. Hence, the scanning time increases with the image resolution. Decreasing scanning time is crucial to improve patient comfort, decrease exam costs, limit image distortions (e.g., created by patient movement), or improve temporal resolution in functional MRI. Reducing scanning time can be addressed with Compressed Sensing (CS) theory, which guarantees the perfect recovery of an image from undersampled k-space data, assuming that the image is sparse in a wavelet basis. Unfortunately, CS theory cannot be directly applied in the MRI setting, for two reasons: i) the acquisition (Fourier) and representation (wavelet) bases are coherent, and ii) the sampling schemes obtained from CS theorems are composed of isolated measurements and cannot be realistically implemented by magnetic field gradients, since sampling is usually performed along continuous or more regular curves. However, heuristic applications of CS in MRI have provided promising results. In this thesis, we aim to develop theoretical tools to apply CS to MRI and other modalities. On the one hand, we propose a variable density sampling theory to address the first impediment: the more information a sample contains, the more likely it is to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill the acquisition constraints while traversing k-space with the sampling density advocated by the theory.
    The second point is complex and is addressed step by step. First, we propose continuous sampling schemes based on random walks and on the travelling salesman problem (TSP). Then, we propose a projection algorithm onto the space of constraints that returns the feasible curve closest to an input curve (e.g., a TSP solution). Finally, we provide an algorithm to project a measure onto a set of measures carried by parameterizations; in particular, when this set is the one carried by admissible curves, the algorithm returns a curve whose sampling density is close to the measure to be projected. This yields an admissible variable density sampler. The reconstruction results obtained in simulations using this strategy outperform those of existing acquisition trajectories (spiral, radial) by about 3 dB, and make it possible to envision an implementation on a real 7 T scanner, notably in the context of high-resolution anatomical imaging.
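The variable-density idea can be sketched as a Bernoulli mask over k-space whose sampling probability decays with frequency radius, so that the more informative low frequencies are more likely to be drawn. The radial profile, the `frac` budget, and the `decay` exponent below are illustrative choices, not the optimized density or the gradient-constrained trajectories derived in the thesis.

```python
import numpy as np

def variable_density_mask(n, m, frac=0.3, decay=2.0, rng=None):
    """Bernoulli variable-density sampling mask for an n x m k-space.

    `frac` is the target fraction of acquired samples and `decay`
    the radial fall-off exponent of the density (both hypothetical).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    ky = np.fft.fftfreq(n)[:, None]          # vertical spatial frequencies
    kx = np.fft.fftfreq(m)[None, :]          # horizontal spatial frequencies
    r = np.sqrt(kx**2 + ky**2)               # distance to the k-space centre
    p = 1.0 / (1.0 + (10 * r / r.max()) ** decay)   # radial density profile
    p *= frac * n * m / p.sum()              # normalise to the sample budget
    p = np.clip(p, 0.0, 1.0)                 # probabilities stay in [0, 1]
    return rng.random((n, m)) < p            # draw each sample independently
```

Isolated Bernoulli draws like these are exactly what gradient hardware cannot realize, which is why the thesis then projects such densities onto curves that satisfy the kinematic constraints.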

    Nowcasting for a high-resolution weather radar network

    Fall 2010. Includes bibliographical references. Short-term prediction (nowcasting) of high-impact weather events can lead to significant improvements in warnings and advisories and is of great practical importance. Nowcasting using weather radar reflectivity data has been shown to be particularly useful. The Collaborative Adaptive Sensing of the Atmosphere (CASA) radar network provides high-resolution reflectivity data amenable to producing valuable nowcasts. The high-resolution nature of CASA data requires an efficient nowcasting approach, which motivated the development of the Dynamic Adaptive Radar Tracking of Storms (DARTS) and sinc-kernel-based advection nowcasting methodology. This methodology was implemented operationally in the CASA Distributed Collaborative Adaptive Sensing (DCAS) system in a robust and efficient manner, as required by the high-resolution nature of CASA data and the distributed environment in which the nowcasting system operates. The system currently provides nowcasts up to 10 min ahead to support emergency-manager decision-making, and 1-5 min ahead to steer the CASA radar nodes to better observe advecting storm patterns for forecasters and researchers. Results of nowcasting performance during the 2009 CASA IP experiment are presented. Additionally, state-of-the-art scale-based filtering methods were adapted and evaluated for use in the CASA DCAS to provide a scale-based analysis of nowcasting. DARTS was also incorporated into the Weather Support to Deicing Decision Making system to provide more accurate and efficient snow-water-equivalent nowcasts for aircraft deicing decision support, relative to the radar-based nowcasting method currently used in the operational system.
    Results of an evaluation using data collected in 2007-2008 by the Weather Surveillance Radar-1988 Doppler (WSR-88D) located near Denver, Colorado, and at the National Center for Atmospheric Research Marshall Test Site near Boulder, Colorado, are presented. DARTS was also used to study the short-term predictability of precipitation patterns depicted by high-resolution reflectivity data observed at microalpha (0.2-2 km) to mesobeta (20-200 km) scales by the CASA radar network. Additionally, DARTS was used to investigate the performance of nowcasting rainfall fields derived from specific differential phase estimates, which have been shown to provide more accurate and robust rainfall estimates than those made from radar reflectivity data.
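The advection step at the core of such nowcasting can be sketched with a single global motion vector estimated by phase correlation between two reflectivity frames. DARTS actually fits a spatially varying motion field in the spectral domain, so this single-vector version is only a minimal stand-in.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Global motion estimate via phase correlation between two
    reflectivity frames (a single-vector stand-in for a full
    spatially varying motion field)."""
    F = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak indices into signed shifts
    dy = dy - prev.shape[0] if dy > prev.shape[0] // 2 else dy
    dx = dx - prev.shape[1] if dx > prev.shape[1] // 2 else dx
    return -dy, -dx

def nowcast(curr, motion, steps=1):
    """Advect the latest frame along the estimated motion vector
    for `steps` time intervals (periodic boundaries, for brevity)."""
    dy, dx = motion
    return np.roll(curr, (steps * dy, steps * dx), axis=(0, 1))
```

Extrapolating the observed field along its recent motion is exactly what makes such nowcasts useful on the 1-10 min horizons quoted above, and also what limits them: growth and decay of the storm pattern are not modeled.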

    Sixth Biennial Report : August 2001 - May 2003


    Handbook of Optical and Laser Scanning

    From its initial publication as Laser Beam Scanning in 1985 to Handbook of Optical and Laser Scanning, now in its second edition, this reference has kept professionals and students at the forefront of optical scanning technology. Carefully and meticulously updated in each iteration, the book continues to be the most comprehensive scanning resource on the market, examining the breadth and depth of subtopics in the field from a variety of perspectives. The Second Edition covers technologies such as piezoelectric devices, and applications of laser scanning such as Ladar (laser radar), underwater scanning, and laser scanning in CTP. As laser costs come down and power and availability increase, the potential applications for laser scanning continue to grow. Bringing together the knowledge and experience of 26 authors from England, Japan, and the United States, the book provides an excellent resource for understanding the principles of laser scanning. It illustrates the significance of scanning in society today and helps the user get started in developing system concepts using scanning. It can be used as an introduction to the field and as a reference for anyone involved in any aspect of optical and laser beam scanning.

    Connected Attribute Filtering Based on Contour Smoothness
