A GPU-accelerated real-time NLMeans algorithm for denoising color video sequences
Abstract. The NLMeans filter, originally proposed by Buades et al., is a very popular filter for the removal of white Gaussian noise, due to its simplicity and excellent performance. The strength of this filter lies in exploiting the repetitive character of structures in images. However, to take full advantage of this repetitiveness, a computationally intensive search for similar candidate blocks is indispensable. In previous work, we presented a number of algorithmic acceleration techniques for the NLMeans filter for still grayscale images. In this paper, we go one step further and incorporate both temporal and color information into the NLMeans algorithm in order to restore video sequences. Starting from our algorithmic acceleration techniques, we investigate how the NLMeans algorithm can be easily mapped onto recent parallel computing architectures. In particular, we consider the graphics processing unit (GPU), which is available in most recent computers. Our developments lead to a high-quality denoising filter that can process DVD-resolution video sequences in real time on a mid-range GPU.
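As a rough illustration of the patch-weighting idea at the core of NLMeans, here is a minimal sketch that denoises a single pixel of a grayscale frame; the parameter names and values (patch_radius, search_radius, h) are illustrative and not taken from the paper, which additionally handles color, time, and GPU parallelism.

```python
import numpy as np

def nlmeans_pixel(img, y, x, patch_radius=3, search_radius=7, h=10.0):
    """Denoise one pixel by a weighted average over similar patches.

    The caller must keep (y, x) far enough from the borders that every
    candidate patch lies inside the image.
    """
    p = patch_radius
    ref = img[y - p:y + p + 1, x - p:x + p + 1]
    weights, values = [], []
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            cy, cx = y + dy, x + dx
            cand = img[cy - p:cy + p + 1, cx - p:cx + p + 1]
            d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
            w = np.exp(-d2 / (h * h))         # similar patches get high weight
            weights.append(w)
            values.append(img[cy, cx])
    weights = np.asarray(weights)
    return float(np.dot(weights, values) / weights.sum())

img = np.random.default_rng(0).normal(128.0, 20.0, (64, 64))
print(nlmeans_pixel(img, 32, 32))
```

Each candidate pixel contributes in proportion to how well the patch around it matches the patch around the target pixel; this exhaustive block search is the dominant cost that the acceleration techniques above address.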
Towards unsupervised classification of macromolecular complexes in cryo-electron tomography: Challenges and opportunities
Background and Objectives: Cryo-electron tomography visualizes native cells at nanometer resolution, but analysis is challenged by noise and artifacts. Recently, supervised deep learning methods have been applied to decipher the 3D spatial distribution of macromolecules. However, in order to discover unknown objects, unsupervised classification techniques are necessary. In this paper, we provide an overview of unsupervised deep learning techniques, discuss the challenges of analyzing cryo-ET data, and provide a proof of concept on real data. Methods: We propose a weakly supervised subtomogram classification method based on transfer learning. We use a deep neural network to learn a clustering-friendly representation able to capture 3D shapes in the presence of noise and artifacts. This representation is learned here from a synthetic data set. Results: We show that when applying k-means clustering to such a learning-based representation, it becomes possible to satisfactorily classify real subtomograms according to structural similarity. It is worth noting that no manual annotation is used for performing classification. Conclusions: We describe the advantages and limitations of our proof of concept and raise several perspectives for improving classification performance.
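A minimal sketch of the classification step the abstract describes: k-means applied to learned feature vectors. The feature matrix below is random and stands in for the output of the paper's pretrained 3D encoder, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_subtomograms(features, n_classes=4):
    """Cluster subtomogram embeddings by structural similarity (no labels used)."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    return km.fit_predict(features)

# One row per subtomogram, e.g. embeddings from a pretrained 3D encoder
# (random placeholders here).
features = np.random.default_rng(0).normal(size=(100, 128))
labels = classify_subtomograms(features)
```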
A statistical model-based approach to unsupervised texture segmentation
The general problem of unsupervised texture segmentation remains a largely unsolved issue in image analysis. Many studies have shown that statistical model-based texture segmentation algorithms yield good results provided that the model parameters and the number of regions are known a priori. In this paper, the problem of determining the number of regions is addressed. The segmentation algorithm relies on the analysis of second- and higher-order spatial statistics of the original images. The segmentation map is represented using a Markov Random Field model, and a Bayesian estimate of this map is computed using a deterministic relaxation algorithm. The segmentation algorithm requires the tuning of only one parameter. Results on hand-drawn images of natural textures and real textured images show the capability of the model to yield relevant segmentations when the number of regions and the texture classes are not known a priori.
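A minimal sketch of the deterministic relaxation step (iterated conditional modes) for an MRF label map, under simplifying assumptions: the data term is a per-class squared error against a known class mean, whereas the paper works with second- and higher-order spatial statistics.

```python
import numpy as np

def icm(image, labels, class_means, beta=1.0, n_iter=5):
    """Iterated conditional modes: greedily update each label given its neighbours."""
    H, W = labels.shape
    for _ in range(n_iter):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                best_k, best_e = labels[y, x], np.inf
                for k in range(len(class_means)):
                    data = (image[y, x] - class_means[k]) ** 2
                    # Potts prior: penalise disagreement with the 4-neighbourhood
                    nbrs = (labels[y - 1, x], labels[y + 1, x],
                            labels[y, x - 1], labels[y, x + 1])
                    e = data + beta * sum(k != n for n in nbrs)
                    if e < best_e:
                        best_e, best_k = e, k
                labels[y, x] = best_k
    return labels

rng = np.random.default_rng(0)
truth = (rng.random((32, 32)) > 0.5).astype(int)
image = truth + rng.normal(scale=0.4, size=(32, 32))
segmentation = icm(image, (image > 0.5).astype(int), class_means=[0.0, 1.0])
```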
Network tomography applied to the simulation and analysis of traffic in fluorescence microscopy image sequences
Tagging with the GFP ("Green Fluorescent Protein") and fluorescence video-microscopy are investigation tools for observing molecular dynamics and interactions in living cells, at both the microscopic and the nanoscopic scale. Consequently, it is imperative to develop new image analysis techniques capable of quantifying the dynamics of the biological processes observed in these sequences. This motivates our research effort, which consists in developing new methods for extracting information from nD data. In traffic analysis, object tracking based on conventional techniques can prove very complex, or even impossible, especially when a large number of small, coalescing objects are interacting. Nevertheless, estimating the complete trajectories of all objects is not always necessary for understanding and measuring cellular activity. Indeed, estimating the "origin" and "destination" regions of these objects can be more relevant. In this article, we propose an original approach for inferring the "origin" and "destination" zones from partial information about the traffic. Membrane traffic is thus likened to road traffic, which makes it possible to exploit recent advances in Network Tomography (NT), well known in the communication-networks community, to study vesicular traffic. This approach is validated on artificial image sequences relating to the Rab6 protein, a GTPase involved in the regulation of intracellular membrane traffic.
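A minimal sketch of the network-tomography inference the abstract refers to: origin-destination flows x are recovered from aggregate link counts y = Ax, where the routing matrix A records which links each route uses. The matrix and counts below are toy values, not the paper's model of vesicular traffic.

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])      # link-by-route incidence (routing matrix)
x_true = np.array([3.0, 5.0, 2.0])  # hidden origin-destination flows
y = A @ x_true                      # observed aggregate traffic per link

x_hat, _ = nnls(A, y)               # nonnegative least-squares estimate
print(x_hat)                        # approximately [3. 5. 2.]
```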
Nonlocal similarity image filtering
Abstract. We exploit the recurrence of structures at different locations, orientations, and scales in an image to perform denoising. While previous methods based on "nonlocal filtering" identify corresponding patches only up to translations, we consider more general similarity transformations. Due to the additional computational burden, we break the problem down into two steps: first, we extract similarity-invariant descriptors at each pixel location; second, we search for similar patches by matching descriptors. The descriptors used are inspired by the scale-invariant feature transform (SIFT), whereas the similarity search is solved via the minimization of a cost function adapted from local denoising methods. Our method compares favorably with existing denoising algorithms as tested on several datasets.
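A minimal sketch of the two-step scheme: a similarity-robust descriptor per patch, then matching by descriptor distance. A crude gradient-orientation histogram stands in for the paper's SIFT-inspired descriptor.

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Crude rotation-robust descriptor: histogram of gradient orientations."""
    gy, gx = np.gradient(patch.astype(np.float64))
    ang = np.arctan2(gy, gx).ravel()
    mag = np.hypot(gy, gx).ravel()
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
    hist = np.roll(hist, -int(np.argmax(hist)))  # align dominant orientation
    return hist / (np.linalg.norm(hist) + 1e-12)

def most_similar(query, candidates):
    """Index of the candidate patch whose descriptor is closest to the query's."""
    q = orientation_histogram(query)
    dists = [np.linalg.norm(q - orientation_histogram(c)) for c in candidates]
    return int(np.argmin(dists))

patches = [np.random.default_rng(i).normal(size=(16, 16)) for i in range(10)]
print(most_similar(patches[0], patches))  # 0: a patch best matches itself
```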
Improving the TanDEM-X Digital Elevation Model for flood modelling using flood extents from Synthetic Aperture Radar images
The topography of many floodplains in the developed world has now been surveyed with high-resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high-resolution TanDEM-X WorldDEM.
In a parallel development, increasing use is now being made of flood extents derived from high-resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies.
The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this reduces the height errors along a waterline, the waterline is only a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights.
The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with standard deviation 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
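A minimal sketch of the waterline-averaging step described above: DEM heights along a short section of the flood boundary are treated as samples with a common mean water level, so each height is replaced by the average of its section. The window length is illustrative; the paper's choice of section size is not reproduced here.

```python
import numpy as np

def correct_waterline_heights(heights, window=5):
    """Replace each waterline DEM height with the mean over its local section."""
    heights = np.asarray(heights, dtype=float)
    half = window // 2
    padded = np.pad(heights, half, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# Heights sampled along a waterline: a common water level plus DEM noise.
noisy = 10.0 + np.random.default_rng(1).normal(scale=2.0, size=50)
smoothed = correct_waterline_heights(noisy)   # reduced error along the line
```

Averaging n samples with roughly independent errors shrinks the random height error by about a factor of sqrt(n), which is the mechanism behind the error reductions reported above.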
Automatic Hotspots Detection for Intracellular Calcium Analysis in Fluorescence Microscopic Videos
In recent years, live-cell imaging techniques and their software applications have become powerful tools to investigate complex biological mechanisms such as calcium signalling. In this paper, we propose an automated framework to detect areas inside cells that show changes in their calcium concentration, i.e. the regions of interest or hotspots, based on videos taken after loading living mouse cardiomyocytes with fluorescent calcium reporter dyes. The proposed system allows an objective and efficient analysis through the following four key stages: (1) pre-processing to enhance video quality, (2) first-level segmentation to detect candidate hotspots based on adaptive thresholding at the frame level, (3) second-level segmentation to fuse and identify the best hotspots from the entire video by proposing the concept of calcium fluorescence hit-ratio, and (4) extraction of the changes of calcium fluorescence over time per hotspot. From the extracted signals, different measurements are calculated, such as maximum peak amplitude, area under the curve, peak frequency, and inter-spike interval of calcium changes. The system was tested using calcium imaging data collected from heart muscle cells. The paper argues that the proposed automated framework offers biologists a tool to speed up the processing time and mitigate the consequences of inter- and intra-observer variability.
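A minimal sketch of stages (2) and (4) of the pipeline, assuming scikit-image: per-frame adaptive thresholding proposes candidate hotspots, and a mean fluorescence trace is extracted per detected region. The hit-ratio fusion of stage (3) and the paper's exact operators are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_local
from skimage.measure import label, regionprops

def hotspot_traces(video, block_size=51):
    """video: (T, H, W) fluorescence frames -> one calcium trace per hotspot."""
    mask = video[0] > threshold_local(video[0], block_size)  # candidate hotspots
    traces = []
    for region in regionprops(label(mask)):
        ys, xs = region.coords[:, 0], region.coords[:, 1]
        traces.append(video[:, ys, xs].mean(axis=1))  # fluorescence over time
    return traces

video = np.random.default_rng(0).random((20, 64, 64))
traces = hotspot_traces(video)
```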
DeadEasy Mito-Glia: Automatic Counting of Mitotic Cells and Glial Cells in Drosophila
Cell number changes during normal development and in disease (e.g., neurodegeneration, cancer). Many genes affect cell number, thus functional genetic analysis frequently requires analysis of cell number alterations upon loss-of-function mutations or in gain-of-function experiments. Drosophila is a most powerful model organism for investigating the function of genes involved in development or disease in vivo. Image processing and pattern recognition techniques can be used to extract information from microscopy images and automatically quantify distinct cellular features, but these methods are still not widespread in this model organism. Thus, cellular quantification is often carried out manually, which is laborious, tedious, error-prone, or humanly unfeasible. Here, we present DeadEasy Mito-Glia, an image processing method to automatically count the number of mitotic cells labelled with anti-phospho-histone H3 and of glial cells labelled with anti-Repo in Drosophila embryos. This programme belongs to the DeadEasy suite, for which we have previously developed versions to count apoptotic cells and neuronal nuclei. Having separate programmes is paramount for accuracy. DeadEasy Mito-Glia is very easy to use, fast, objective and very accurate when counting dividing cells and glial cells labelled with a nuclear marker. Although this method has been validated for Drosophila embryos, we provide an interactive window for biologists to easily extend its application to other nuclear markers and other sample types. DeadEasy Mito-Glia is freely available as an ImageJ plug-in; it increases the repertoire of tools for in vivo genetic analysis and will be of interest to a broad community of developmental, cancer and neuro-biologists.
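A minimal sketch of automated nucleus counting of the kind DeadEasy Mito-Glia performs, written as a generic scikit-image pipeline rather than the plug-in's actual ImageJ algorithm: threshold the nuclear-marker channel, discard small debris, and count connected components.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects

def count_nuclei(channel, min_size=30):
    """channel: 2D image of a nuclear marker (e.g. anti-Repo staining)."""
    mask = channel > threshold_otsu(channel)      # global intensity threshold
    mask = remove_small_objects(mask, min_size)   # drop specks below min_size px
    return int(label(mask).max())                 # number of connected components

channel = np.random.default_rng(0).random((128, 128))
print(count_nuclei(channel))
```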
- …