
    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph, and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image and video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
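    As a rough illustration of the pipeline the article surveys (build a pixel graph, transform into the Laplacian eigenbasis, filter in the graph spectral domain), here is a minimal numpy sketch. The 4-connected topology, Gaussian intensity weights, and hard spectral cutoff are illustrative choices, not the specific constructions reviewed in the article.

```python
import numpy as np

def patch_graph_laplacian(patch, sigma=0.1):
    """Combinatorial Laplacian of a 4-connected pixel graph whose edge
    weights decay with intensity difference (a common GSP choice)."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):      # right and down neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    j = yy * w + xx
                    wgt = np.exp(-(patch[y, x] - patch[yy, xx]) ** 2
                                 / (2 * sigma ** 2))
                    W[i, j] = W[j, i] = wgt
    return np.diag(W.sum(axis=1)) - W

patch = np.random.rand(8, 8)                      # toy 8x8 image patch
L = patch_graph_laplacian(patch)
evals, evecs = np.linalg.eigh(L)                  # graph Fourier basis
coeffs = evecs.T @ patch.ravel()                  # graph Fourier transform
coeffs[16:] = 0                                   # keep 16 lowest graph frequencies
smoothed = (evecs @ coeffs).reshape(patch.shape)  # graph low-pass filtered patch
```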

    Space-variant picture coding

    Space-variant picture coding techniques exploit the strong spatial non-uniformity of the human visual system in order to increase coding efficiency in terms of perceived quality per bit. This thesis extends space-variant coding research in two directions. The first of these directions is foveated coding. Past foveated coding research has been dominated by the single-viewer, gaze-contingent scenario. However, for research into the multi-viewer and probability-based scenarios, this thesis presents a missing piece: an algorithm for computing an additive multi-viewer sensitivity function based on an established eye resolution model, and, from this, a blur map that is optimal in the sense of discarding frequencies in least-noticeable-first order. Furthermore, for the application of a blur map, a novel algorithm is presented for the efficient computation of high-accuracy, smoothly space-variant Gaussian blurring, using a specialised filter bank which approximates perfect space-variant Gaussian blurring to arbitrarily high accuracy and at greatly reduced cost compared to the brute-force approach of employing a separate low-pass filter at each image location. The second direction is that of artificially increasing the depth-of-field of an image, an idea borrowed from photography with the advantage of allowing an image to be reduced in bitrate while retaining or increasing overall aesthetic quality. Two synthetic depth-of-field algorithms are presented herein, with the desirable properties of aiming to mimic occlusion effects as occur in natural blurring, and of handling any number of blurring and occlusion levels with the same level of computational complexity. The merits of this coding approach have been investigated by subjective experiments comparing it with single-viewer foveated image coding. The results found the depth-based preblurring to generally be significantly preferable to the same level of foveation blurring.
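    The filter-bank idea can be sketched as follows: blur the image once per fixed sigma in a small bank, then at each pixel interpolate between the two bank outputs that bracket the sigma requested by the blur map. This is a simplified stand-in for the thesis's specialised filter bank (which reaches arbitrarily high accuracy); the function name, bank sigmas, and the crude radial blur map are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def space_variant_blur(img, blur_map, sigmas=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """Approximate smoothly space-variant Gaussian blurring by blending,
    per pixel, the two fixed-sigma bank outputs that bracket blur_map."""
    bank = np.stack([gaussian_filter(img, s) for s in sigmas])
    sig = np.asarray(sigmas)
    idx = np.clip(np.searchsorted(sig, blur_map), 1, len(sig) - 1)
    lo, hi = sig[idx - 1], sig[idx]
    t = np.clip((blur_map - lo) / (hi - lo), 0.0, 1.0)
    rows, cols = np.indices(img.shape)
    return (1 - t) * bank[idx - 1, rows, cols] + t * bank[idx, rows, cols]

img = np.random.rand(64, 64)
yy, xx = np.indices(img.shape)
r = np.hypot(yy - 32, xx - 32)                  # distance from fixation point
out = space_variant_blur(img, 0.5 + r / 10.0)   # blur grows with eccentricity
```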

    Novel Ultrasound Elastography Imaging System for Breast Cancer Assessment

    Most conventional methods of breast cancer screening such as X-ray, ultrasound (US) and MRI have some issues, ranging from weaknesses in tumour detection or classification to high cost or excessive time of image acquisition and reconstruction. Elastography is a non-invasive technique to visualize suspicious areas in soft tissues such as the breast, prostate and myocardium using tissue stiffness as the image contrast mechanism. In this study, a breast elastography system based on US imaging is proposed. This technique is fast and expected to be cost-effective and more sensitive and specific than conventional US imaging. Unlike current elastography techniques that image relative elastic modulus, this technique is capable of imaging absolute Young's modulus (YM). In this technique, tissue displacements and the surface forces used to mechanically stimulate the tissue are acquired and used as input to reconstruct the tissue YM distribution. For displacement acquisition, two techniques were used in this research: 1) a modified optical flow technique, which estimates the displacement of each node from US pre- and post-compression images, and 2) a radio frequency (RF) signal cross-correlation technique. In the former, displacements are calculated in two dimensions, whereas in the latter, displacements are calculated in the US axial direction only. To improve the quality of elastography images, surface force data were used to calculate the stress distribution throughout the organ of interest by using an analytical model and a statistical numerical model. For force data acquisition, a system was developed in which load cells measure forces on the surface of the breast. These forces are input into the stress distribution models to estimate the tissue stress distribution. By combining the stress field with the strain field calculated from the acquired displacements using Hooke's law, the YM can be reconstructed efficiently. To validate the proposed technique, numerical and tissue-mimicking phantom studies were conducted. For the numerical phantom study, a 3D breast-shaped phantom was created with synthetic US pre- and post-compression images, where the results showed the feasibility of reconstructing the absolute value of the YM of tumour and background. In the tissue-mimicking study, a block-shaped gelatine-agar phantom was constructed with a cylindrical inclusion. Results obtained from this study also indicated reasonably accurate reconstruction of the YM. The quality of the obtained elasticity images shows that image quality is improved by incorporating the adapted stress calculation techniques. Furthermore, the proposed elastography system is reasonably fast and can potentially be used in real-time clinical applications.
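    As a toy illustration of the final reconstruction step, the sketch below applies Hooke's law pointwise under a simplifying uniaxial assumption: the axial strain is the gradient of the tracked displacement field, and E = stress / strain. The actual system works with full stress and strain fields from the analytical and statistical models; all names and values here are illustrative.

```python
import numpy as np

def young_modulus_map(stress, axial_displacement, dz):
    """Pointwise Hooke's law: E = sigma / epsilon, with the axial strain
    taken as the gradient of the displacement field along the
    compression axis (uniaxial simplification)."""
    strain = np.gradient(axial_displacement, dz, axis=0)
    return stress / (np.abs(strain) + 1e-9)     # guard against division by zero

stress = np.full((32, 32), 2.0e3)               # toy uniform axial stress, Pa
u = np.cumsum(np.full((32, 32), 1e-4), axis=0)  # toy compression displacements, m
E = young_modulus_map(stress, u, dz=1e-3)       # strain 0.1 -> E ~ 2e4 Pa
```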

    Image Restoration

    This book represents a sample of recent contributions of researchers all around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. These devices give rise to challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions closely related to the world we interact with.

    Measuring cellular traction forces on non-planar substrates

    Animal cells use traction forces to sense the mechanics and geometry of their environment. Measuring these traction forces requires a workflow combining cell experiments, image processing and force reconstruction based on elasticity theory. Such procedures have been established before mainly for planar substrates, in which case one can use the Green's function formalism. Here we introduce a workflow to measure traction forces of cardiac myofibroblasts on non-planar elastic substrates. Soft elastic substrates with a wave-like topology were micromolded from polydimethylsiloxane (PDMS) and fluorescent marker beads were distributed homogeneously in the substrate. Using feature-vector-based tracking of these marker beads, we first constructed a hexahedral mesh for the substrate. We then solved the direct elastic boundary volume problem on this mesh using the finite element method (FEM). Using data simulations, we show that the traction forces can be reconstructed from the substrate deformations by solving the corresponding inverse problem with an L1-norm for the residue and an L2-norm for 0th-order Tikhonov regularization. Applying this procedure to the experimental data, we find that cardiac myofibroblast cells tend to align both their shapes and their forces with the long axis of the deformable wavy substrate.
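    On synthetic data, the inverse step can be sketched as the optimization min_f ||G f - u||_1 + lam * ||f||_2^2, i.e. an L1 data residue with 0th-order Tikhonov regularization. In the sketch below, G is a random toy matrix standing in for the FEM forward operator that maps nodal forces to bead displacements, and the derivative-free solver is a convenience choice, not the paper's actual method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_disp, n_force = 60, 20
G = rng.normal(size=(n_disp, n_force))           # toy forward operator
f_true = np.zeros(n_force)
f_true[rng.choice(n_force, 5, replace=False)] = rng.normal(size=5)
u = G @ f_true + 0.05 * rng.normal(size=n_disp)  # noisy displacement data

lam = 0.1                                        # regularization weight

def objective(f):
    # L1 residue (robust to outlier displacements) + L2 Tikhonov penalty.
    return np.abs(G @ f - u).sum() + lam * (f ** 2).sum()

res = minimize(objective, np.zeros(n_force), method="Powell")
f_hat = res.x                                    # reconstructed nodal forces
```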

    Robust perceptual organization techniques for analysis of color images

    This thesis focuses on the development of new robust image analysis techniques more closely related to the way the human visual system behaves. One of the pillars of the thesis is tensor voting, a robust perceptual organization technique that propagates and aggregates information encoded by means of tensors through a convolution-like process. Its robustness and adaptability have been key reasons for using tensor voting in this thesis, and these two properties are verified by applying it to three applications where it had not been applied so far: image structure estimation, edge detection, and segmentation of images acquired through stereo vision. The most important drawback of tensor voting is that its usual implementations are highly time-consuming; in this line, this thesis proposes two new efficient implementations of tensor voting, both derived from an in-depth analysis of the technique. Despite its adaptability, this thesis shows that the original formulation of tensor voting (hereafter, classical tensor voting) is not adequate for some applications, since the hypotheses on which it is based do not suit all of them. This is particularly true for color image denoising. Thus, this thesis shows that, more than a method, tensor voting is a methodology in which the encoding and voting process can be tailored to every specific application while maintaining the tensor voting spirit. Following this reasoning, this thesis proposes a unified framework for both image denoising and robust edge detection. This framework is an extension of classical tensor voting in which both color and edginess (the likelihood of finding an edge at every pixel of the image) are encoded through tensors, and where the voting process takes into account a set of plausible perceptual criteria related to the way the human visual system processes visual information. Recent advances in the perception of color have been essential for designing this voting process. The new approach has proven effective, yielding excellent results for both applications. In particular, the new method applied to image denoising performs better than state-of-the-art methods on real noise, which makes it more adequate for real applications, where an image denoiser is indeed required. In addition, the method applied to edge detection yields more robust results than state-of-the-art techniques and has competitive performance in recall, discriminability, precision, and false alarm rejection. Moreover, this thesis shows how the results of this new framework can be combined with other techniques to tackle the problem of robust color image segmentation. The tensors obtained by applying the new framework are used to classify pixels as likely homogeneous or likely inhomogeneous, and those pixels are then segmented through a variation of an efficient graph-based image segmentation algorithm. Experiments show that the proposed segmentation algorithm yields better scores in three of the five applied evaluation metrics when compared to state-of-the-art techniques, with a competitive computational cost. This thesis also proposes new evaluation techniques in the scope of image processing. First, two new metrics are proposed in the field of image denoising: one to measure how well an algorithm preserves edges, and another to measure how well it avoids introducing undesirable artifacts. Second, a new methodology for assessing edge detectors that avoids possible bias introduced by post-processing is proposed; it consists of five new metrics for assessing recall, discriminability, precision, false alarm rejection and robustness. Finally, two new non-parametric metrics are proposed for estimating the degree of over- and undersegmentation produced by image segmentation algorithms.
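    To make the encode-vote-aggregate pipeline concrete, below is a minimal numpy sketch of classical 2D stick tensor voting: tokens are encoded as stick tensors n n^T, votes are propagated with a straight-line Gaussian decay, and the accumulated tensors are decomposed into stick and ball saliencies. Real voting fields bend votes along circular arcs, and the thesis further tailors the encoding and voting per application; everything here is an illustrative simplification.

```python
import numpy as np

def stick_tensor(normal):
    """Encode an oriented token as the rank-1 tensor n n^T."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    return np.outer(n, n)

def tensor_vote(points, normals, sigma=5.0):
    """Each token casts its stick tensor to every other token,
    attenuated by a Gaussian of the distance (no arc bending)."""
    pts = np.asarray(points, float)
    acc = np.zeros((len(pts), 2, 2))
    for i, (p, nrm) in enumerate(zip(pts, normals)):
        T = stick_tensor(nrm)
        for j, q in enumerate(pts):
            if i != j:
                acc[j] += np.exp(-np.sum((q - p) ** 2) / sigma ** 2) * T
    return acc

# Tokens on a horizontal line, all normals pointing up.
points = [(float(x), 0.0) for x in range(10)]
normals = [(0.0, 1.0)] * 10
for T in tensor_vote(points, normals):
    l1, l2 = np.linalg.eigvalsh(T)[::-1]   # descending eigenvalues
    stick, ball = l1 - l2, l2              # high stick => likely on a curve
```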

    On Fresnelets, interference fringes, and digital holography

    In this thesis, we describe new approaches and methods for reconstructing complex-valued wave fields from digital holograms. We focus on Fresnel holograms recorded in an off-axis geometry, for which operational real-time acquisition setups readily exist. The three main research directions presented are the following. First, we derive the necessary tools to port methods and concepts of wavelet-based approaches to the field of digital holography. This is motivated by the flexibility, the robustness, and the unifying view that such multiresolution procedures have brought to many applications in image processing. In particular, we put emphasis on space-frequency processing and sparse signal representations. Second, we propose to decouple the demodulation from the propagation problem, which are both inherent to digital Fresnel holography. To this end, we derive a method for retrieving the amplitude and phase of the object wave through a local analysis of the hologram's interference fringes. Third, since digital holography reconstruction algorithms involve a number of parametric models, we propose methods for automatically adjusting the corresponding parameters. We start by investigating the Fresnel transform, which plays a central role in both the modeling of the acquisition procedure and the reconstruction of complex wave fields. The study of the properties that are central to wavelet and multiresolution analysis leads us to derive Fresnelets, a new family of wavelet-like bases. Fresnelets permit the analysis of holograms with a good localization in space and frequency, in a way similar to wavelets for images. Since the relevant information in a Fresnel off-axis hologram may be separated both in space and frequency, we propose an approach for selectively retrieving the information in the Fresnelet domain. We show that in certain situations, this approach is superior to others that exclusively rely on the separation in space or frequency. We then derive a least-squares method for the estimation of the object wave's amplitude and phase. The approach, which is reminiscent of phase-shifting techniques, is sufficiently general to be applied in a wide variety of situations, including those dictated by the use of microscopy objectives. Since it is difficult to determine the reconstruction distance manually, we propose an automatic procedure. We take advantage of our separate treatment of the phase retrieval and propagation problems to come up with an algorithm that maximizes a sharpness metric related to the sparsity of the signal's expansion in distance-dependent Fresnelet bases. Based on a simulation study, we suggest a number of guidelines for deciding which algorithm to apply to a given problem. We compare existing and the newly proposed solutions in a wide variety of situations. Our final conclusion is that the proposed methods result in flexible algorithms that are competitive with preexisting ones and superior to them in many cases. Overall, they may be applied in a wide range of experimental situations at a low computational cost.
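    The reconstruction-distance search can be made concrete with a small numpy sketch: propagate the hologram over a range of candidate distances with the paraxial Fresnel transfer function, and keep the distance whose reconstruction is sparsest. Here the l1 norm of the amplitude stands in for the Fresnelet-domain sparsity metric, and the flat random hologram is a placeholder for real data.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Paraxial Fresnel propagation by the transfer-function method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
hologram = rng.normal(size=(256, 256)).astype(complex)  # placeholder data
zs = np.linspace(0.01, 0.10, 10)                        # candidate distances, m
# Energy is conserved under propagation, so a smaller l1 amplitude norm
# means the energy is concentrated in fewer pixels (sharper focus).
sharpness = [np.abs(fresnel_propagate(hologram, 633e-9, z, 1e-5)).sum()
             for z in zs]
z_best = zs[int(np.argmin(sharpness))]
```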

    Semantic Segmentation Network Stacking with Genetic Programming

    Bakurov, I., Buzzelli, M., Schettini, R., Castelli, M., & Vanneschi, L. (2023). Semantic segmentation network stacking with genetic programming. Genetic Programming and Evolvable Machines, 24(2, Special Issue on Highlights of Genetic Programming 2022 Events), Article 15, 1-37. https://doi.org/10.1007/s10710-023-09464-0

    Open access funding provided by FCT|FCCN (b-on). This work was supported by national funds through the FCT (Fundação para a Ciência e a Tecnologia) by the projects GADgET (DSAIPA/DS/0022/2018), AICE (DSAIPA/DS/0113/2019), UIDB/04152/2020 - Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS, and by the grant SFRH/BD/137277/2018.

    Semantic segmentation consists of classifying each pixel of an image and constitutes an essential step towards scene recognition and understanding. Deep convolutional encoder–decoder neural networks now constitute the state-of-the-art methods in the field of semantic segmentation. The segmentation of street scenes for automotive applications constitutes an important application field of such networks and introduces a set of strict requirements. Since the models need to be executed on self-driving vehicles to make fast decisions in response to a constantly changing environment, they are expected not only to operate reliably but also to process the input images rapidly. In this paper, we explore genetic programming (GP) as a meta-model that combines four different efficiency-oriented networks for the analysis of urban scenes. Notably, we present and examine two approaches. In the first approach, we represent solutions as GP trees that combine networks' outputs such that each output class's prediction is obtained through the same meta-model. In the second approach, we propose representing solutions as lists of GP trees, each designed to provide a unique meta-model for a given target class. The main objective is to develop efficient and accurate combination models that can be easily interpreted, therefore allowing us to gather hints on how to improve the existing networks. The experiments performed on the Cityscapes dataset of urban scene images with semantic pixel-wise annotations confirm the effectiveness of the proposed approach. Specifically, our best-performing models improve the systems' generalization ability by approximately 5% compared to traditional ensembles and by 30% compared to the least-performing state-of-the-art CNN, and show competitive results with respect to state-of-the-art ensembles. Additionally, they are small in size, allow interpretability, and use fewer features due to GP's automatic feature selection.
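    As a sketch of the first approach (one meta-model shared by all classes), the snippet below evaluates a hand-written GP tree over the stacked class-probability maps of four hypothetical networks. In the paper such trees are evolved by GP rather than fixed, and the function set used here is an illustrative guess.

```python
import numpy as np

def evaluate(tree, maps):
    """Evaluate a GP tree over per-network class-probability maps.
    A tree is either an int (index of a network's output) or a tuple
    (op, left, right) with op in {'add', 'mul', 'max'}."""
    if isinstance(tree, int):
        return maps[tree]
    op, left, right = tree
    a, b = evaluate(left, maps), evaluate(right, maps)
    if op == "add":
        return a + b
    if op == "mul":
        return a * b
    return np.maximum(a, b)

rng = np.random.default_rng(0)
n_nets, n_classes, H, W = 4, 19, 32, 64          # 19 classes, as in Cityscapes
logits = rng.normal(size=(n_nets, n_classes, H, W))
maps = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

tree = ("max", ("mul", 0, 1), ("add", 2, 3))     # toy meta-model
ensemble = evaluate(tree, maps)                  # combined (classes, H, W) map
prediction = ensemble.argmax(axis=0)             # per-pixel class labels
```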