13 research outputs found

    Joint Color-Spatial-Directional clustering and Region Merging (JCSD-RM) for unsupervised RGB-D image segmentation

    No full text
    Recent advances in depth imaging sensors provide easy access to synchronized depth and color data, called RGB-D images. In this paper, we propose an unsupervised method for indoor RGB-D image segmentation and analysis. We consider a statistical image generation model based on the color and geometry of the scene. Our method consists of a joint color-spatial-directional clustering step followed by a statistical planar region merging step. We evaluate our method on the NYU depth database and compare it with existing unsupervised RGB-D segmentation methods. Results show that it is comparable with state-of-the-art methods while requiring less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner.
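    The joint color-spatial-directional clustering described above can be sketched as follows. This is an illustrative sketch only: the feature weights, function names, and the use of plain k-means are assumptions standing in for the paper's actual statistical clustering model.

```python
# Hypothetical sketch: each pixel is described by weighted color (RGB),
# spatial (x, y), and directional (surface-normal) components, then
# clustered jointly. Weights w_c, w_s, w_d are illustrative assumptions.

def make_feature(color, pos, normal, w_c=1.0, w_s=0.5, w_d=2.0):
    """Concatenate weighted color, spatial and directional components."""
    return ([w_c * c for c in color]
            + [w_s * p for p in pos]
            + [w_d * n for n in normal])

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    """Minimal k-means, standing in for the joint clustering step."""
    # Spread the initial centers across the input.
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centers[j]))
            groups[i].append(p)
        # Recompute each center as the mean of its group (keep the old
        # center if a group went empty).
        centers = [[sum(c) / len(g) for c in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

    Pixels that agree in color, position, and normal direction end up in the same cluster, which is the intuition behind fusing the three cues in one feature space.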

    Joint segmentation of color and depth data based on splitting and merging driven by surface fitting

    Get PDF
    This paper proposes a segmentation scheme based on the joint use of color and depth data together with a 3D surface estimation scheme. First, a set of multi-dimensional vectors is built from color, geometry and surface orientation information. Normalized-cuts spectral clustering is then applied to recursively split the scene in two parts, thus obtaining an over-segmentation. This procedure is followed by a recursive merging stage where close segments belonging to the same object are joined together. At each step of both procedures a NURBS model is fitted on the computed segments, and the accuracy of the fitting is used as a measure of the plausibility that a segment represents a single surface or object. By comparing the accuracy with that of the previous step, it is possible to determine whether each splitting or merging operation leads to a better scene representation and consequently whether to perform it. Experimental results show that the proposed method provides an accurate and reliable segmentation.
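    The fitting-driven merge decision described above can be sketched as follows. Note this is a simplified stand-in: a constant-depth fit replaces the paper's NURBS model, and the function names and tolerance are assumptions.

```python
# Illustrative sketch of the split-and-merge control logic: fit a
# surface to each segment, and accept a merge only if the fitting
# residual does not get noticeably worse. A constant-depth fit is a
# cheap stand-in for the NURBS model used in the actual scheme.

def fit_error(points):
    """RMS deviation of the segment's depth values from their mean,
    acting as a proxy for the surface fitting accuracy."""
    zs = [p[2] for p in points]
    mean = sum(zs) / len(zs)
    return (sum((z - mean) ** 2 for z in zs) / len(zs)) ** 0.5

def should_merge(seg_a, seg_b, tol=1.05):
    """Merge two segments only if the joint fit is not much worse than
    the separate fits, mirroring the accept/reject test above."""
    before = max(fit_error(seg_a), fit_error(seg_b))
    after = fit_error(seg_a + seg_b)
    # Guard against a zero 'before' error on perfectly flat segments.
    return after <= tol * max(before, 1e-9)
```

    Two segments lying on the same surface yield a joint fit about as good as the separate fits, so the merge is accepted; segments at different depths inflate the joint residual and the merge is rejected.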

    Segmentation and semantic labelling of RGBD data with convolutional neural networks and surface fitting

    Get PDF
    We present an approach for segmentation and semantic labelling of RGBD data that jointly exploits geometrical cues and deep learning techniques. An initial over-segmentation is performed using spectral clustering, and a set of non-uniform rational B-spline (NURBS) surfaces is fitted to the extracted segments. A convolutional neural network (CNN) then receives colour and geometry data, together with the surface fitting parameters, as input. The network is made of nine convolutional stages followed by a softmax classifier and produces a vector of descriptors for each sample. In the next step, an iterative merging algorithm recombines the output of the over-segmentation into larger regions matching the various elements of the scene. Pairs of adjacent segments with higher similarity according to the CNN features are candidates for merging, and the surface fitting accuracy is used to detect which pairs of segments belong to the same surface. Finally, a set of labelled segments is obtained by combining the segmentation output with the descriptors from the CNN. Experimental results show that the proposed approach outperforms state-of-the-art methods and provides an accurate segmentation and labelling.
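    The selection of merge candidates from CNN descriptors can be sketched as follows. The descriptor values, threshold, and function names here are illustrative assumptions (the actual descriptors are learned by the network), with cosine similarity standing in for whatever similarity metric the approach uses.

```python
# Hypothetical sketch: rank adjacent segment pairs by the similarity
# of their per-segment CNN descriptor vectors, keeping only pairs
# similar enough to be candidates for the merging stage.

def cosine_sim(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def merge_candidates(descriptors, adjacency, thresh=0.9):
    """Return adjacent segment pairs whose descriptors are similar
    enough to try merging, most similar first."""
    pairs = [(i, j) for i, j in adjacency
             if cosine_sim(descriptors[i], descriptors[j]) >= thresh]
    return sorted(pairs,
                  key=lambda p: -cosine_sim(descriptors[p[0]],
                                            descriptors[p[1]]))
```

    Each surviving pair would then be passed to the surface fitting test, which makes the final call on whether the two segments belong to the same surface.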

    Scene Segmentation Driven by Deep Learning and Surface Fitting

    Get PDF
    This paper proposes a joint color and depth segmentation scheme exploiting geometrical cues together with a learning stage. The approach starts from an initial over-segmentation based on spectral clustering. The input data is also fed to a Convolutional Neural Network (CNN), producing a per-pixel descriptor vector for each scene sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The proposed algorithm starts by considering all the adjacent segments and computing a similarity metric according to the CNN features. The pairs of segments with the highest similarity are considered for merging. Finally, the algorithm uses a NURBS surface fitting scheme on the segments in order to determine whether the selected pairs correspond to a single surface. The comparison with state-of-the-art methods shows that the proposed method provides an accurate and reliable scene segmentation.

    Segmentation of color and depth data based on surface fitting

    Get PDF
    This thesis presents novel iterative schemes for the segmentation of scenes acquired by RGB-D sensors. Both the problem of object segmentation and that of semantic segmentation (labeling) are considered. The first building block of the proposed methods is the Normalized Cuts algorithm, based on graph theory and spectral clustering techniques, which provides a segmentation exploiting both geometry and color information. One limitation is that the number of segments (equivalently, the number of objects in the scene) must either be decided in advance or be controlled by an arbitrary threshold on the normalized cut measure. In addition, this method tends to produce segments of similar size, while in many real-world scenes the dimensions of objects and structures vary widely. To overcome these drawbacks, we present iterative schemes based on approximation with parametric NURBS (Non-Uniform Rational B-Spline) surfaces. The key idea is to treat the result of the surface fitting as an estimate of how good the current segmentation is. This makes it possible to build region splitting and region merging procedures in which the fitting results at each step are compared against the previous ones, and the iterations proceed based on whether they improve, until an optimal final solution is reached. The rationale is that if a segment properly corresponds to an actual object in the scene, the fitting result is expected to be good, while segments that need to be subdivided or merged with others are expected to give a larger error. A discussion of several possible metrics to evaluate the quality of the surface fitting is presented. In all the presented schemes, the use of NURBS surface approximation is a novel contribution.
Subsequently, it is described how the proposed iterative schemes can be coupled with a deep-learning classification step performed with CNNs (Convolutional Neural Networks), by introducing a measure of similarity between the elements of an initial over-segmentation. This information is used together with the surface fitting results to control the steps of a revised iterative region merging procedure. In addition, some information resulting from the NURBS fitting on the initial over-segmentation (fitting error, surface curvatures) is fed into the Convolutional Neural Networks themselves. To the best of our knowledge, this is the first work where this kind of information is used within a deep learning framework. Finally, the object segmentation resulting from the region merging procedure is exploited to effectively improve the initial classification. An extensive evaluation of the proposed methods is performed, with quantitative comparison against several state-of-the-art approaches on a standard dataset. The experimental results show that the proposed schemes provide equivalent or better results with respect to the competing approaches on most of the considered scenes, both for the task of object segmentation and for the task of semantic labeling. In particular, the optimal number of segments is automatically provided by the iterative procedures, while it must be arbitrarily set in advance in several other segmentation algorithms. Moreover, no assumption is made about object shape, while some competing methods are optimized for planar surfaces. This flexibility comes from the use of NURBS surfaces as the geometric model, since they can represent both simple entities such as planes, spheres and cylinders, and complex free-form shapes.

    Segmentación no supervisada de imágenes RGB-D

    Get PDF
    The purpose of a segmentation method is to decompose an image into its constituent parts. Segmentation is generally the first stage in an image analysis system, and it is one of the most critical tasks because its result affects the subsequent stages. The central goal of this task is to group perceptually similar objects in an image based on certain features. Traditionally, image processing, computer vision and robotics applications have focused on color images. However, the use of color information is limited to some extent, because images obtained with traditional cameras cannot capture all the information that the three-dimensional scene provides. One alternative for addressing these difficulties, and for making segmentation algorithms applied to images from traditional cameras more robust, is to incorporate the depth information lost in the capture process. Images that contain both the color information of the scene and the depth of the objects are called RGB-D images. A key issue for methods that segment images using color and distance data is determining the best way to fuse these two sources of information in order to extract the objects present in the scene more accurately. A large number of techniques use supervised learning methods. However, in many cases there are no databases that allow supervised techniques to be used, and even when such databases exist, the cost of training these methods can be prohibitive. Unsupervised techniques, unlike supervised ones, do not require a training phase based on a training set, so they can be used in a wide range of applications.
Within the framework of this specialization work, the analysis of current methods for unsupervised RGB-D image segmentation is of particular interest. A second objective of this work is to analyze the evaluation metrics that indicate the quality of the segmentation process. Facultad de Informática

    Deep learning for scene understanding with color and depth data

    Get PDF
    Significant advancements have been made in recent years concerning both data acquisition and processing hardware, as well as optimization and machine learning techniques. On one hand, the introduction of depth sensors in the consumer market has made the acquisition of 3D data possible at a very low cost, making it possible to overcome many of the limitations and ambiguities that typically affect computer vision applications based on color information. At the same time, computationally faster GPUs have allowed researchers to perform time-consuming experimentation even on big data. On the other hand, the development of effective machine learning algorithms, including deep learning techniques, has provided a highly performant tool to exploit the enormous amount of data now at hand. In light of these encouraging premises, three classical computer vision problems have been selected, and novel approaches for their solution are proposed in this work that both leverage the output of a deep Convolutional Neural Network (ConvNet) and jointly exploit color and depth data to achieve competitive results. In particular, a novel semantic segmentation scheme for color and depth data is presented that uses the features extracted from a ConvNet together with geometric cues. A method for 3D shape classification is also proposed that uses a deep ConvNet fed with specific 3D data representations. Finally, a ConvNet for ToF and stereo confidence estimation has been employed underneath a ToF-stereo fusion algorithm, thus avoiding reliance on complex yet inaccurate noise models for the confidence estimation task.