
    Part Description and Segmentation Using Contour, Surface and Volumetric Primitives

    The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly because the segmentation problem has been isolated from the issue of representation. We propose a paradigm for part description and segmentation that integrates contour, surface, and volumetric primitives. Unlike previous approaches, we use geometric properties derived from both boundary-based representations (surface contours and occluding contours) and primitive-based representations (quadric patches and superquadric models) to define and recover part-whole relationships, without a priori knowledge about the objects or the object domain. Object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting and surface description are performed in parallel to combine the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the control module) and the individual shape description modules. The control module evaluates the intermediate descriptions and formulates hypotheses about parts, which are then tested by the segmentor and the descriptors. The resulting descriptions are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of raw data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
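    The superquadric recovery named above is the volumetric part of the approach. As a minimal sketch (not the paper's exact procedure), the standard superquadric inside-outside function can be fitted to a part's range points by least squares, in the style of the well-known Solina-Bajcsy error of fit; the parameter names and the canonical-pose assumption below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_F(points, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function in the model's canonical frame.
    F < 1 inside the surface, F = 1 on it, F > 1 outside."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xy = (np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return xy + np.abs(z / a3) ** (2 / e1)

def residuals(params, points):
    # Volume-weighted residual: prefers the smallest superquadric
    # consistent with the data (Solina-Bajcsy style error of fit).
    a1, a2, a3, e1, e2 = params
    F = superquadric_F(points, a1, a2, a3, e1, e2)
    return np.sqrt(a1 * a2 * a3) * (F ** e1 - 1.0)

# points: N x 3 range points of one hypothesized part (placeholder data here)
points = np.random.rand(200, 3) - 0.5
fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 1.0, 1.0],
                    bounds=([1e-3, 1e-3, 1e-3, 0.1, 0.1],
                            [10.0, 10.0, 10.0, 2.0, 2.0]),
                    args=(points,))
a1, a2, a3, e1, e2 = fit.x
```

    In the full system, such a fit would be one of the intermediate descriptions that the control module evaluates when forming and testing part hypotheses.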

    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene depends essentially on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials with very different properties can be found. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult, as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes these features editable, so that acquisition results can be re-used to easily and quickly create variations of the original material. These variations may be subtle, but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression, but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture; depth information, local rotations, and the diffuse color are stored in these textures as well. Since some of the original information is inevitably lost in the decomposition, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that takes the material of an object into account. This metric incorporates features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics do not simplify.
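    Since the stored light-matter interaction is built on a microfacet model, a small illustration may help. The sketch below evaluates a generic Cook-Torrance-style specular lobe with a GGX normal distribution and Schlick's Fresnel approximation; this is a common instantiation chosen purely for illustration, not necessarily the exact model used in the thesis, and all function and parameter names are assumptions.

```python
import numpy as np

def ggx_specular(n, v, l, roughness, f0):
    """Evaluate a Cook-Torrance microfacet specular lobe with a GGX
    normal distribution, Schlick Fresnel, and a Smith-Schlick geometry term.
    n, v, l: unit normal, view, and light vectors (numpy arrays of shape (3,))."""
    h = v + l
    h = h / np.linalg.norm(h)                  # half vector
    nl = max(float(np.dot(n, l)), 1e-4)
    nv = max(float(np.dot(n, v)), 1e-4)
    nh = max(float(np.dot(n, h)), 0.0)
    vh = max(float(np.dot(v, h)), 0.0)
    a2 = roughness ** 4                        # alpha = roughness^2 convention
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # GGX NDF
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                  # Schlick Fresnel
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))
    return d * f * g / (4.0 * nl * nv)

# Example: head-on geometry, moderately rough dielectric (f0 ~ 0.04)
n = np.array([0.0, 0.0, 1.0])
print(ggx_specular(n, n, n, roughness=0.5, f0=0.04))
```

    In a texture-based representation of the kind described above, the per-texel inputs to such a function (roughness, local rotation, diffuse color, depth) would be fetched from ordinary two-dimensional textures.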

    A probabilistic integrated object recognition and tracking framework for video sequences

    Recognition and tracking of multiple objects in video sequences is one of the main challenges in computer vision and currently receives a great deal of attention from researchers. Almost all reported approaches are very application-dependent, and a general methodology for dynamic object recognition and tracking that can be instantiated in particular cases is lacking. In this thesis, the work is oriented towards the definition and development of such a methodology, which integrates object recognition and tracking from a general perspective using a probabilistic framework called PIORT (Probabilistic Integrated Object Recognition and Tracking). It includes several modules for which a variety of techniques and methods can be applied; some of them are well known, but other methods have been designed, implemented, and tested during the development of this thesis. The first step in the proposed framework is a static recognition module that provides class probabilities for each pixel of the image from a set of local features. These probabilities are updated dynamically and supplied to a tracking decision module capable of handling full and partial occlusions. The two specific methods presented use RGB colour features and differ in the classifier implemented: one is a Bayesian method based on maximum likelihood, and the other is based on a neural network. The experimental results have shown that, on the one hand, the neural-network-based approach performs similarly to, and sometimes better than, the Bayesian approach when they are integrated within the tracking framework; on the other hand, our PIORT methods have achieved better results when compared with other published tracking methods. All these methods have been tested experimentally on several video sequences taken with still and moving cameras, including full and partial occlusions of the tracked object in indoor and outdoor scenarios, in a variety of cases with different levels of task complexity. This allowed the evaluation of both the general methodology and the alternative methods that compose its modules.
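    One plausible reading of the dynamic update described above is a per-pixel recursive Bayesian filter over class probabilities; the sketch below follows that assumption (array names are illustrative, and the actual PIORT update rule may differ in detail).

```python
import numpy as np

def update_class_probabilities(prior, likelihood):
    """Recursive Bayesian update of per-pixel class probabilities.

    prior:      H x W x C array, P(class | observations up to t-1)
    likelihood: H x W x C array, P(current local features | class)
    returns:    H x W x C normalized posterior P(class | observations up to t)
    """
    posterior = prior * likelihood
    norm = posterior.sum(axis=-1, keepdims=True)
    return posterior / np.maximum(norm, 1e-12)
```

    At each frame, the likelihood term would come from the static recognition module (the Bayesian or neural-network classifier over RGB features), and the resulting posterior would feed the tracking decision module.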

    Automated Complexity-Sensitive Image Fusion

    To construct a complete representation of a scene in the presence of environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps:

    1. Image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches.
    2. Wavelet coefficients are computed for each of the input frames in each modality.
    3. Corresponding regions and points are compared using spatial and temporal information across various scales.
    4. Decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from the different modalities.
    5. The combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities.

    Experiments show that the proposed system is capable of producing fused output containing the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
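    As an illustration of steps 2, 4, and 5, the sketch below fuses two registered single-channel frames with PyWavelets, substituting a simple choose-the-larger-magnitude rule for the paper's multimodal decision rules; the function name and parameters are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_frames(visible, infrared, wavelet="db2", level=3):
    """Fuse two registered single-channel frames by keeping, at each
    position and scale, the wavelet coefficient of larger magnitude
    (a simple stand-in for modality-aware decision rules)."""
    ca = pywt.wavedec2(visible, wavelet, level=level)   # step 2: decompose
    cb = pywt.wavedec2(infrared, wavelet, level=level)
    # Step 4: combine approximation and detail coefficients.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    # Step 5: invert the combined coefficients into the output frame.
    return pywt.waverec2(fused, wavelet)

# Toy usage with random frames standing in for registered video frames
out = fuse_frames(np.random.rand(128, 128), np.random.rand(128, 128))
```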

    Hybrid machine learning approaches for scene understanding: From segmentation and recognition to image parsing

    We address the problem of semantic scene understanding through studies of object segmentation/recognition and scene labeling methods. We propose new techniques for joint recognition, segmentation, and pose estimation of infrared (IR) targets. The problem is formulated in a probabilistic level set framework where a shape-constrained generative model provides a multi-class and multi-view shape prior and where the shape model involves a couplet of view and identity manifolds (CVIM). A level set energy function is then iteratively optimized under the shape constraints provided by the CVIM. Since both the view and identity variables are expressed explicitly in the objective function, this approach naturally accomplishes recognition, segmentation, and pose estimation as joint products of the optimization process. For realistic target chips, we solve the resulting multi-modal optimization problem by adopting a particle swarm optimization (PSO) algorithm, and we improve the computational efficiency by implementing a gradient-boosted PSO (GB-PSO). Evaluation was performed using the Military Sensing Information Analysis Center (SENSIAC) ATR database, and experimental results show that both PSO algorithms reduce the cost of shape matching during CVIM-based shape inference. In particular, GB-PSO outperforms other recent ATR algorithms that require intensive shape matching, either explicitly (with pre-segmentation) or implicitly (without pre-segmentation). On the other hand, for situations where target boundaries are not clearly observed and object shapes cannot be reliably detected, we explored sparse representation classification (SRC) methods for ATR applications and developed a fusion technique that combines traditional SRC with a group-constrained SRC algorithm regulated by a sparsity concentration index, for improved classification accuracy on the Comanche dataset. Moreover, we present a compact rare-class-oriented scene labeling framework (RCSL) with a global-scene-assisted rare class retrieval process, where the retrieved subset is expanded by choosing scene-regulated rare class patches. A complementary rare-class-balanced CNN is learned to alleviate the imbalanced data distribution problem at lower cost. A superpixel-based re-segmentation is implemented to produce more perceptually meaningful object boundaries. Quantitative results demonstrate the promising performance of the proposed framework in both pixel and class accuracy for scene labeling on the SIFTflow dataset, especially for rare class objects.
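    For the shape-inference step, the abstract names particle swarm optimization over the multi-modal matching cost. A generic, minimal PSO sketch is given below; the cost function, box bounds, and hyperparameters are placeholders, and the gradient-boosted refinement of GB-PSO is not shown.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-1.0, hi=1.0, seed=None):
    """Minimal particle swarm optimization over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # per-particle bests
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)]                   # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)]
    return g, pbest_cost.min()

# Toy usage: minimize a shifted quadratic (placeholder for the
# CVIM-based shape-matching cost over view/identity/pose variables)
best, best_cost = pso(lambda p: float(((p - 0.3) ** 2).sum()), dim=4)
```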

    Perception and Motion: use of Computer Vision to solve Geometry Processing problems

    Computer vision and geometry processing are often seen as two different and, in a certain sense, distant fields: the first works on two-dimensional data, while the other needs three-dimensional information. But are 2D and 3D data really disconnected? Think about human vision: each eye captures patterns of light, which are then used by the brain to reconstruct the perception of the observed scene. In a similar way, if the eye detects a variation in the patterns of light, we are able to understand that the scene is not static; therefore, we are able to perceive the motion of one or more objects in the scene. In this work, we show how the perception of 2D motion can be used to solve two significant problems, both dealing with three-dimensional data. In the first part, we show how the so-called optical flow, representing the observed motion, can be used to estimate the alignment error of a set of digital cameras looking at the same object. In the second part, we show how the detected 2D motion of an object can be used to better understand its underlying geometric structure by detecting its rigid parts and the way they are connected.
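    As a concrete starting point for the first part, dense optical flow between two frames can be computed with OpenCV's Farneback method. The sketch below is a minimal illustration (the file names are hypothetical), with the per-pixel flow magnitude standing in for the observed motion from which an alignment error would then be estimated.

```python
import cv2
import numpy as np

# Two consecutive frames from one camera (hypothetical file names)
prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Dense optical flow: an H x W x 2 field of per-pixel displacements
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude = np.linalg.norm(flow, axis=2)   # displacement in pixels
print("mean observed motion:", magnitude.mean())
```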