
    Variational methods for texture segmentation

    In the last decades, image production has grown significantly. From digital photographs to medical scans, including satellite images and video, more and more data need to be processed. Consequently, the number of applications based on digital images has increased, whether in medicine, scientific research, country planning, or the entertainment business, such as animation and video games. All these areas, although very different from one another, rely on the same image processing techniques. Among these techniques, segmentation is probably one of the most studied because of its important role. Segmentation is the process of extracting meaningful objects from an image. This task, although easily achieved by the human visual system, is actually complex and remains a true challenge for the image processing community despite several decades of research. The thesis work presented in this manuscript proposes solutions to the image segmentation problem in a well-established mathematical framework: variational models. The image is defined on a continuous domain and the segmentation problem is expressed as the optimization of a functional, or energy. Depending on the object to be segmented, defining this energy can be difficult, in particular for objects with ambiguous borders or textured objects. For the latter, the difficulty already lies in the definition of the term texture. The human eye can easily recognize a texture, but it is quite difficult to find words to define it, let alone mathematical terms. There is a deliberate vagueness in the definition of texture, which explains the difficulty of conceptualizing a model able to describe it. Often, textures can be described neither by homogeneous regions nor by sharp contours. This is why we are first interested in the extraction of texture features, that is to say, in finding a representation that can discriminate one textured region from another. The first contribution of this thesis is the construction of a texture descriptor from a representation of the image as a surface in a volume. This descriptor belongs to the framework of unsupervised segmentation, since it does not require any user interaction. The second contribution is a solution to the segmentation problem based on active contour models and tools from information theory. The third contribution is a semi-supervised segmentation model, i.e. one in which constraints provided by the user are integrated into the segmentation framework. This process is derived from a graph of image patches, which provides a connectivity measure between the different points of the image. The segmentation is then expressed as a graph partition within a variational model. In sum, this manuscript tackles the segmentation problem for textured images.
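
    To make the variational framing concrete, the following is a classical region-based segmentation energy of the kind the abstract refers to. The Chan-Vese functional shown here is a standard textbook example of a segmentation-as-energy-minimization model, not necessarily the thesis's exact energy.

```latex
% A classical two-phase, region-based segmentation energy (Chan-Vese),
% representative of the variational models the abstract refers to:
% I is the image, C the segmenting contour, c_1/c_2 the mean intensities
% inside/outside C, and \mu a regularizing length penalty.
E(C, c_1, c_2) = \mu \,\mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} \bigl(I(x) - c_1\bigr)^2 \,dx
  + \lambda_2 \int_{\mathrm{outside}(C)} \bigl(I(x) - c_2\bigr)^2 \,dx
```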

    A Comparison Study of Saliency Models for Fixation Prediction on Infants and Adults

    Various saliency models have been developed over the years. The performance of saliency models is typically evaluated on databases of experimentally recorded adult eye fixations. Although studies of infant gaze patterns have attracted much attention recently, saliency-based models have not been widely applied to the prediction of infant gaze patterns. In this study, we conduct a comprehensive comparison of eight state-of-the-art saliency models on predictions of experimentally captured fixations from infants and adults. Seven evaluation metrics are used to evaluate and compare the performance of the saliency models. The results demonstrate that the saliency models consistently predict adult fixations better than infant fixations in terms of overlap, center fitting, intersection, information loss of approximation, and spatial distance between the distributions of the saliency map and the fixation map. In the performance ranking of saliency and baseline models, the results show that the GBVS and Itti models are among the top three contenders, that both infants and adults exhibit a bias toward the centers of images, and that all models, as well as the center baseline model, outperform the chance baseline model.
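
    As an illustration of the metric families the abstract names ("intersection" and "information loss"), the sketch below implements two common saliency-evaluation measures following the standard MIT saliency-benchmark formulations: histogram intersection (SIM) and Kullback-Leibler divergence (KL). The exact definitions used in the study may differ.

```python
# Minimal sketch of two standard saliency-evaluation metrics: histogram
# intersection (SIM, 1 = identical maps) and KL divergence (0 = identical),
# computed between a predicted saliency map and a fixation density map.
import numpy as np

def _normalize(dist):
    """Normalize a non-negative map so it sums to 1 (a probability map)."""
    dist = np.asarray(dist, dtype=np.float64)
    s = dist.sum()
    return dist / s if s > 0 else dist

def similarity(saliency_map, fixation_map):
    """Histogram intersection between the two normalized maps."""
    p = _normalize(saliency_map)
    q = _normalize(fixation_map)
    return float(np.minimum(p, q).sum())

def kl_divergence(saliency_map, fixation_map, eps=1e-12):
    """KL divergence of the prediction from the fixation distribution."""
    p = _normalize(saliency_map)   # prediction
    q = _normalize(fixation_map)   # ground truth
    return float(np.sum(q * np.log(eps + q / (p + eps))))
```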

    Probabilistic Models and Inference for Multi-View People Detection in Overlapping Depth Images

    Cross-sensor people detection in a network of 3D sensors is the basis of many applications, such as people counting, digital customer-flow analysis, or public safety. In contrast to classical video-surveillance approaches, 3D sensors generally have a vertical top-down view of the scene in order to reduce occlusions, such as those occurring in a densely packed crowd. Due to the sensors' vertical top-down perspective, the appearance of people varies strongly depending on their position in the scene. Furthermore, owing to occlusions, sensor noise, and the limited field of view of the top-down sensors, people are often only partially visible in a single view. To address these challenges, this thesis investigates how the spatio-temporal multi-view observations of several 3D sensors with overlapping fields of view can be used effectively. The focus is in particular on improving detection performance by jointly considering both the redundant and the complementary multi-sensor observations, including temporal context. The thesis formulates people detection in a sequence of overlapping depth images as an inverse problem. In this context, a probabilistic model for detecting people in multiple depth images is introduced. The model includes a generative scene model in order to detect people from arbitrary viewpoints. Based on the proposed probabilistic modeling, several inference methods are investigated, including gradient-based continuous optimization, variational inference, and convolutional neural networks. The emphasis of the thesis is on variational methods such as mean-field variational inference. In contrast to classical approaches in the literature, no point estimate is computed; instead, the posterior probability distribution of the people present in the scene is approximated. Through the use of the generative forward model, which incorporates the characteristics of the underlying sensor modality, the proposed method is largely independent of the specific sensor modality. The methods presented in the thesis are evaluated on a newly introduced dataset for wide-area people detection in multiple overlapping depth images. The dataset comprises imagery from three passive stereo sensors with a top-down view of an office scene. The evaluation demonstrates that the proposed mean-field variational inference approximation achieves state-of-the-art results. Whereas deep learning methods require large amounts of annotated training data, the method proposed in this thesis is based on an explicit probabilistic model and requires no training data. A further advantage over classical approaches, which often compute only a MAP point estimate, is the approximation of the full joint probability distribution of the people present in the scene.
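
    To give a flavor of mean-field variational inference in this setting, the sketch below runs a mean-field approximation for binary occupancy variables on a ground-plane grid. The thesis's generative depth-image forward model is far richer; the per-cell evidence `unary` and the pairwise coupling `w` used here are illustrative stand-ins.

```python
# Minimal sketch of mean-field variational inference for binary occupancy
# variables x_i on a ground-plane grid, with factorized posterior
# q(x) = prod_i q_i(x_i). The unary log-odds (sensor evidence) and pairwise
# couplings (e.g. repulsion between neighboring cells) are stand-ins for a
# full generative depth-image model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_occupancy(unary, w, n_iters=50):
    """unary: (n,) per-cell log-odds; w: (n, n) symmetric coupling matrix
    with zero diagonal. Returns q[i], the approximate posterior probability
    that cell i is occupied."""
    q = sigmoid(unary)                 # initialize from the unary evidence
    for _ in range(n_iters):
        # Mean-field fixed-point update given the current expectations of
        # the other cells (parallel refresh shown for brevity; textbook
        # coordinate ascent updates one q_i at a time).
        q = sigmoid(unary + w @ q)
    return q

# Toy usage: 3 cells, strong evidence for cell 0, repulsion between 0 and 1.
unary = np.array([2.0, 0.5, -1.0])
w = np.array([[ 0.0, -1.5, 0.0],
              [-1.5,  0.0, 0.0],
              [ 0.0,  0.0, 0.0]])
print(mean_field_occupancy(unary, w))
```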

    Overcomplete Image Representations for Texture Analysis

    Advisor/s: Dr. Boris Escalante-Ramírez and Dr. Gabriel Cristóbal. Date and location of PhD thesis defense: 23rd October 2013, Universidad Nacional Autónoma de México. In recent years, computer vision has played an important role in many scientific and technological areas, mainly because modern society privileges vision over the other senses. At the same time, application requirements and complexity have also increased, so that in many cases the optimal solution depends on the intrinsic characteristics of the problem; it is therefore difficult to propose a universal image model. In parallel, advances in understanding the human visual system have made it possible to propose sophisticated models that incorporate simple phenomena occurring in the early stages of the visual system. This dissertation investigates characteristics of vision such as over-representation and the orientation of receptive fields in order to propose bio-inspired image models for texture analysis.
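
    As a concrete example of an overcomplete, orientation-selective representation of the kind the abstract alludes to, the sketch below builds a small Gabor filter bank, the classical model of V1 simple-cell receptive fields. The kernel form, sizes, and aggregation are illustrative assumptions, not the dissertation's specific model.

```python
# Minimal sketch of an overcomplete, orientation-selective texture
# representation: a bank of real Gabor filters (Gaussian-windowed cosine
# gratings) at several orientations, applied to an image to produce
# per-pixel response-energy features.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: Gaussian envelope times a cosine grating
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# An overcomplete bank: eight orientations at one scale (add scales at will).
orientations = [k * np.pi / 8 for k in range(8)]
bank = [gabor_kernel(size=31, wavelength=8.0, theta=t, sigma=5.0)
        for t in orientations]

def gabor_features(image):
    """Stack of absolute filter responses, one channel per orientation."""
    return np.stack([np.abs(fftconvolve(image, k, mode="same"))
                     for k in bank], axis=-1)
```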

    VISUAL SALIENCY ANALYSIS, PREDICTION, AND VISUALIZATION: A DEEP LEARNING PERSPECTIVE

    In recent years, great success has been achieved in the prediction of human eye fixations, with several studies employing deep learning to reach high prediction accuracy. These studies rely on deep networks pre-trained for object classification, exploiting them either in a transfer-learning setting or by using the weights of the pre-trained network as the initialization for learning a saliency model. The reliance on such pre-trained networks is due to the relatively small datasets of human fixations available for training a deep learning model. Another, less often addressed, problem is that the amount of computation such deep learning models require demands expensive hardware. In this dissertation, two approaches are proposed to tackle the above-mentioned problems. The first approach, codenamed DeepFeat, incorporates the deep features of convolutional neural networks pre-trained for object and scene classification. It is the first approach to use deep features without further learning. The performance of the DeepFeat model is extensively evaluated over a variety of datasets using a variety of implementations. The second approach is a deep learning saliency model, codenamed ClassNet. Two main differences separate ClassNet from other deep learning saliency models: it is the only deep learning saliency model that learns its weights from scratch, and it treats the prediction of human fixations as a classification problem, whereas other deep learning saliency models treat it as a regression problem or as a classification of a regression problem.
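
    The sketch below illustrates the general idea behind a training-free, deep-feature saliency map in the spirit of DeepFeat, though not its exact recipe: take feature maps from a CNN pre-trained for classification, aggregate channel activations, and upsample to image size. The network, layer choice, and aggregation rule are assumptions for illustration.

```python
# Minimal sketch of a training-free deep-feature saliency map: intermediate
# activations of a classification-pretrained CNN are aggregated across
# channels and upsampled, with no saliency-specific learning.
# (Requires torchvision >= 0.13 for the weights API.)
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def deep_feature_saliency(image, layer_index=22):
    """image: (1, 3, H, W) normalized tensor. Returns an (H, W) map in [0, 1].
    layer_index picks an intermediate conv block (an illustrative choice)."""
    x = image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == layer_index:
            break
    sal = x.abs().mean(dim=1, keepdim=True)       # per-pixel activation energy
    sal = F.interpolate(sal, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```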

    Coupling Image Restoration and Segmentation: A Generalized Linear Model/Bregman Perspective

    We introduce a new class of data-fitting energies that couple image segmentation with image restoration. These functionals model the image intensity using the statistical framework of generalized linear models. By duality, we establish an information-theoretic interpretation using Bregman divergences. We demonstrate how this formulation couples image restoration tasks such as denoising, deblurring (deconvolution), and inpainting with segmentation in a principled way. We present an alternating minimization algorithm to solve the resulting composite photometric/geometric inverse problem. We use Fisher scoring to solve the photometric problem and to provide asymptotic uncertainty estimates. We derive the shape gradient of our data-fitting energy and investigate convex relaxation for the geometric problem. We introduce a new alternating split-Bregman strategy to solve the resulting convex problem and present experiments and comparisons on both synthetic and real-world images.
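
    For reference, the information-theoretic interpretation rests on the standard notion of a Bregman divergence; its textbook definition is reproduced below, together with the exponential-family correspondence the abstract invokes.

```latex
% Standard Bregman divergence generated by a strictly convex, differentiable
% function \phi. The duality the abstract mentions pairs each exponential-
% family (GLM) likelihood with such a divergence; e.g. \phi(u) = u^2 yields
% the squared Euclidean distance, matching a Gaussian noise model.
D_\phi(u, v) = \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v \rangle
```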

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; the aim of this young research field is therefore not only to mimic the human visual system. Image analysis is one of the main methods by which computers interpret visual data today, and there is a body of knowledge that, thanks to artificial intelligence, they will eventually be able to manage in a totally unsupervised manner. The articles published in this book clearly point toward such a future.
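
    As a small concrete anchor for the book's central quantity, the sketch below computes the Shannon entropy of an image's gray-level histogram, the most common entropy measure in image analysis. It is a generic illustration, not tied to any particular chapter.

```python
# Shannon entropy of an image's gray-level histogram:
# H = -sum_k p_k log2 p_k, where p_k is the empirical probability of
# gray level k. Higher H means a more "disordered" intensity distribution.
import numpy as np

def image_entropy(image, levels=256):
    """Shannon entropy (in bits) of the gray-level distribution of `image`."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

# A uniform-noise image has near-maximal entropy (~8 bits for 256 levels).
rng = np.random.default_rng(0)
print(image_entropy(rng.integers(0, 256, size=(64, 64))))
```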