Automatic blur detection for meta-data extraction in content-based retrieval context
During the last few years, content-based image retrieval has been the aim of many studies, and a number of systems have been introduced to achieve image indexing. One of the most common approaches is to compute a segmentation and to extract different parameters from the resulting regions. However, this segmentation step is based on low-level knowledge and does not take into account simple perceptual aspects of images, such as blur. When a photographer decides to focus only on some objects in a scene, he certainly considers these objects very differently from the rest of the scene: they do not carry the same amount of information. Blurry regions should generally be treated by image retrieval tools as context rather than as the information container. Our idea is therefore to restrict the comparison between images to the non-blurry regions, using this blur annotation as metadata. Our aim is to introduce different features and a machine learning approach in order to achieve blur identification in scene images.
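The abstract does not name its blur features, but a common baseline for this kind of blur identification is the variance of the Laplacian response: focused regions with sharp edges produce high variance, blurry regions low variance. A minimal pure-Python sketch (illustrative only, not the authors' feature set):

```python
def laplacian_variance(img):
    """Variance of the discrete Laplacian over a grayscale image.

    img: 2D list of intensity values. Low variance suggests a blurry
    region (few sharp transitions); high variance suggests focus.
    This is a generic blur measure, not the paper's exact feature.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A flat (blur-like) patch vs. a patch containing a sharp edge
flat = [[100] * 5 for _ in range(5)]
edge = [[0, 0, 255, 255, 255] for _ in range(5)]
assert laplacian_variance(flat) == 0.0
assert laplacian_variance(edge) > laplacian_variance(flat)
```

In a learning setting, such per-region scores would serve as input features to a classifier separating blurry from non-blurry regions.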
Automated Segmentation of Cerebral Aneurysm Using a Novel Statistical Multiresolution Approach
Cerebral Aneurysm (CA) is a vascular disease that threatens the lives of many adults, affecting almost 1.5 - 5% of the general population. Sub-Arachnoid Hemorrhage (SAH), resulting from a ruptured CA, has high rates of morbidity and mortality. Radiologists therefore aim to detect and diagnose a CA at an early stage, by analyzing medical images, to prevent or reduce its damage.

The analysis process is traditionally performed manually. However, with emerging technology, Computer-Aided Diagnosis (CAD) algorithms are being adopted in clinics to overcome the disadvantages of the traditional process: the dependency on the radiologist's experience, inter- and intra-observer variability, the probability of error, which grows with the number of medical images to be analyzed, and the artifacts added by the image acquisition methods (i.e., MRA, CTA, PET, RA, etc.), all of which impede the radiologist's work.

For these reasons, many research works propose different segmentation approaches to automate the analysis process of detecting a CA using complementary segmentation techniques. However, because developing a robust, reproducible, and reliable algorithm to detect a CA regardless of its shape, size, and location, across a variety of acquisition methods, is a challenging task, the diversity of proposed and developed approaches still suffers from some limitations.
This thesis aims to contribute to this research area by adopting two promising techniques, based on multiresolution and statistical approaches, in the Two-Dimensional (2D) domain. The first technique is the Contourlet Transform (CT), which strengthens the segmentation by extracting features not apparent at the normal image scale. The second technique is the Hidden Markov Random Field model with Expectation Maximization (HMRF-EM), which segments the image based on the relationships between neighboring pixels in the contourlet domain.
The developed algorithm shows promising results on the four tested Three-Dimensional Rotational Angiography (3D RA) datasets, on which both an objective and a subjective evaluation are carried out. For the objective evaluation, six performance metrics are adopted: accuracy, Dice Similarity Index (DSI), False Positive Ratio (FPR), False Negative Ratio (FNR), specificity, and sensitivity. For the subjective evaluation, one expert and four observers with some medical background assess the segmentation visually. Both evaluations compare the segmented volumes against the ground truth data.
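The six objective metrics named above all derive from the confusion counts between a segmented mask and the ground truth. A sketch under the standard definitions of these metrics (the thesis's exact formulations may differ):

```python
def segmentation_metrics(pred, truth):
    """Standard overlap metrics for a binary segmentation.

    pred, truth: flat lists of 0/1 labels (1 = aneurysm voxel).
    Uses the textbook definitions; assumed, not taken from the thesis.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    return {
        "accuracy": (tp + tn) / len(pred),
        "dsi": 2 * tp / (2 * tp + fp + fn),   # Dice Similarity Index
        "fpr": fp / (fp + tn),                # False Positive Ratio
        "fnr": fn / (fn + tp),                # False Negative Ratio
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
    }

m = segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0])
assert m["dsi"] == 0.5 and m["accuracy"] == 0.5
```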
Content-based image retrieval of museum images
Content-based image retrieval (CBIR) is becoming more and more important with the advance of multimedia and imaging technology. Among the many retrieval features associated with CBIR, texture retrieval is one of the most difficult, mainly because no satisfactory quantitative definition of texture exists at this time, and also because of the complex nature of texture itself. Another difficult problem in CBIR is query by low-quality images, which refers to attempting to retrieve images using a poor-quality image as the query. Not many content-based retrieval systems have addressed this problem. Wavelet analysis is a relatively new and promising tool for signal and image analysis. Its time-scale representation provides both spatial and frequency information, thus giving extra information compared to other image representation schemes. This research aims to address some of the problems of query by texture and query by low-quality images by exploiting the advantages that wavelet analysis has to offer, particularly in the context of museum image collections. A novel query-by-low-quality-images algorithm is presented as a solution to the poor retrieval performance of conventional methods. For the query-by-texture problem, this thesis provides a comprehensive evaluation of wavelet-based texture methods as well as a comparison with other techniques. A novel automatic texture segmentation algorithm and an improved block-oriented decomposition are proposed for use in query by texture. Finally, all the proposed techniques are integrated in a content-based image retrieval application for museum image collections.
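A typical wavelet texture signature of the kind this abstract alludes to is built from the energies of the wavelet detail sub-bands. A minimal one-level 2D Haar decomposition in pure Python (an illustrative stand-in; the thesis does not commit to a specific wavelet here):

```python
def haar_level(img):
    """One level of the 2D Haar wavelet transform.

    Returns the four sub-bands (LL, LH, HL, HH); the energies of the
    detail bands are a common wavelet-based texture feature.
    img: 2D list with even width and height.
    """
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = img[2 * y][2 * x], img[2 * y][2 * x + 1]
            c, d = img[2 * y + 1][2 * x], img[2 * y + 1][2 * x + 1]
            LL[y][x] = (a + b + c + d) / 4   # approximation
            LH[y][x] = (a + b - c - d) / 4   # horizontal detail
            HL[y][x] = (a - b + c - d) / 4   # vertical detail
            HH[y][x] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def band_energy(band):
    return sum(v * v for row in band for v in row)

# Vertical stripes: the energy concentrates in the vertical-detail band
stripes = [[0, 255] * 2 for _ in range(4)]
LL, LH, HL, HH = haar_level(stripes)
assert band_energy(HL) > band_energy(LH)
assert band_energy(LH) == 0
```

Concatenating such band energies over several decomposition levels yields a feature vector that can be compared between a query image and the collection.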
Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models
To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. Increasing levels of information are modeled in order to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing (RS) images.
Patch-based semantic labelling of images.
The work presented in this thesis is focused on associating semantics with the content of an image, linking the content to high-level semantic categories. The process can take place at two levels: either at image level, towards image categorisation, or at pixel level, in semantic segmentation or semantic labelling. To this end, an analysis framework is proposed, and the different steps of part (or patch) extraction, description, and probabilistic modelling are detailed. Parts of different nature are used, and one of the contributions is a method to complement the information associated with them. Context for parts has to be considered at different scales. Short-range pixel dependences are accounted for by associating pixels with larger patches. A Conditional Random Field, that is, a probabilistic discriminative graphical model, is used to model medium-range dependences between neighbouring patches. Another contribution is an efficient method to consider rich neighbourhoods without introducing loops in the inference graph. To this end, weak neighbours are introduced, that is, neighbours whose label probability distribution is pre-estimated rather than mutable during the inference. Longer-range dependences, which tend to make the inference problem intractable, are addressed as well. A novel descriptor based on local histograms of visual words is proposed, meant both to complement the feature descriptor of the patches and to augment the context awareness in the patch labelling process. Finally, an alternative approach to considering multiple scales in a hierarchical framework based on image pyramids is proposed. An image pyramid is a compositional representation of the image based on hierarchical clustering. All the presented contributions are extensively detailed throughout the thesis, and experimental results performed on publicly available datasets are reported to assess their validity. A critical comparison with the state of the art in this research area is also presented, and the advantages of adopting the proposed improvements are clearly highlighted.
Text Segmentation in Web Images Using Colour Perception and Topological Features
The research presented in this thesis addresses the problem of text segmentation in Web images. Text is routinely created in image form (headers, banners, etc.) on Web pages, in an attempt to overcome the stylistic limitations of HTML. This text, however, has a potentially high semantic value in terms of indexing and searching for the corresponding Web pages. As current search engine technology does not allow for text extraction and recognition in images, text in image form is ignored. Moreover, it is desirable to obtain a uniform representation of all visible text of a Web page (for applications such as voice browsing or automated content analysis). This thesis presents two methods for text segmentation in Web images using colour perception and topological features. The nature of Web images and the inherent problems of text segmentation are described, and a study is performed to assess the magnitude of the problem and establish the need for automated text segmentation methods. Two segmentation methods are subsequently presented: the Split-and-Merge segmentation method and the Fuzzy segmentation method. Although approached in a distinctly different way in each method, the safe assumption that a human being should be able to read the text in any given Web image is the foundation of both methods' reasoning. This anthropocentric character of the methods, along with the use of topological features of connected components, comprises their underlying working principles. An approach for classifying the connected components resulting from the segmentation methods as either characters or parts of the background is also presented.
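The connected components on which that classification operates are obtained by standard component labelling of the segmented mask. A minimal 4-connected flood-fill labelling in pure Python (a generic building block, not the thesis's specific implementation):

```python
from collections import deque

def connected_components(mask):
    """4-connected component labelling of a binary mask (1 = foreground).

    Returns (label image, number of components); each component is a
    candidate character or background part for a later classifier.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                count += 1
                labels[sy][sx] = count
                queue = deque([(sy, sx)])
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

mask = [[1, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
labels, n = connected_components(mask)
assert n == 3
```

Topological features such as a component's hole count or its containment relations with other components can then be computed from the label image.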