Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.
According to Wikipedia, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”.
Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, subject to the calibration/validation (Cal/Val) requirements of the GEO Quality Assurance Framework for Earth Observation (QA4EO). To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is synonymous with semantics-enabled knowledge/information discovery in multi-source big image databases.
In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is synonymous with scene-from-image reconstruction and understanding, and that CV ⊃ EO image understanding (EO-IU) in operating mode, synonymous with GEOSS ⊃ ESA EO Level 2 product, with CV ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is required to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is systematic generation at the ground segment of the ESA EO Level 2 product.
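As a quick illustration of the strict-inclusion notation used in this working hypothesis, Python's built-in `>` operator on sets tests exactly the strict-superset relation A ⊃ B; the capability names below are purely illustrative, not taken from the thesis:

```python
# Strict (proper) superset: A > B holds iff A contains every element of B
# and at least one more. The set contents here are illustrative only.
cv_capabilities = {"scene_from_image_reconstruction", "scene_understanding"}
scbir_capabilities = cv_capabilities | {"semantic_querying"}  # SCBIR requires CV plus more

assert scbir_capabilities > cv_capabilities       # A is a strict superset of B
assert not cv_capabilities > scbir_capabilities   # strict inclusion is asymmetric
```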
Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technological development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived as twofold. The first objective was to develop an original EO-IUS in operating mode, synonymous with GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery. EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either: (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery.
The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof of concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase in added value with closed-loop iterations.
An efficient image retrieval scheme for colour enhancement of embedded and distributed surveillance images
Over the past few years, data has grown exponentially in volume, velocity, and dimensionality due to the widespread use of embedded and distributed surveillance cameras for security reasons. In this paper, we propose an integrated approach to biometric-based image retrieval and processing which addresses two issues. The first issue is the poor visibility of the images produced by embedded and distributed surveillance cameras, and the second is effective image retrieval based on the user query. The paper addresses the first issue by proposing an integrated image enhancement approach based on contrast enhancement and colour balancing methods. The contrast enhancement method improves the contrast, while the colour balancing method helps to achieve a balanced colour. Importantly, the colour balancing method introduces a new process for colour cast adjustment which relies on statistical calculation; it adjusts the colour cast while maintaining the luminance of the image. The integrated image enhancement approach is applied to the enhancement of low-quality images produced by surveillance cameras. The paper addresses the second issue by proposing a content-based image retrieval approach based on three feature extraction methods, namely colour, texture and shape. A colour histogram is used to extract the colour features of an image, a Gabor filter is used to extract the texture features, and moment invariants are used to extract the shape features. The use of these three algorithms, together with similarity metrics based on the Euclidean measure, ensures that the proposed image retrieval approach produces results which are highly relevant to the content of an image query, by taking into account the three distinct features of the image.
To retrieve the most relevant images, the proposed approach also employs a set of fuzzy heuristics to further improve the quality of the results.
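As a hedged sketch of the colour-histogram component of such a retrieval pipeline, the toy implementation below quantises RGB pixels into a normalised histogram and ranks database images by Euclidean distance; all function names, bin counts and sample data are illustrative assumptions, not the paper's actual implementation (which additionally combines Gabor texture and moment-invariant shape features):

```python
import math

def colour_histogram(pixels, bins=4):
    """Quantise each RGB channel into `bins` levels and count bin occupancy;
    normalise by pixel count so histograms of different images are comparable."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = len(pixels) or 1
    return [h / n for h in hist]

def euclidean(h1, h2):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def retrieve(query_pixels, database, top_k=3):
    """Rank database images (name -> list of RGB pixels) by histogram distance."""
    q = colour_histogram(query_pixels)
    return sorted(database,
                  key=lambda name: euclidean(q, colour_histogram(database[name])))[:top_k]

# Toy usage: a reddish image is retrieved before a blue one.
red_query = [(250, 10, 10)] * 100
database = {"reddish": [(230, 30, 30)] * 100, "blue": [(10, 10, 250)] * 100}
print(retrieve(red_query, database, top_k=1))  # → ['reddish']
```

A full system would concatenate (or separately weight) the colour, texture and shape feature vectors before computing the distance.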
Fuzzy logic based approach for object feature tracking
This thesis introduces a novel technique for feature tracking in sequences of
greyscale images based on fuzzy logic. A versatile and modular methodology
for feature tracking using fuzzy sets and inference engines is presented,
together with an extension of this methodology to perform the correct
tracking of multiple features.
To perform feature tracking, three membership functions are initially
defined: one related to the distinctive property of the feature to be
tracked, one expressing the assumption that the feature moves smoothly
between consecutive images of the sequence, and one concerning its expected
future location. Applying these functions to the image pixels yields the
corresponding fuzzy sets, which are then mathematically manipulated to serve
as input to an inference engine. Situations such as occlusion or failure to
detect a feature are overcome using positions estimated from a motion model
and a state vector of the feature.
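The three membership functions described above can be sketched as follows. This is a minimal illustration assuming Gaussian-shaped memberships and a minimum t-norm as the fuzzy AND; all parameter values are chosen for the example rather than taken from the thesis:

```python
import math

def gaussian(x, mu, sigma):
    """Bell-shaped membership function returning a degree in (0, 1]."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def feature_similarity(intensity, template_intensity):
    # Membership 1: how well a pixel matches the feature's distinctive grey level.
    return gaussian(intensity, template_intensity, sigma=20.0)

def smooth_motion(displacement):
    # Membership 2: how "smooth" (small) the inter-frame displacement is.
    return gaussian(displacement, 0.0, sigma=5.0)

def near_prediction(distance_to_predicted):
    # Membership 3: how close the pixel lies to the location predicted
    # by the motion model and the feature's state vector.
    return gaussian(distance_to_predicted, 0.0, sigma=8.0)

def track_score(intensity, template_intensity, displacement, distance_to_predicted):
    """Combine the three memberships with the minimum t-norm (fuzzy AND);
    a real inference engine would apply rules over these fuzzy sets."""
    return min(feature_similarity(intensity, template_intensity),
               smooth_motion(displacement),
               near_prediction(distance_to_predicted))
```

A tracker would evaluate `track_score` over candidate pixels and pick the maximiser; a perfect candidate (exact grey level, zero displacement, on the predicted spot) scores 1.0.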
This methodology was first applied to track a single feature identified by
the user. Several performance tests were conducted on sequences of both
synthetic and real images, and the experimental results are presented,
analysed and discussed. Although this methodology could be applied directly
to multiple-feature tracking, an extension was developed for that purpose.
In this new method, the processing sequence of the features is dynamic and
hierarchical: dynamic because the sequence can change over time, and
hierarchical because features with higher priority are processed first. The
process thus gives preference to features whose location is easier to
predict over features whose behaviour is less predictable. When a feature's
priority value becomes too low, that feature is no longer tracked by the
algorithm. To assess the performance of this new approach, sequences of
images in which several user-specified features are to be tracked were used.
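The dynamic, hierarchical processing order described above can be sketched as a per-frame priority ranking with a drop threshold; the function name, the threshold value and the toy feature scores are illustrative assumptions rather than the thesis's actual scheme:

```python
def process_order(feature_priorities, drop_threshold=0.1):
    """Given a mapping of feature id -> current priority (higher means the
    feature's location is easier to predict), return the processing order
    for this frame and the features dropped from tracking."""
    ranked = sorted(feature_priorities.items(), key=lambda kv: kv[1], reverse=True)
    order = [fid for fid, p in ranked if p >= drop_threshold]
    dropped = [fid for fid, p in ranked if p < drop_threshold]
    return order, dropped

# Toy usage: the well-behaved corner is processed first; the erratic blob
# has fallen below the threshold and is no longer tracked.
order, dropped = process_order({"corner_A": 0.9, "edge_B": 0.4, "blob_C": 0.05})
print(order, dropped)  # → ['corner_A', 'edge_B'] ['blob_C']
```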
In the final part of this work, conclusions drawn from this work as well as
some guidelines for future research are presented.
A Methodology for Extracting Human Bodies from Still Images
Monitoring and surveillance of humans is one of today's most prominent applications and is expected to be part of many future aspects of our lives, for safety reasons, assisted living and many others. Many efforts have been made towards automatic and robust solutions, but the general problem is very challenging and still remains open. In this PhD dissertation we examine the problem from many perspectives. First, we study the performance of a hardware architecture designed for large-scale surveillance systems. Then, we focus on the general problem of human activity recognition, present an extensive survey of methodologies that deal with this subject, and propose a maturity metric to evaluate them.
Image segmentation is one of the most popular classes of image processing algorithms found in the field, and we propose a blind metric to evaluate segmentation results with respect to the activity at local regions. Finally, we propose a fully automatic system for segmenting and extracting human bodies from challenging single images, which is the main contribution of this dissertation. Our methodology is a novel bottom-up approach relying mostly on anthropometric constraints and is facilitated by our research in the fields of face, skin and hand detection. Experimental results and comparison with state-of-the-art methodologies demonstrate the success of our approach.
Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images
Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial domain, wavelet transform domain, and multiple image fusion approaches are investigated in this dissertation research.
In the category of the spatial domain approach, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale, neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale, image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high-speed processing capability, while AINDANE produces higher quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.
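A minimal sketch of the illuminance-reflectance decomposition with sigmoid-shaped dynamic range compression is given below. It substitutes a box filter for the surround used to estimate illuminance and a plain logistic curve for the windowed inverse sigmoid, so every function and parameter here is an illustrative stand-in, not the dissertation's algorithm:

```python
import numpy as np

def estimate_illuminance(intensity, ksize=15):
    """Crude illuminance estimate: local mean via a box filter over a
    ksize x ksize window (a stand-in for a smoother surround function)."""
    pad = ksize // 2
    padded = np.pad(intensity, pad, mode="edge")
    out = np.zeros_like(intensity, dtype=float)
    h, w = intensity.shape
    for dy in range(ksize):
        for dx in range(ksize):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (ksize * ksize)

def compress_dynamic_range(illuminance, gain=6.0):
    """Sigmoid-shaped tone mapping on illuminance normalised to [0, 1]:
    lifts dark regions while compressing highlights."""
    x = np.clip(illuminance, 0.0, 1.0)
    return 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))

def enhance(intensity, ksize=15, eps=1e-6):
    """I-R style enhancement: recombine the compressed illuminance with the
    extracted reflectance (intensity / illuminance)."""
    illum = estimate_illuminance(intensity, ksize)
    reflectance = intensity / (illum + eps)
    return np.clip(compress_dynamic_range(illum) * reflectance, 0.0, 1.0)
```

In the dissertation's pipeline, a neighborhood-dependent contrast enhancement step would follow to restore the mid-tone detail attenuated by the compression.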
In the transform domain approach, wavelet transform based image denoising and contrast enhancement algorithms are developed. Denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to explore the interlevel dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the complex iterative computations otherwise needed to find a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high-quality denoised images.
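Such an interlevel MAP shrinkage rule can be sketched in closed form; the version below follows the widely used bivariate shrinkage formula of Sendur and Selesnick, which is assumed here to be representative of the estimator described (the signal/noise standard deviations and the test coefficients are illustrative):

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_noise, sigma_signal):
    """Closed-form bivariate MAP shrinkage: each wavelet coefficient w is
    shrunk jointly with its parent coefficient at the next coarser scale,
    avoiding iterative numerical optimisation.
        w_hat = w * max(r - sqrt(3) * sigma_noise**2 / sigma_signal, 0) / r,
    with r = sqrt(w**2 + w_parent**2)."""
    r = np.sqrt(w ** 2 + w_parent ** 2)
    threshold = np.sqrt(3.0) * sigma_noise ** 2 / sigma_signal
    factor = np.maximum(r - threshold, 0.0) / np.maximum(r, 1e-12)
    return w * factor

# Small coefficients (likely noise) are zeroed; large ones survive, shrunk.
coeffs = np.array([0.1, 5.0])
parents = np.array([0.1, 5.0])
print(bivariate_shrink(coeffs, parents, sigma_noise=1.0, sigma_signal=1.0))
```

In practice the rule is applied per subband to the DT-CWT coefficients, with the noise variance estimated from the finest subband and the signal variance estimated locally.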
Pattern Recognition
A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.