Identification of Address Blocks Through Multiple Illuminations, Multiple Images and Multicolor Images
We propose a methodology to identify address labels on flat mail pieces in cases where the surface characteristics of the address label differ from those of the remaining surface. Two methods are used: tracking the movement of highlights under differing illumination, and identifying highlights using color information. Reasoning about changes in hue, saturation and intensity, using a new representation of color, allows us to discriminate between matte and glossy surfaces. On grayscale images, we separate diffuse reflectance areas from glossy reflectance areas using highlights that change with illumination angle. If that does not disambiguate the label, we use color, reasoning that highlights decrease the saturation value and increase the intensity.
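The colour reasoning in the last sentence can be sketched as a simple per-pixel test: convert to HSV and flag pixels whose saturation is low while intensity is high. The thresholds and function name below are illustrative assumptions, not values from the paper:

```python
import colorsys

def highlight_mask(pixels, sat_thresh=0.25, val_thresh=0.85):
    """Flag pixels that look like specular highlights: low saturation
    combined with high intensity, following the abstract's reasoning.
    `pixels` is a list of (r, g, b) tuples with channels in [0, 1].
    Thresholds are illustrative, not taken from the paper."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        mask.append(s < sat_thresh and v > val_thresh)
    return mask

# A near-white glossy highlight vs. a saturated matte red:
print(highlight_mask([(0.95, 0.93, 0.90), (0.8, 0.1, 0.1)]))  # → [True, False]
```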
Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation
We present a computational model and algorithm for detecting diffuse and specular interface reflections and some inter-reflections. Our color reflection model is based on the dichromatic model for dielectric materials and on a color space, called S space, formed with three orthogonal basis functions. We transform color pixels measured in RGB into the S space and analyze color variations on objects in terms of brightness, hue and saturation, which are defined in the S space. When transforming the original RGB data into the S space, we discount the scene illumination color, which is estimated using a white reference plate as an active probe. As a result, the color image appears as if the scene illumination were white. Under the whitened illumination, the interface reflection clusters in the S space are all aligned with the brightness direction. The brightness, hue and saturation values exhibit a more direct correspondence to body colors and to diffuse and specular interface reflections, shading, shadows and inter-reflections than the RGB coordinates. We exploit these relationships to segment the color image, and to separate specular and diffuse interface reflections and some inter-reflections from body reflections. The proposed algorithm is effective for uniformly colored dielectric surfaces under singly colored scene illumination. Experimental results conform to our model and algorithm within the limitations discussed.
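The illumination-discounting step described above can be sketched as a channel-wise division by an illuminant estimated from the white reference plate. This is a minimal von Kries-style scaling sketch; the paper's actual S-space transform is omitted, and the function name and data layout are assumptions:

```python
def discount_illumination(pixels, white_patch):
    """Sketch of illumination discounting: estimate the scene illuminant
    colour from pixels of a white reference plate and divide it out,
    so the image appears as if lit by white light. Pixels and the
    patch entries are (r, g, b) tuples in [0, 1]. The paper's S-space
    transform itself is not reproduced here."""
    # Illuminant estimate: mean colour of the white-plate pixels.
    n = len(white_patch)
    illum = tuple(sum(p[c] for p in white_patch) / n for c in range(3))
    # Divide out the illuminant channel-wise (von Kries-style scaling).
    return [tuple(min(p[c] / illum[c], 1.0) for c in range(3)) for p in pixels]

# Under a reddish illuminant the white plate reads (0.8, 0.5, 0.4);
# discounting maps plate-coloured pixels back towards neutral grey.
print(discount_illumination([(0.4, 0.25, 0.2)], [(0.8, 0.5, 0.4)]))  # → [(0.5, 0.5, 0.5)]
```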
Environmentally robust multiple camera tracking
A significant growth in the use of surveillance cameras has arisen from both the availability of low-cost home security systems and post-September 11th security measures. With such a plethora of surveillance cameras available and already in use, accurately tracking a person or object from one field of view to another is a challenging proposition: it requires recognising the same person at different spatial locations, under different lighting conditions, and at different scales and orientations. In order to address these challenges and provide a solution, a review of recent and past literature is provided.
The main theme of this research is investigating methods to improve the tracking of objects and people in dynamic environments, and applying computational techniques to optimise such tracking systems. Image processing techniques are explored and refactored to suit currently available single-board computing power. Optimisation methods for computational speed are investigated, presenting the paradigm of parallel programming during the design of computationally intense algorithms. The research also addresses cross-platform software/server application design.
In controlled environments current tracking systems perform well; this project, however, explores methods to take multiple-camera tracking to a higher level, where a system can, in real time, robustly cope with rapid changes in lighting; track objects between indoor and outdoor scenarios at any time of day and in any weather conditions; handle severe image occlusion and rapid changes in the direction, orientation and velocity of the tracked object; and remain invariant to image clutter and noise. The outputs are thus twofold: track a human or object across multiple cameras, and ensure the algorithm is fast enough to run in real time on a modern processor.
This research explores algorithms to deliver colour illumination invariance, also known as colour constancy. Colour illumination invariance can be applied as a pre-processing step to all cameras in a multi-camera environment. The research also investigates experimental assessment of multi-camera performance, focusing mainly on robustness to environmental changes.
There are three main objectives for a tracking algorithm being used in the proposed system. Firstly, the tracking algorithm must accurately detect objects independently of their scale change and rotation. Secondly, the tracking algorithm must accurately detect objects across multiple cameras in different lighting conditions. Thirdly, the tracking algorithm must attain a high level of colour constancy; this last objective can be implemented as a pre-processing step to the tracking algorithm. This research explores the use of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) algorithms. These algorithms are discussed in detail in the literature review, as are methods for providing colour illumination invariance.
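One standard way to realise the colour-constancy pre-processing step mentioned above is the grey-world assumption: the average scene colour is taken to be achromatic, and each channel is rescaled so that the channel means are equal. This is a minimal sketch of that general technique, not the thesis's actual method:

```python
def grey_world(pixels):
    """Grey-world colour constancy: assume the average scene colour is
    achromatic and scale each channel so the per-channel means become
    equal. A generic sketch of a colour-constancy pre-processing step;
    the thesis's exact algorithm may differ.
    `pixels` is a list of (r, g, b) tuples with channels in [0, 1]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    # Scale each channel towards the common grey level, clipping at 1.
    return [tuple(min(p[c] * grey / means[c], 1.0) for c in range(3))
            for p in pixels]
```

Applied to every camera's frames, a correction like this pulls differently lit views towards a common colour balance before feature matching.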
Color in computer vision
The use of colour in computer vision has received growing attention. This paper gives the state of the art in this subfield and tries to answer the questions: What is colour? Which are the adequate representations? How do we compute it? What can be done using it? Towards that goal, we make a deep and up-to-date review of the existing literature on this subject, we outline the important research directions and issues, and we attempt to evaluate them.
Colour object search
The visual search process is required when locating an object in some region of space. To perform this search, two capabilities must be available: the ability to recognise the object when it comes into view, and a way of selecting these views. Visual search is often complicated by object occlusion and low spatial resolutions of the object. Although the human visual system performs this task effortlessly, its mechanisms are not properly understood; object colour and geometry, however, do play an important role. This thesis develops an object search methodology which assumes that a computer vision system captures both wide-angle and zoomed images of the scene containing the object. Since most prior research has focused on object recognition using geometry, this system is purely colour-based. It is not expected that object colour will always give a definitive solution; however, database pruning will often occur, leading to reduced search times.
The thesis argues that because colour is salient and more resilient than geometry to decreases in spatial resolution, it is more appropriate for visual search when the object occupies only a small region of an image with a large field of view. It also demonstrates that colour can be used to recognise objects when they occupy most of the field of view, as well as to discriminate between database models with similar colour proportions but different region topologies. These conclusions are supported by the results produced by three algorithms, two of which perform colour object search and one of which performs colour object recognition.
The first object search algorithm uses image locations containing salient object colours as a method of selecting views; each of these views is then ranked to indicate which view most likely contains the object. The second object search algorithm identifies image regions with colour and topology similar to the object's, producing its results in best-first order. The object recognition algorithm uses an invariant based on region area to identify three corresponding model and image regions; a transformation is then calculated to bring the model and object into the same viewpoint, where region matches are based on position and colour.
Each of these methods produced good results in complex indoor scenes with fluorescent and/or tungsten filament lighting; the search speeds were also impressive.
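The view-ranking idea in the first search algorithm can be sketched with quantised colour histograms and a histogram-intersection score. Everything below, including the bin count, the score and the function names, is an illustrative assumption rather than the thesis's actual implementation:

```python
from collections import Counter

def colour_histogram(pixels, bins=4):
    """Quantised RGB histogram of a list of (r, g, b) pixels in [0, 1]."""
    hist = Counter()
    for r, g, b in pixels:
        key = (min(int(r * bins), bins - 1),
               min(int(g * bins), bins - 1),
               min(int(b * bins), bins - 1))
        hist[key] += 1
    return hist

def match_score(model_hist, image_hist):
    """Histogram intersection normalised by model size: the fraction of
    the model's colour mass that is found in the candidate view."""
    total = sum(model_hist.values())
    overlap = sum(min(c, image_hist[k]) for k, c in model_hist.items())
    return overlap / total

def rank_views(model_pixels, views):
    """Rank candidate views (lists of pixels) by colour similarity,
    best-matching view first."""
    m = colour_histogram(model_pixels)
    scores = [(match_score(m, colour_histogram(v)), i)
              for i, v in enumerate(views)]
    return [i for s, i in sorted(scores, reverse=True)]

# A red model object ranks the red view above the blue one:
model = [(0.9, 0.1, 0.1)] * 4
views = [[(0.85, 0.1, 0.1)] * 4, [(0.1, 0.1, 0.9)] * 4]
print(rank_views(model, views))  # → [0, 1]
```

A coarse quantisation like this keeps the ranking tolerant of small colour shifts between the wide-angle and zoomed images, at the cost of discriminating power.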