Color Constancy Adjustment using Sub-blocks of the Image
A strong presence of source light in digital images degrades the performance of many image processing algorithms, such as video analytics, object tracking and image segmentation. This paper presents a color constancy adjustment technique that lessens the impact of large unvarying color areas of the image on the performance of existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks and uses the Average Absolute Difference (AAD) of each block's color components as a measure of whether the block has adequate color information to contribute to the color adjustment of the whole image. Experiments show that excluding the unvarying color areas of the image significantly improves the performance of existing statistics-based color constancy methods. Results on four benchmark image datasets validate that images produced by the proposed framework, applied with the Gray World, Max-RGB and Shades of Gray statistics-based methods, have significantly higher subjective and competitive objective color constancy compared with those of existing state-of-the-art methods.
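The block-selection step described above can be sketched as follows; the block size, the AAD threshold, and the Gray World glue code are illustrative assumptions, not values from the paper:

```python
import numpy as np

def aad_block_mask(img, block=32, thresh=2.0):
    """Select blocks whose per-channel Average Absolute Difference
    (mean |pixel - block mean|) exceeds `thresh` in every channel."""
    h, w, _ = img.shape
    selected = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y+block, x:x+block].astype(np.float64)
            aad = np.abs(blk - blk.mean(axis=(0, 1))).mean(axis=(0, 1))
            if np.all(aad > thresh):
                selected.append((y, x))
    return selected

def gray_world_gains(img, blocks, block=32):
    """Gray World scaling factors computed only from the selected blocks."""
    pix = np.concatenate([img[y:y+block, x:x+block].reshape(-1, 3)
                          for y, x in blocks]).astype(np.float64)
    means = pix.mean(axis=0)
    return means.mean() / means   # per-channel scaling factors
```

A flat half of an image yields zero AAD and is excluded, so only textured regions drive the estimated gains.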
Max-RGB based Colour Constancy using the Sub-blocks of the Image
Colour constancy refers to the task of revealing the true colour of an object despite the ambient presence of an intrinsic illuminant. The performance of most existing colour constancy algorithms deteriorates when the image contains a large patch of uniform colour. This paper presents a Max-RGB based colour constancy adjustment method that uses sub-blocks of the image to significantly reduce the effect of large uniform colour areas of the scene on the colour constancy adjustment of the image. The proposed method divides the input image into a number of non-overlapping blocks and computes the Average Absolute Difference (AAD) of each block's colour components. Blocks whose AADs exceed threshold values are considered to have sufficient colour variation to be used for colour constancy adjustment. The Max-RGB algorithm is then applied to the selected blocks' pixels to calculate colour constancy scaling factors for the whole image. Evaluations on images of three benchmark datasets show that the proposed method outperforms state-of-the-art techniques in the presence of large uniform colour patches.
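A minimal sketch of the sub-block Max-RGB procedure, assuming an illustrative block size and AAD threshold (the paper's actual threshold values are not given in the abstract):

```python
import numpy as np

def max_rgb_on_blocks(img, block=32, thresh=2.0):
    """Max-RGB colour constancy using only the pixels of blocks whose
    per-channel AAD exceeds `thresh` (placeholder threshold)."""
    h, w, _ = img.shape
    pix = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y+block, x:x+block].astype(np.float64)
            aad = np.abs(blk - blk.mean(axis=(0, 1))).mean(axis=(0, 1))
            if np.all(aad > thresh):
                pix.append(blk.reshape(-1, 3))
    # Fall back to the whole image if no block qualifies
    pix = np.concatenate(pix) if pix else img.reshape(-1, 3).astype(np.float64)
    illum = pix.max(axis=0)        # Max-RGB illuminant estimate
    gains = illum.max() / illum    # equalise each channel's maximum
    return np.clip(img * gains, 0, 255).round().astype(np.uint8)
```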
Template matching with white balance adjustment under multiple illuminants
In this paper, we propose a novel template matching method with a white-balance adjustment, called N-white balancing, which was proposed for multi-illuminant scenes. To reduce the influence of lighting effects, N-white balancing is applied to images for multi-illuminant color constancy, and template matching is then carried out on the adjusted images. In experiments, the proposed method is demonstrated to be effective in object detection tasks under various illumination conditions.
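Since the abstract does not detail N-white balancing, the pipeline can be illustrated with a simple stand-in: a single-illuminant gray-world white balance followed by brute-force normalized cross-correlation template matching. All names and parameters below are hypothetical, not the paper's method:

```python
import numpy as np

def gray_world_wb(img):
    """Single-illuminant gray-world white balance (a stand-in for the
    paper's N-white balancing, which handles multiple illuminants)."""
    img = img.astype(np.float64)
    means = img.mean(axis=(0, 1))
    return img * (means.mean() / means)

def match_template(img, tmpl):
    """Brute-force normalized cross-correlation on grayscale;
    returns the (y, x) of the best-matching window."""
    gray = img.mean(axis=2)
    t = tmpl.mean(axis=2)
    t = (t - t.mean()) / (t.std() + 1e-9)
    th, tw = t.shape
    best, best_yx = -np.inf, (0, 0)
    for y in range(gray.shape[0] - th + 1):
        for x in range(gray.shape[1] - tw + 1):
            win = gray[y:y+th, x:x+tw]
            w = (win - win.mean()) / (win.std() + 1e-9)
            score = (w * t).mean()
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx
```

White-balancing first removes most of the illuminant cast, so a template taken under neutral lighting can be matched against a scene captured under a different illuminant.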
Spatial consequences of bridging the saccadic gap
We report six experiments suggesting that conscious perception is actively redrafted to take account of events both before and after the event that is reported. When observers saccade to a stationary object they overestimate its duration, as if the brain were filling in the saccadic gap with the post-saccadic image. We first demonstrate that this illusion holds for moving objects, implying that the perception of time, velocity, and distance traveled become discrepant. We then show that this discrepancy is partially resolved up to 500 ms after a saccade: the perceived offset position of a post-saccadic moving stimulus shows a greater forward mislocalization when pursued after a saccade than during pursuit alone. These data are consistent with the idea that the temporal bias is resolved by the subsequent spatial adjustment to provide a percept that is coherent in its gist but inconsistent in its detail.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
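The event encoding described above (time, location, and sign of a log-brightness change) can be illustrated with a toy frame-difference simulator. A real event camera fires asynchronously per pixel rather than comparing frame pairs, and the contrast threshold C below is an assumed value:

```python
import numpy as np

def frames_to_events(prev, curr, t, C=0.2):
    """Toy event generation: emit (t, x, y, polarity) wherever the
    log-brightness change between two frames exceeds the contrast
    threshold C. Polarity is +1 for brightening, -1 for darkening."""
    d = (np.log(curr.astype(np.float64) + 1)
         - np.log(prev.astype(np.float64) + 1))
    ys, xs = np.nonzero(np.abs(d) >= C)
    return [(t, int(x), int(y), 1 if d[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]
```

Pixels whose brightness does not change produce no output at all, which is what gives event streams their sparsity and low latency.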
Color Constancy for Uniform and Non-uniform Illuminant Using Image Texture
Color constancy is the capability to observe the true color of a scene from its image regardless of the scene's illuminant. It is a significant part of the digital image processing pipeline and is utilized when the true color of an object is required. Most existing color constancy methods assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performance is influenced by the presence of multiple light sources. This paper presents a color constancy adjustment technique that uses the texture of the image pixels to select pixels with sufficient color variation to be used for image color correction. The proposed technique applies a histogram-based algorithm to determine the appropriate number of segments to efficiently split the image into its key color variation areas. The K-means++ algorithm is then used to divide the input image into the pre-determined number of segments. The proposed algorithm identifies pixels with sufficient color variation in each segment using the entropies of the pixels, which represent the segment's texture. Then, the algorithm calculates the initial color constancy adjustment factors for each segment by applying an existing statistics-based color constancy algorithm on the selected pixels. Finally, the proposed method computes color adjustment factors per pixel within the image by fusing the initial color adjustment factors of all segments, which are regulated by the Euclidean distances of each pixel from the centers of gravity of the segments. Experimental results on benchmark single- and multiple-illuminant image datasets show that the images that are obtained using the proposed algorithm have significantly higher subjective and very competitive objective qualities compared to those that are obtained with the state-of-the-art techniques.
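As a rough single-segment simplification of the pipeline above (skipping the histogram-based segment-count selection, the K-means++ clustering, and the distance-weighted fusion), entropy-based pixel selection followed by Gray World can be sketched as follows; the window size, bin count, and selection fraction are illustrative assumptions:

```python
import numpy as np

def local_entropy(gray, k=3):
    """Shannon entropy of the intensity histogram in a (2k+1)x(2k+1)
    neighbourhood around each pixel (a simple texture measure)."""
    h, w = gray.shape
    ent = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = gray[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
            p = np.bincount(win.ravel() // 16, minlength=16) / win.size
            p = p[p > 0]
            ent[y, x] = -(p * np.log2(p)).sum()
    return ent

def gray_world_on_textured(img, frac=0.5):
    """Gray World gains computed only from the most textured pixels
    (top `frac` by local entropy); a single-segment simplification of
    the paper's per-segment procedure."""
    gray = img.mean(axis=2).astype(np.int64)
    ent = local_entropy(gray)
    mask = ent >= np.quantile(ent, 1 - frac)
    means = img[mask].reshape(-1, 3).mean(axis=0)
    return means.mean() / means
```

Flat regions have zero local entropy, so they are excluded from the illuminant estimate just as the low-AAD blocks are in the block-based methods above.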