
    Scene Context Dependency of Pattern Constancy of Time Series Imagery

    A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography, and in particular the scale and extent of the topography in an image, affects the pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.
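
    The abstract does not spell out the RVS algorithm itself, but its retinex front-end is built on center/surround processing. Below is a minimal single-scale retinex sketch in Python as a hedged stand-in, not the authors' RVS pipeline; the surround scale `sigma` and the output normalization are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    """Minimal single-scale retinex: log(image) - log(Gaussian surround).

    `image` is a 2-D float array (one channel); `sigma` sets the surround
    scale. Both values here are illustrative, not the paper's parameters.
    """
    img = image.astype(np.float64) + eps
    surround = gaussian_filter(img, sigma=sigma)
    r = np.log(img) - np.log(surround)
    # Rescale to [0, 1] so outputs from different exposures and lighting
    # are directly comparable, the property pattern-constancy tests probe.
    return (r - r.min()) / (r.max() - r.min() + eps)
```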

    “Colonial Problems, Transnational American Studies”

    Excerpt from After American Studies: Rethinking Legacies of Transnational Exceptionalism.

    Wavelet-Based Enhancement Technique for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of color digital images based on the wavelet transform domain are investigated in this dissertation research. In this research, a novel, fast, and robust wavelet-based dynamic range compression and local contrast enhancement (WDRC) algorithm has been developed to improve the visibility of digital images captured under non-uniform lighting conditions. A wavelet transform is mainly used for dimensionality reduction, such that the dynamic range compression with local contrast enhancement algorithm is applied only to the approximation coefficients, which are obtained by low-pass filtering and down-sampling the original intensity image. The normalized approximation coefficients are transformed using a hyperbolic sine curve, and the contrast enhancement is realized by tuning the magnitude of each coefficient with respect to its surrounding coefficients. The transformed coefficients are then de-normalized to their original range. The detail coefficients are also modified to prevent edge deformation. The inverse wavelet transform is carried out, resulting in a lower dynamic range and contrast-enhanced intensity image. A color restoration process based on the relationship between the spectral bands and the luminance of the original image is applied to convert the enhanced intensity image back to a color image.

    Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some pathological scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback, so a different approach is required for tackling the color constancy problem. The illuminant is modeled as imposing a linear shift on the image histogram, and the histogram is adjusted to discount the illuminant. The WDRC algorithm is then applied with a slight modification, i.e., instead of a linear color restoration, a non-linear color restoration process employing the spectral context relationships of the original image is applied. The proposed technique solves the color constancy issue, and the overall enhancement algorithm provides attractive results, improving visibility even for scenes with near-zero visibility conditions.

    In this research, a new wavelet-based image interpolation technique that can be used for improving the visibility of tiny features in an image is also presented. In wavelet domain interpolation techniques, the input image is usually treated as the low-pass filtered subbands of an unknown wavelet-transformed high-resolution (HR) image, and the unknown HR image is produced by estimating the wavelet coefficients of the high-pass filtered subbands. The same approach is used here to obtain an initial estimate of the HR image by zero-filling the high-pass filtered subbands. Detail coefficients are then estimated by feeding this initial estimate to an undecimated wavelet transform (UWT). Taking an inverse transform after replacing the approximation coefficients of the UWT with the initially estimated HR image results in the final interpolated image. Experimental results of the proposed algorithms demonstrated their superiority over state-of-the-art enhancement and interpolation techniques.
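
    As a concrete illustration of the WDRC pipeline described above, here is a minimal Python sketch using PyWavelets. The transfer-curve constants, the surround scale, and the detail-band gain rule are illustrative assumptions, not the dissertation's actual parameters.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wdrc(intensity, wavelet="db4", c=4.0, sigma=10.0, eps=1e-6):
    """Sketch of wavelet-based dynamic range compression (WDRC):
    one-level 2-D DWT, sinh-curve mapping of the normalized approximation
    band, surround-relative contrast tuning, de-normalization, and an
    inverse DWT with matched detail-band gains."""
    cA, (cH, cV, cD) = pywt.dwt2(intensity.astype(np.float64), wavelet)

    # Normalize the approximation coefficients to [0, 1].
    lo, hi = cA.min(), cA.max()
    a = (cA - lo) / (hi - lo + eps)

    # Hyperbolic-sine transfer curve (stand-in constant c).
    a = 0.5 * (np.sinh(c * (a - 0.5)) / np.sinh(c / 2) + 1.0)

    # Contrast enhancement: tune each coefficient against its
    # Gaussian-weighted surround.
    surround = gaussian_filter(a, sigma=sigma)
    a = a * (a / (surround + eps)) ** 0.3

    # De-normalize, and scale the detail bands by the same local gain so
    # edges are not deformed relative to the modified approximation band.
    cA2 = np.clip(a, 0.0, 1.0) * (hi - lo) + lo
    gain = (cA2 + eps) / (cA + eps)
    return pywt.idwt2((cA2, (cH * gain, cV * gain, cD * gain)), wavelet)
```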

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was seen between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    John Ford's Theatre of Ceremony: A Formal Study of His Five Major Plays


    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial domain, wavelet transform domain, and multiple image fusion approaches are investigated in this dissertation research. In the category of spatial domain approaches, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprised of adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high-speed processing capability, while AINDANE produces higher-quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.

    In the transform domain approach, wavelet transform based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to exploit the inter-level dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the complex iterative computations needed to find a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high-quality denoised images.
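
    To make the I-R step concrete, the sketch below decomposes an intensity image into illuminance and reflectance, compresses the illuminance through a windowed inverse sigmoid (logit), and recombines the two. The Gaussian illuminance estimate, the window bounds, and all constants are illustrative assumptions, not the dissertation's exact design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_sigmoid_compress(lum, vmin=-4.0, vmax=4.0):
    """Map normalized illuminance through an inverse sigmoid (logit)
    restricted to the window [vmin, vmax], then rescale back to [0, 1].
    The window bounds are illustrative placeholders."""
    x = np.clip(lum, 1e-6, 1 - 1e-6)
    y = np.log(x / (1 - x))              # inverse sigmoid (logit)
    y = np.clip(y, vmin, vmax)           # windowing
    return (y - vmin) / (vmax - vmin)

def enhance_ir(intensity, sigma=30.0, eps=1e-6):
    """Illuminance-reflectance sketch: estimate illuminance with a Gaussian
    low-pass, compress its dynamic range, and recombine with the extracted
    reflectance. Input is an 8-bit single-channel image."""
    img = intensity.astype(np.float64) / 255.0
    illum = gaussian_filter(img, sigma=sigma) + eps   # illuminance estimate
    refl = img / illum                                # reflectance estimate
    return np.clip(inverse_sigmoid_compress(illum) * refl, 0.0, 1.0)
```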

    Humanistic Computing: WearComp as a New Framework and Application for Intelligent Signal Processing

    Humanistic computing is proposed as a new signal processing framework in which the processing apparatus is inextricably intertwined with the natural capabilities of our human body and mind. Rather than trying to emulate human intelligence, humanistic computing recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications (within the domain of personal technologies) that can make use of this excellent but often overlooked processor. The emphasis of this paper is on personal imaging applications of humanistic computing, to take a first step toward an intelligent wearable camera system that can allow us to effortlessly capture our day-to-day experiences, help us remember and see better, provide us with personal safety through crime reduction, and facilitate new forms of communication through collective connected humanistic computing. The author's wearable signal processing hardware, which began as a cumbersome backpack-based photographic apparatus in the 1970s and evolved into a clothing-based apparatus in the early 1980s, currently provides the computational power of a UNIX workstation concealed within ordinary-looking eyeglasses and clothing. Thus it may be worn continuously during all facets of ordinary day-to-day living, so that, through long-term adaptation, it begins to function as a true extension of the mind and body.

    Ridge Regression Approach to Color Constancy

    This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the phenomenon of representing (visualizing) the reflectance properties of a scene independent of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively: because color is used as an important feature in many computer vision applications, consistency of the color features is essential for application success. The second motivation is to reduce computational complexity without sacrificing the primary motivation. This work presents a machine learning approach to color constancy, in which an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning theories. The work on support vector machine based color constancy shows its superior performance over neural network based color constancy in terms of stability, but the support vector machine is a time-consuming method. An alternative to the support vector machine is a simple, fast, and analytically solvable linear modeling technique known as ridge regression, which learns the dependency between the surface reflectance and the illumination from a presented training sample of data. Ridge regression thus answers both parts of the twofold motivation: it is a stable and computationally simple approach. The proposed algorithms, support vector machine and ridge regression, involve a three-step process: first, an input matrix constructed from the preprocessed training data set is trained to obtain a trained model; second, test images are presented to the trained model to obtain a chromaticity estimate of the illuminants present in the test images; finally, a linear diagonal transformation is performed to obtain the color-corrected image. The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison to the methods discussed in the literature review. The thesis concludes with a complete discussion and summary comparing the proposed approaches with other algorithms.
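
    The three-step process above maps directly onto a few lines of Python. The sketch below uses closed-form ridge regression, W = (XᵀX + λI)⁻¹XᵀY, followed by a diagonal (von Kries style) correction; the feature choice, the regularizer λ, and the helper names are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-3):
    """Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T Y.
    X: (n_images, n_features) training features, e.g. chromaticity
    histograms; Y: (n_images, 2) illuminant chromaticities.
    The feature choice and lam are illustrative assumptions."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

def diagonal_correct(rgb, illum):
    """Step three: linear diagonal (von Kries style) transformation that
    scales each channel by the estimated illuminant to remove the cast."""
    scale = illum.mean() / np.asarray(illum, dtype=np.float64)
    return np.clip(rgb * scale, 0.0, 1.0)

# Usage sketch (features() is a hypothetical preprocessing helper):
#   W = fit_ridge(X_train, Y_train)                 # step one: train
#   chroma = features(test_image) @ W               # step two: estimate (r, g)
#   out = diagonal_correct(test_image,
#                          np.append(chroma, 1 - chroma.sum()))  # step three
```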

    Estimating the subjective perception of object size and position through brain imaging and psychophysics

    Perception is subjective and context-dependent, and size and position perception are no exceptions. Studies have shown that apparent object size is represented by the retinotopic location of the peak response in V1. Such representation is likely supported by a combination of V1 architecture and top-down driven retinotopic reorganisation. Are apparent object size and position encoded via a common mechanism? Using functional magnetic resonance imaging and a model-based reconstruction technique, the first part of this thesis sets out to test whether retinotopic encoding of size percepts can be generalised to apparent position representation, and whether neural signatures could be used to predict an individual's perceptual experience. Here, I present evidence that static apparent position, induced by a dot-variant Müller-Lyer illusion, is represented retinotopically in V1. However, there is mixed evidence for retinotopic representation of motion-induced position shifts (e.g. the curveball illusion) in early visual areas. My findings could be reconciled by assuming dual representation of veridical and percept-based information in early visual areas, which is consistent with the larger framework of predictive coding. The second part of the thesis compares different psychophysical methods for measuring size perception in the Ebbinghaus illusion. Consistent with the idea that psychophysical methods are not equally susceptible to cognitive factors, my experiments reveal a consistent discrepancy in illusion magnitude estimates between a traditional two-alternative forced choice (2AFC) task and a novel perceptual matching (PM) task, a variant of the comparison-of-comparisons (CoC) task widely seen as the gold standard in psychophysics. Further investigation reveals that the difference was not driven by greater 2AFC susceptibility to cognitive factors, but by a tendency for PM to skew illusion magnitude estimates towards the underlying stimulus distribution. I show that this dependency can be largely corrected using adaptive stimulus sampling.
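
    As background for how illusion magnitude is typically extracted from a 2AFC size task, the sketch below fits a cumulative-Gaussian psychometric function to choice proportions and reads off the shift of the point of subjective equality (PSE). It is a generic textbook procedure run on synthetic data, not the thesis's actual analysis; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Cumulative-Gaussian psychometric function for a 2AFC size judgement."""
    return norm.cdf(x, loc=pse, scale=sigma)

def illusion_magnitude(sizes, p_bigger, reference=1.0):
    """Fit the proportion of 'comparison looks bigger' responses and return
    the PSE shift from the physical reference size (the illusion magnitude)."""
    (pse, sigma), _ = curve_fit(psychometric, sizes, p_bigger,
                                p0=[np.mean(sizes), np.std(sizes)])
    return pse - reference

# Synthetic example: a PSE at 1.1x the reference, as an Ebbinghaus-style
# underestimation of the surrounded target would produce.
sizes = np.linspace(0.8, 1.4, 13)
p_bigger = norm.cdf(sizes, loc=1.1, scale=0.08)
print(round(illusion_magnitude(sizes, p_bigger), 3))   # -> 0.1
```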