3 research outputs found

    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic in applied optics, image sensors and image processing that aims to go beyond the limitations of traditional photography. Its innovations allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples include high dynamic range imaging, focus stacking, super-resolution and motion deblurring. Although extensive work has explored image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and high dynamic range (HDR) imaging to produce an image with both a greater depth of field (DOF) and a greater dynamic range than any of the input images. In this dissertation, I also investigate super-resolution image restoration from multiple images that may be degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately, and I place no restrictions on the motion fields among images; that is, I estimate a dense motion field without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to techniques for denoising or removing blur from a single captured image. My dissertation therefore also investigates space-varying point spread function (PSF) estimation and image deblurring for a single image.
    Regarding the PSF estimation, I place no restrictions on the type of blur or on how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even in regions where the correct PSF is initially unclear. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
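    As a rough illustration of the idea behind combining focus stacking with HDR, the sketch below fuses differently focused and differently exposed frames using per-pixel weights that favour sharp, well-exposed pixels. This is my own simplification, not the dissertation's algorithm; the function name `fuse_stack`, the Laplacian sharpness measure and the Gaussian well-exposedness weight are all assumptions.

```python
import numpy as np

def fuse_stack(images, exposures, eps=1e-6):
    """Fuse differently focused and exposed frames into a single image
    with extended depth of field and dynamic range (illustrative sketch).

    images:    list of float grayscale arrays, same shape, values in [0, 1]
    exposures: exposure time for each frame (linear sensor response assumed)
    """
    acc = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for img, t in zip(images, exposures):
        # Sharpness weight: local contrast via a wrap-around Laplacian
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharpness = np.abs(lap)
        # Well-exposedness weight: favour mid-range pixel values
        exposedness = np.exp(-((img - 0.5) ** 2) / 0.08)
        w = sharpness * exposedness + eps
        # Divide out exposure time to accumulate in a common radiance scale
        acc += w * (img / t)
        wsum += w
    return acc / wsum
```

    For uniform frames the sharpness weight vanishes and the fusion reduces to an exposure-compensated average; in textured regions the sharpest, best-exposed frame dominates each pixel.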

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A thorough review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin-lens model has several limitations for solving different focus-related problems in computer vision. To overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited to solve diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
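    A minimal numerical illustration of a per-pixel focus measure, in the spirit of depth-from-focus applications of the focus cue rather than the thesis's focus profile model itself: for each pixel, local contrast is evaluated across a focus stack and the index of the sharpest frame serves as a depth proxy. The name `focus_profile` and the Laplacian contrast measure are my own assumptions.

```python
import numpy as np

def focus_profile(stack):
    """For each pixel, return the index of the frame in the focus stack
    where local contrast peaks (a coarse depth proxy)."""
    measures = []
    for img in stack:
        # Wrap-around Laplacian as a simple local-contrast focus measure
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        measures.append(np.abs(lap))
    # Axis 0 indexes the stack, so argmax gives the best-focused frame per pixel
    return np.argmax(np.stack(measures), axis=0)
```

    Because each frame index corresponds to a known focus distance, the resulting index map can be converted into a coarse depth map once the camera's focus calibration is known.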

    Neuronal encoding of natural imagery in dragonfly motion pathways

    Vision is the primary sense of humans and most other animals. While the act of seeing seems easy, the neuronal architectures that underlie this ability are among the most complex in the brain. Insects are an excellent model for investigating how vision operates, as they often lead rich visual lives while possessing relatively simple brains. Among insects, aerial predators such as the dragonfly face additional survival tasks: not only must they successfully navigate three-dimensional visual environments, they must also identify and track their prey. This task is made harder still by the complexity of visual scenes, which contain detail at all scales of magnification, making the job of the predator particularly challenging. Here I investigate the physiology of neurons accessible through tracts in the third neuropil of the optic lobe of the dragonfly. It is at this stage of processing that the first evidence of both wide-field motion and object detection emerges. My research extends the current understanding of two main pathways in the dragonfly visual system: the wide-field motion pathway and the target-tracking pathway. While wide-field motion pathways have been studied in numerous insects, the dragonfly wide-field motion pathway had until now remained unstudied. Investigation of this pathway revealed a property novel among insects: purely optical adaptation to motion at both high and low velocities. Here I characterise these newly described neurons and investigate their adaptation properties. The dragonfly target-tracking pathway has been studied extensively, but most research has focussed on classical stimuli such as gratings and small black objects moving on white monitors.
    Here I extend previous research, which characterised the behaviour of target-tracking neurons in cluttered environments, by developing a paradigm that allows numerous properties of targets to be changed while still measuring tracking performance. I show that dragonfly neurons interact with clutter through the previously discovered selective attention system, treating cluttered scenes as collections of target-like features. I further show that this system uses the direction and speed of the target and background as key parameters for tracking success. I also elucidate some additional properties of selective attention, including the capacity to select inhibitory targets or weakly salient features in preference to strongly excitatory ones. In collaboration with colleagues, I have also performed some limited modelling to demonstrate that a selective attention model that includes switching best explains the experimental data. Finally, I explore a mathematical model called divisive normalisation, which may partially explain how neurons with large receptive fields can re-establish target position information (lost in a position-invariant system) through relatively simple integrations of multiple large-receptive-field neurons. In summary, my thesis provides a broad investigation into several questions about how dragonflies function in natural environments. More broadly, it addresses general questions about vision and how complicated visual tasks can be solved via clever strategies employed in neuronal systems and their modelled equivalents. Thesis (Ph.D.) -- University of Adelaide, Adelaide Medical School, 201
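    Divisive normalisation itself is simple to state: each neuron's response is divided by the pooled activity of the population, so relative drive is preserved while absolute gain is controlled. The sketch below is a generic textbook form of the operation, not the thesis's model; the function names, the receptive-field centres and the weighting scheme are all assumptions used only to show how normalised responses of large-receptive-field neurons could be combined into a position estimate.

```python
import numpy as np

def divisive_normalisation(responses, sigma=0.1):
    """Divide each neuron's response by pooled population activity.
    sigma is a semi-saturation constant that prevents division by zero."""
    responses = np.asarray(responses, dtype=float)
    return responses / (sigma + responses.sum())

def estimate_position(centres, responses, sigma=0.1):
    """Recover a target position as the normalised-response-weighted
    average of each neuron's receptive-field centre."""
    w = divisive_normalisation(responses, sigma)
    return (w * np.asarray(centres, dtype=float)).sum() / w.sum()
```

    With receptive-field centres at 0, 10 and 20 degrees and responses of 1, 4 and 1, the weighted average lands at 10 degrees: the population read-out localises the target even though no single broad-field neuron encodes position on its own.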